Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor
With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...
What are single photons good for?
NASA Astrophysics Data System (ADS)
Sangouard, Nicolas; Zbinden, Hugo
2012-10-01
According to a long-held preconception, photons play a central role in present-day quantum technologies. But what, precisely, are sources that produce photons one by one good for? Contrary to what many suggest, we show that single-photon sources are not helpful for point-to-point quantum key distribution, because faint laser pulses do the job comfortably. However, there is no doubt about the usefulness of sources producing single photons for future quantum technologies. In particular, we show how single-photon sources could become the seed of a revolution in the framework of quantum communication, making the security of quantum key distribution device-independent or extending quantum communication over many hundreds of kilometers. Hopefully, these promising applications will provide a guideline for researchers to develop ever more efficient sources, producing narrowband, pure and indistinguishable photons at appropriate wavelengths.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
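The AIC-based model selection described above can be sketched in a few lines (a minimal illustration assuming a least-squares misfit; the parameter counts k_single and k_double are placeholder values, not the actual W-phase inversion dimensions):

```python
import math

def aic(n_params, n_obs, rss):
    """Akaike information criterion for a least-squares fit
    (Gaussian-error form: 2k + n*ln(RSS/n))."""
    return 2 * n_params + n_obs * math.log(rss / n_obs)

def select_model(n_obs, rss_single, rss_double, k_single=6, k_double=12):
    """Prefer the double-source model only if its AIC is lower, i.e. the
    improvement in fit justifies the extra free parameters."""
    aic1 = aic(k_single, n_obs, rss_single)
    aic2 = aic(k_double, n_obs, rss_double)
    return ("double", aic2) if aic2 < aic1 else ("single", aic1)
```

A marginal misfit reduction keeps the single-source solution; a substantial one selects the double source.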
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
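The pairwise angular-separation ingredient can be illustrated with a minimal sketch (the coordinates, the close-pair statistic, and the threshold are illustrative only; the actual method builds a full likelihood that folds in the point-spread function):

```python
import numpy as np

def unit_vectors(ra, dec):
    """Convert RA/Dec (radians) to unit vectors on the sphere."""
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def pairwise_separations(ra, dec):
    """Angular separation (radians) for every unordered pair of events."""
    v = unit_vectors(np.asarray(ra, float), np.asarray(dec, float))
    cosang = np.clip(v @ v.T, -1.0, 1.0)
    iu = np.triu_indices(len(ra), k=1)
    return np.arccos(cosang)[iu]

def close_pair_fraction(ra, dec, sigma):
    """Toy clustering statistic: fraction of pairs separated by less
    than the angular resolution scale sigma."""
    return float(np.mean(pairwise_separations(ra, dec) < sigma))
```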
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
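Stochastic event detection near a point source can be sketched as follows (assuming a steady-state diffusive concentration profile c(r) = Q/(4*pi*D*r) and Poisson-distributed detection counts; the parameter values are illustrative, not those inferred in the paper):

```python
import math
import random

def mean_detections(r, Q=1e6, D=100.0, kappa=1.0, dt=0.1):
    """Expected CA detection count in a time bin dt at distance r from a
    steady point source, using c(r) = Q / (4*pi*D*r)."""
    c = Q / (4.0 * math.pi * D * r)
    return kappa * c * dt

def sample_detections(r, rng, **kw):
    """Poisson-sampled detection events: the noisy signal the bacterium
    actually senses, with large relative fluctuations far from the source."""
    lam = mean_detections(r, **kw)
    # Knuth's inversion sampler (adequate for modest lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1
```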
Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of core noise from turbofan engines has become more important as noise from other sources like the fan and jet have been reduced. A multiple microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto and cross spectrum in the frequency range below 1000 Hz is fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
Origin of acoustic emission produced during single point machining
NASA Astrophysics Data System (ADS)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-05-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.
INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE
INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
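The per-puff contribution described above has the standard Gaussian form; a minimal sketch (not the actual INPUFF implementation):

```python
import math

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Contribution of one Gaussian puff of mass q to the concentration at
    a receptor offset (dx, dy, dz) from the puff centre, with dispersion
    parameters (sx, sy, sz)."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    arg = -0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2)
    return norm * math.exp(arg)

def total_concentration(puffs, receptor):
    """Concentration at a receptor = sum over all puffs each time step.
    Each puff is a tuple (q, px, py, pz, sx, sy, sz)."""
    x, y, z = receptor
    return sum(puff_concentration(q, x - px, y - py, z - pz, sx, sy, sz)
               for (q, px, py, pz, sx, sy, sz) in puffs)
```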
A deeper look at the X-ray point source population of NGC 4472
NASA Astrophysics Data System (ADS)
Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.
2017-10-01
In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X(0.5-2 keV) = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.
This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
NASA Astrophysics Data System (ADS)
Neba, Yasuhiko
This paper deals with maximum power point tracking (MPPT) control of photovoltaic generation with a single-phase utility-interactive inverter. The photovoltaic arrays are connected to the utility through a PWM current source inverter. The use of pulsating dc current and voltage allows the maximum power point to be searched. The inverter can regulate the array voltage and hold the arrays at the maximum power point. This paper gives the control method and the experimental results.
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made with single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources, but we apply the same methodology to the Astronomical Thermal Emission Camera/ASTE survey of the Great Observatories Origins Deep Survey-South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving the signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method that compares model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
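The single-versus-binary PSF comparison can be illustrated with a 1-D toy model (a Gaussian stand-in for a real PSF and an equal-amplitude binary, both simplifying assumptions; the actual method uses model PSFs of the instrument):

```python
import numpy as np

def psf(x, center, sigma=1.0):
    """Toy 1-D Gaussian stand-in for a model point-spread function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def best_rss(data, model):
    """Residual sum of squares after fitting the amplitude analytically."""
    amp = (model @ data) / (model @ model)
    return float(np.sum((data - amp * model) ** 2))

def single_vs_binary(data, x, centers):
    """Grid search: best single-PSF RSS vs best equal-amplitude pair RSS.
    The model with the smaller residual is the better description."""
    rss1 = min(best_rss(data, psf(x, c)) for c in centers)
    rss2 = min(best_rss(data, psf(x, c1) + psf(x, c2))
               for c1 in centers for c2 in centers if c2 > c1)
    return rss1, rss2
```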
Non-fluorescent nanoscopic monitoring of a single trapped nanoparticle via nonlinear point sources.
Yoon, Seung Ju; Lee, Jungmin; Han, Sangyoon; Kim, Chang-Kyu; Ahn, Chi Won; Kim, Myung-Ki; Lee, Yong-Hee
2018-06-07
Detection of single nanoparticles or molecules has often relied on fluorescent schemes. However, fluorescence detection approaches limit the range of investigable nanoparticles or molecules. Here, we propose and demonstrate a non-fluorescent nanoscopic trapping and monitoring platform that can trap a single sub-5-nm particle and monitor it with a pair of floating nonlinear point sources. The resonant photon funnelling into an extremely small volume of ~5 × 5 × 7 nm^3 through the three-dimensionally tapered 5-nm-gap plasmonic nanoantenna enables the trapping of a 4-nm CdSe/ZnS quantum dot with a low-intensity 1560-nm continuous-wave laser, and the pumping of 1560-nm femtosecond laser pulses creates strong background-free second-harmonic point illumination sources at the two vertices of the nanoantenna. Under stable trapping conditions, intermittent but intense nonlinear optical spikes are observed on top of the second-harmonic signal plateau, which is identified as the 3.0-Hz Kramers hopping of the quantum dot trapped in the 5-nm gap.
NASA Technical Reports Server (NTRS)
Allen, C. S.; Jaeger, S. M.
1999-01-01
The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more complicated method must be used.
Multi-point laser ignition device
McIntyre, Dustin L.; Woodruff, Steven D.
2017-01-17
A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
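The inverse step, fitting a single equivalent point dipole to potentials generated by a distributed source, can be sketched as follows (an unbounded homogeneous medium and a toy two-dipole source are assumed here; the study itself uses a bounded spherical medium and belt sources):

```python
import numpy as np

def dipole_potentials(electrodes, r0, p):
    """Potential of a point dipole p at r0 in an unbounded medium
    (conductivity folded into the units): V = p.(r-r0) / (4*pi*|r-r0|^3)."""
    d = electrodes - r0
    return (d @ p) / (4.0 * np.pi * np.linalg.norm(d, axis=1) ** 3)

def fit_dipole_at(electrodes, v_meas, r0):
    """Best-fit moment at a fixed location (linear least squares) and its
    sum of squared residuals (chi-squared with unit weights)."""
    d = electrodes - r0
    G = d / (4.0 * np.pi * np.linalg.norm(d, axis=1, keepdims=True) ** 3)
    p, *_ = np.linalg.lstsq(G, v_meas, rcond=None)
    return p, float(np.sum((v_meas - G @ p) ** 2))

def equivalent_dipole(electrodes, v_meas, candidates):
    """Grid search over candidate locations for the equivalent dipole."""
    return min(candidates,
               key=lambda r0: fit_dipole_at(electrodes, v_meas, r0)[1])
```

For a symmetric pair of dipoles the search recovers a location near their midpoint, while a genuinely distributed source leaves an irreducible residual, which is the systematic error discussed above.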
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in a high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
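The attractor construction can be sketched in a few lines (toy embeddings and ideal one-hot source masks are assumed; in the actual framework the embeddings are learned end-to-end by the network):

```python
import numpy as np

def attractors(emb, masks):
    """Attractor = centroid of the embeddings of the T-F bins dominated
    by each source. emb: (bins, D); masks: (bins, n_src), one-hot."""
    w = masks / np.maximum(masks.sum(axis=0, keepdims=True), 1e-8)
    return w.T @ emb                      # (n_src, D)

def assign(emb, att):
    """Soft mask from the similarity (dot product) of each bin to each
    attractor, normalised with a softmax across sources."""
    logits = emb @ att.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```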
NASA Astrophysics Data System (ADS)
Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.
2017-12-01
Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data.
We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
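The flux estimate combining a column enhancement with wind data follows a simple mass-balance relation; a minimal sketch (an idealized single steady source with no influx, with the enhancement already converted to mass per unit area, is assumed):

```python
def point_source_flux(delta_column_kg_m2, plume_width_m, wind_speed_m_s):
    """Mass-balance estimate of a point-source emission rate:
    flux (kg/s) = column enhancement x crosswind plume width x wind speed.
    Assumes a single steady source, no influx, and a well-defined plume
    advected at the measured wind speed."""
    return delta_column_kg_m2 * plume_width_m * wind_speed_m_s
```

The linear dependence on wind speed is why uncertain wind velocities under weak-wind conditions dominate the error budget of such estimates.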
Growth and characterization of SrI2:Eu2+ single crystal for gamma ray detector applications
NASA Astrophysics Data System (ADS)
Raja, A.; Daniel, D. Joseph; Ramasamy, P.; Singh, S. G.; Sen, S.; Gadkari, S. C.
2018-04-01
Europium-activated strontium iodide single crystals were grown by the vertical Bridgman-Stockbarger technique. The melting point and freezing point of the SrI2:Eu2+ crystal were analyzed by TG/DTA. The radioluminescence emission was recorded. The scintillation measurement was carried out for the grown SrI2:Eu2+ crystal under a 137Cs gamma-ray source.
Single Crystal Diamond Needle as Point Electron Source.
Kleshch, Victor I; Purcell, Stephen T; Obraztsov, Alexander N
2016-10-12
Diamond has been considered one of the most attractive materials for cold-cathode applications during the past two decades. However, its real application is hampered by the necessity of providing an appropriate amount and transport of electrons to the emitter surface, which is usually achieved by using nanometer-size or highly defective crystallites having much lower physical characteristics than ideal diamond. Here, the use of a single crystal diamond emitter with a high aspect ratio as a point electron source is reported for the first time. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma-enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and sensitivity of emission to temperature. The analysis of the voltage drops measured via an electron energy analyzer shows that conduction proceeds along the surface of the diamond needles and is governed by the Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2-0.3 eV. The temperature-sensitive field emission characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics.
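The reported Poole-Frenkel conduction can be written down directly; a minimal sketch (the prefactor j0, the field range, and the diamond relative permittivity eps_r = 5.7 are assumptions for illustration, with the trap energy taken from the 0.2-0.3 eV range quoted above):

```python
import math

K_B = 8.617333262e-5        # Boltzmann constant, eV/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def poole_frenkel_current(E, T, phi_t=0.25, eps_r=5.7, j0=1.0):
    """Poole-Frenkel conduction (arbitrary prefactor j0):
    J = j0 * E * exp(-(phi_t - beta*sqrt(E)) / (kB*T)),
    with the barrier-lowering coefficient beta = sqrt(e/(pi*eps)) in
    eV/(V/m)^(1/2) and E the electric field in V/m."""
    beta = math.sqrt(E_CHARGE / (math.pi * eps_r * EPS0))
    return j0 * E * math.exp(-(phi_t - beta * math.sqrt(E)) / (K_B * T))
```

In the regime where the effective barrier stays positive, the current rises with both field and temperature, the qualitative signature used to identify the transport mechanism.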
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai
2017-10-01
The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured through experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for the rationality of treating the single corona source as a point source are then given on the basis of Ingard's theory for sound generation in corona discharge. Furthermore, we take into consideration the ground reflection and the air attenuation to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented in the paper.
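The 6 dB(A)-per-doubling behaviour is the free-field spherical-spreading law for a point source; a minimal sketch (ignoring the ground reflection and air attenuation that the full propagation model includes):

```python
import math

def spl_at(r, spl_ref, r_ref=1.0):
    """Free-field point source: SPL falls by 20*log10(r/r_ref) dB relative
    to the level at r_ref, i.e. about 6 dB per doubling of distance."""
    return spl_ref - 20.0 * math.log10(r / r_ref)
```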
On singular and highly oscillatory properties of the Green function for ship motions
NASA Astrophysics Data System (ADS)
Chen, Xiao-Bo; Xiong Wu, Guo
2001-10-01
The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.
MacBurn's cylinder test problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn which illustrates its performance. The source is centered inside a cylinder with an axial-extent-to-radius ratio such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, and models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fraction degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.
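The quarter-energy-per-end geometry can be checked directly from solid angles (assuming an isotropic point source at the centre of the axis); the axial-extent-to-radius ratio that sends exactly 1/4 of the output to each end works out to H/R = 2/sqrt(3):

```python
import math

def end_flux_fraction(height_to_radius):
    """Fraction of an isotropic point source's output intercepted by one
    end of a cylinder when the source sits at the centre of the axis.
    The end cap subtends a half-angle theta with tan(theta) = R/(H/2);
    the solid-angle fraction of that cone is (1 - cos(theta)) / 2."""
    half_height_over_r = height_to_radius / 2.0
    theta = math.atan2(1.0, half_height_over_r)   # tan(theta) = R / (H/2)
    return (1.0 - math.cos(theta)) / 2.0
```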
Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J
2011-05-21
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). 
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
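The image-based approach can be illustrated with a toy sketch: factor the system matrix as a geometric projection P applied after an image-space blurring B, and run MLEM against the factored model. Everything here (1D geometry, identity projector, a single Gaussian kernel standing in for the measured spatially-variant kernels) is a simplification for illustration, not the authors' OP-OSEM implementation.

```python
import numpy as np

def mlem_image_blur(y, P, B, n_iter=200):
    # MLEM with an image-space resolution model: system matrix = P @ B,
    # where B blurs the image estimate before geometric projection P.
    x = np.ones(B.shape[1])
    sens = B.T @ (P.T @ np.ones(len(y)))            # sensitivity image
    for _ in range(n_iter):
        fwd = np.maximum(P @ (B @ x), 1e-12)        # forward projection
        x *= (B.T @ (P.T @ (y / fwd))) / np.maximum(sens, 1e-12)
    return x

# toy 1D problem: a point source blurred by a Gaussian, identity projector
n = 31
B = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 1.5) ** 2)
B /= B.sum(axis=1, keepdims=True)
P = np.eye(n)
true_img = np.zeros(n); true_img[15] = 100.0
y = P @ (B @ true_img)
rec = mlem_image_blur(y, P, B)
print(int(np.argmax(rec)))   # peak recovered at the true source voxel, 15
```

Swapping B for a spatially-variant operator (a different kernel per voxel) changes only the `B @ x` and `B.T @ (...)` products, which is why a single point-source-array scan measuring those kernels suffices.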
Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
The Chandra Source Catalog 2.0: the Galactic center region
NASA Astrophysics Data System (ADS)
Civano, Francesca Maria; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The second release of the Chandra Source Catalog (CSC 2.0) comprises all 10,382 ACIS and HRC-I imaging observations taken by Chandra and released publicly through the end of 2014. Among these, 534 single observations surrounding the Galactic center are included, covering a total area of ~19 deg² and a total exposure time of ~9 Ms. The 534 single observations were merged into 379 stacks (overlapping observations with aim-points within 60") to improve the flux limit for source detection purposes. Thanks to the combination of the point source detection algorithm with the maximum likelihood technique used to assess the source significance, ~21,000 detections are listed in the CSC 2.0 for this field alone, 80% of which are unique sources. The central region of this field around the Sgr A* location has the deepest exposure of 2.2 Ms and the highest source density, with ~5000 sources. In this poster, we present details about this region, including source distribution and density, coverage, and exposure. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Patton, Gail Y.; Torgerson, Darrel D.
1987-01-01
An alignment reference device provides a collimated laser beam with minimal angular deviation. A laser source launches the beam into a single-mode optical fiber. The output end of the fiber acts as a source of radiant energy and is positioned at the focal point of a lens system, that focal point lying within the lens. The output beam reflects off a mirror back to the lens, which produces a collimated beam.
Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?
NASA Astrophysics Data System (ADS)
Bertoni, Bridget; Hooper, Dan; Linden, Tim
2016-05-01
In a previous paper, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.
Single Crystal Diamond Needle as Point Electron Source
Kleshch, Victor I.; Purcell, Stephen T.; Obraztsov, Alexander N.
2016-01-01
Diamond has been considered one of the most attractive materials for cold-cathode applications over the past two decades. However, its practical application is hampered by the need to supply and transport sufficient electrons to the emitter surface, which is usually achieved by using nanometer-sized or highly defective crystallites whose physical characteristics fall well short of ideal diamond. Here, the use of a single crystal diamond emitter with a high aspect ratio as a point electron source is reported for the first time. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma-enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and sensitivity of emission to temperature. Analysis of the voltage drops measured with an electron energy analyzer shows that conduction proceeds along the surface of the diamond needles and is governed by a Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2-0.3 eV. The temperature-sensitive FE characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics. PMID:27731379
Design of TIR collimating lens for ordinary differential equation of extended light source
NASA Astrophysics Data System (ADS)
Zhan, Qianjing; Liu, Xiaoqin; Hou, Zaihong; Wu, Yi
2017-10-01
LED sources are now widely used in daily life. The intensity angle distribution of a single LED is Lambertian, which often does not meet application requirements, so the light must be redistributed to change the LED's intensity angle distribution. The most common way to do this is with a freeform surface. Ordinarily, the differential-equation method for calculating a freeform surface applies only to a point source and leads to large errors for an extended source. This paper proposes an LED collimating lens based on an ordinary differential equation that combines the LED's light distribution curve with surface normal vectors obtained by a center-of-gravity calculation for the extended source. The ordinary differential equation is constructed according to Snell's law and solved with the Runge-Kutta method to obtain the coordinates of points on the surface curve. The edge point data of the lens are then imported into the optical simulation software TracePro. For a 1 mm × 1 mm Lambertian emitter, the collimation angle can be brought close to ±3°, and the energy utilization rate exceeds 85%. Compared with a lens designed by the point-source differential-equation method, the simulated collimation improves by about 1°.
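The point-source baseline that the paper extends can be sketched directly: Snell's law plus the tangency condition give a first-order ODE for the surface profile r(θ), integrable by Runge-Kutta. The configuration below (a point source in air collimated on refraction into a medium of index n = 1.49, a PMMA-like value) is an assumption for illustration; it has a closed-form hyperbolic solution that serves as a check.

```python
import math

def rk4(f, y0, t0, t1, n_steps):
    # classic fourth-order Runge-Kutta integrator for dy/dt = f(t, y)
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

n = 1.49      # refractive index (PMMA-like, assumed)
r0 = 10.0     # on-axis distance from source to surface vertex, mm

# Snell's law + surface tangency give, in polar form r(theta),
#   dr/dtheta = n * r * sin(theta) / (n * cos(theta) - 1)
# for rays leaving a point source in air and exiting collimated in the lens.
f = lambda th, r: n * r * math.sin(th) / (n * math.cos(th) - 1.0)

theta = math.radians(30.0)
r_num = rk4(f, r0, 0.0, theta, 1000)
r_exact = r0 * (n - 1.0) / (n * math.cos(theta) - 1.0)   # hyperbola, e = n
print(abs(r_num - r_exact) < 1e-6)   # True
```

The paper's extended-source method keeps this integration loop but replaces the point-source ray direction with a direction derived from the emitter's center of gravity when forming the normal vector.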
Computational techniques in gamma-ray skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes the response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute the response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
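The single-scatter model's structure can be sketched numerically. The code below is a schematic stand-in for SILOGP, not the code itself: it assumes isotropic scattering, a vertical first leg, and a simple linear buildup factor B = 1 + μr, and uses Gauss-Legendre quadrature over the scatter height, mirroring the Gauss-quadrature modification the abstract describes.

```python
import numpy as np

def single_scatter_response(mu, dist, src_height=1.0, n_quad=32, kmax=5.0):
    # A photon leaves a point source vertically, scatters once at height h,
    # then travels the direct path to a ground detector `dist` metres away.
    # Response = integral over h of (transport to h) * (scatter probability)
    #            * (attenuated, built-up transport to the detector).
    x, w = np.polynomial.legendre.leggauss(n_quad)   # nodes on [-1, 1]
    half = 0.5 * (kmax / mu)                         # map to (0, kmax/mu]
    h = (x + 1.0) * half + src_height
    wh = w * half
    r1 = h - src_height                              # source -> scatter leg
    r2 = np.sqrt(dist**2 + h**2)                     # scatter -> detector leg
    buildup = 1.0 + mu * r2                          # crude linear buildup
    integrand = np.exp(-mu * r1) * mu * np.exp(-mu * r2) * buildup \
                / (4.0 * np.pi * r2**2)
    return float(np.sum(wh * integrand))

r100 = single_scatter_response(mu=0.01, dist=100.0)
r200 = single_scatter_response(mu=0.01, dist=200.0)
print(r100 > r200 > 0.0)   # skyshine response falls off with distance
```

A production code like SILOGP additionally integrates over emission angle, uses the Klein-Nishina differential cross section, and applies measured attenuation and buildup data rather than these placeholders.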
Retrocausation acting in the single-electron double-slit interference experiment
NASA Astrophysics Data System (ADS)
Hokkyo, Noboru
The single electron double-slit interference experiment is given a time-symmetric interpretation and visualization in terms of the intermediate amplitude of transition between the particle source and the detection point. It is seen that the retarded (causal) amplitude of the electron wave expanding from the source shows an advanced (retrocausal) bifurcation and merging in passing through the double-slit and converges towards the detection point as if guided by the advanced (retrocausal) wave from the detected electron. An experiment is proposed to confirm the causation-retrocausation symmetry of the electron behavior by observing the insensitivity of the interference pattern to non-magnetic obstacles placed in the shadows of the retarded and advanced waves appearing on the rear and front sides of the double-slit.
Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on either a source couplet, consisting of a single incoherent monopole source with a single coherent monopole source, or a source triplet, consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings the core noise is a major contributor to the noise; even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.
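The role of auto- and cross spectra in separating correlated from uncorrelated power can be seen in a toy simulation. White-noise stand-ins replace real engine data (an assumption), and plain coherence replaces the paper's couplet/triplet model fits; the point is simply that the cross-spectrum isolates the common "core-like" component that the auto spectra alone cannot.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs, n = 4096, 1 << 18
core = rng.normal(size=n)              # "core noise": common to both mics
x1 = core + 0.5 * rng.normal(size=n)   # mic 1 = core + uncorrelated noise
x2 = core + 0.5 * rng.normal(size=n)   # mic 2 = core + uncorrelated noise

f, G11 = welch(x1, fs=fs, nperseg=1024)        # auto spectrum, mic 1
_, G22 = welch(x2, fs=fs, nperseg=1024)        # auto spectrum, mic 2
_, G12 = csd(x1, x2, fs=fs, nperseg=1024)      # cross spectrum

coh2 = np.abs(G12) ** 2 / (G11 * G22)          # ordinary coherence
# with core PSD C and per-mic noise PSD U = 0.25*C, theory gives
# coh2 = C^2 / (C + U)^2 = 1 / 1.25^2 = 0.64
print(round(float(np.median(coh2)), 2))
```

Because noise contaminates both channels, coherence-weighted power underestimates the true correlated part, which is one reason the paper fits explicit monopole source models to the spectra instead of using raw coherence.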
Streak camera imaging of single photons at telecom wavelength
NASA Astrophysics Data System (ADS)
Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine
2018-01-01
Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
40 CFR 461.2 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY General Provisions § 461.2 General definitions. In...) “Battery” means a modular electric power source where part or all of the fuel is contained within the unit... heat cycle engine. In this regulation there is no differentiation between a single cell and a battery...
40 CFR 461.2 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY General Provisions § 461.2 General definitions. In...) “Battery” means a modular electric power source where part or all of the fuel is contained within the unit... heat cycle engine. In this regulation there is no differentiation between a single cell and a battery...
Outdoor air pollution in close proximity to a continuous point source
NASA Astrophysics Data System (ADS)
Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul
Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min⁻¹ (~140-450 mg min⁻¹). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m⁻³ at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed, and wind direction. The model described the data reasonably well, accounting for ~50% of the variability in log-transformed 5-min CO concentrations.
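The multiplicative model reduces to a linear regression in log space. The data below are synthetic, generated to mimic the qualitative findings (concentration proportional to emission rate, roughly inversely proportional to distance, decreasing with air speed); the exponents, including the 0.5 air-speed power, are assumptions, not the paper's fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
E = rng.uniform(140.0, 450.0, n)   # emission rate, mg/min (paper's range)
d = rng.uniform(0.25, 2.0, n)      # source-receptor distance, m
s = rng.uniform(0.05, 0.5, n)      # air speed, m/s (assumed range)

# synthetic concentrations: C = a * E^1 * d^-1 * s^-0.5 * lognormal error
C = 2.0 * E * d**-1.0 * s**-0.5 * rng.lognormal(0.0, 0.3, n)

# multiplicative model C = a * E^b1 * d^b2 * s^b3 is linear in logs
X = np.column_stack([np.ones(n), np.log(E), np.log(d), np.log(s)])
beta, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
print(np.round(beta[1:], 2))   # close to the true exponents [1.0, -1.0, -0.5]
```

Fitting in log space is what makes the "approximately inversely proportional to distance" finding directly readable as a distance exponent near -1.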
Single and Multiple Scattered Solar Radiation
1982-08-30
so that factor can be expected to vary considerably from one scattering point to the next. The monochromatic intensity at the observer due to all of the single-scattering sources within the line of sight is obtained by summing over the optical path the product of the source function and the attenuation along the path to the observer. Using a dot product between position vectors on the unit sphere, it can be shown that the cosine of the scattering angle satisfies cos Λ = cos θ cos θ₀ + sin θ sin θ₀ cos(φ − φ₀), in terms of the solar and line-of-sight zenith and azimuth angles.
Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long
2013-08-01
In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollutants, such as chemical oxygen demand (COD), into rivers cannot be neglected. During 2009-2010, water quality at eight catchments in the Fenjiang River of Foshan city was monitored, and the COD loads of eight rivulet sewages were calculated under different rainfall conditions. Rainfall and land-use type both played important roles in COD loading, with rainfall exerting the greater influence. Accordingly, a COD loading formula was constructed as a function of runoff and land-use type, derived from the SCS model and a land-use map; COD loading could be evaluated and predicted with this formula. The mean simulation accuracy for single rainfall events was 75.51%, and long-term simulation accuracy was better than that for single events. In 2009, the estimated COD load and its loading intensity were 8053 t and 339 kg·(hm²·a)⁻¹, respectively, and industrial land was identified as the main source area of COD pollution. Severe non-point source pollution such as COD in the Fenjiang River deserves greater attention in the future.
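A minimal sketch of the SCS-based loading calculation, assuming the standard curve-number runoff formula and a placeholder event-mean concentration (EMC) per land use; the paper's actual coefficients and calibration are not reproduced here.

```python
def scs_runoff(p_mm, cn):
    # standard SCS curve-number direct runoff (mm) with Ia = 0.2*S
    s = 25400.0 / cn - 254.0        # potential maximum retention, mm
    ia = 0.2 * s                    # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def cod_load_kg(p_mm, area_ha, cn, emc_mg_per_l):
    # event COD load = runoff volume x event mean concentration (EMC)
    q_mm = scs_runoff(p_mm, cn)
    volume_m3 = q_mm / 1000.0 * area_ha * 10000.0
    return volume_m3 * emc_mg_per_l / 1000.0   # mg/L * m^3 = g -> kg

# e.g. a 50 mm storm on 10 ha of industrial land (CN = 90 and
# EMC = 120 mg/L are illustrative placeholder values)
print(round(cod_load_kg(50.0, 10.0, 90.0, 120.0), 1))
```

Summing such per-event loads over a year's storms, with land-use-specific CN and EMC values, gives the long-term loading the paper reports in kg·(hm²·a)⁻¹.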
Simulation and source identification of X-ray contrast media in the water cycle of Berlin.
Knodel, J; Geissen, S-U; Broll, J; Dünnbier, U
2011-11-01
This article describes the development of a model to simulate the fate of iodinated X-ray contrast media (XRC) in the water cycle of the German capital, Berlin. It also handles data uncertainties concerning the different amounts and sources of XRC input via source densities in individual districts for XRC usage by inhabitants, hospitals, and radiologists. In addition, different degradation rates for the behavior of adsorbable organic iodine (AOI) were investigated in individual water compartments. The model consists of mass balances and includes, in addition to naturally branched bodies of water, the water distribution network between waterways and wastewater treatment plants, which are coupled to natural surface waters at numerous points. Scenarios were calculated according to the data uncertainties and statistically evaluated to identify the scenario with the highest agreement with the available measurement data. The simulation of X-ray contrast media in the water cycle of Berlin showed that medical institutions have to be considered point sources for congested urban areas due to their high levels of X-ray contrast media emission. The calculations identified hospitals, represented by their capacity (number of hospital beds), as the most relevant point sources, while the inhabitants served as important diffusive sources. Deployed for almost inert substances like contrast media, the model can be used for qualitative statements and, therefore, as a decision-support tool.
VizieR Online Data Catalog: ALMA 106GHz continuum observations in Chamaeleon I (Dunham+, 2016)
NASA Astrophysics Data System (ADS)
Dunham, M. M.; Offner, S. S. R.; Pineda, J. E.; Bourke, T. L.; Tobin, J. J.; Arce, H. G.; Chen, X.; Di Francesco, J.; Johnstone, D.; Lee, K. I.; Myers, P. C.; Price, D.; Sadavoy, S. I.; Schnee, S.
2018-02-01
We obtained ALMA observations of every source in Chamaeleon I detected in the single-dish 870 μm LABOCA survey by Belloche et al. (2011, J/A+A/527/A145), except for those listed as likely artifacts (1 source), residuals from bright sources (7 sources), or detections tentatively associated with YSOs (3 sources). We observed 73 sources from the initial list of 84 objects identified by Belloche et al. (2011, J/A+A/527/A145). We observed the 73 pointings using the ALMA Band 3 receivers during the Cycle 1 campaign between 2013 November 29 and 2014 March 08. Between 25 and 27 antennas were available for our observations, with the array in a relatively compact configuration providing a resolution of approximately 2" FWHM (300 AU at the distance to Chamaeleon I). Each target was observed in a single pointing with approximately 1 minute of on-source integration time. Three of the four available spectral windows were configured to measure the continuum at 101, 103, and 114 GHz, each with a bandwidth of 2 GHz, for a total continuum bandwidth of 6 GHz (2.8 mm) at a central frequency of 106 GHz. (2 data files).
The effects of correlated noise in phased-array observations of radio sources
NASA Technical Reports Server (NTRS)
Dewey, Rachel J.
1994-01-01
Arrays of radio telescopes are now routinely used to provide increased signal-to-noise ratio when observing faint point sources. However, calculation of the achievable sensitivity is complicated if there are sources in the field of view other than the target source. These additional sources not only increase the system temperatures of the individual antennas, but may also contribute significant 'correlated noise' to the effective system temperature of the array. This problem has been of particular interest in the context of tracking spacecraft in the vicinity of radio-bright planets (e.g., Galileo at Jupiter), but it has broader astronomical relevance as well. This paper presents a general formulation of the problem for the case of a point-like target source in the presence of an additional radio source of arbitrary brightness distribution. We re-derive the well-known result that, in the absence of any background sources, a phased array of N identical antennas is a factor of N more sensitive than a single antenna. We also show that an unphased array of N identical antennas is, on average, no more sensitive than a single antenna if the signals from the individual antennas are combined prior to detection. In the case where a background source is present, we show that the effects of correlated noise are highly geometry dependent and, for some astronomical observations, may cause significant fluctuations in the array's effective system temperature.
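The factor-of-N claim is easy to check with a small Monte Carlo model, assuming identical antennas, total-power detection, and fixed per-antenna signal and noise powers; this is an illustration of the stated result, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8                                # number of antennas
sig_p, noise_p = 1.0, 100.0          # per-antenna signal / noise power

def array_snr(phased, trials=20000):
    # SNR after summing antenna voltages and detecting total power.
    # Phased: geometric phases compensated -> coherent signal gain N^2.
    # Unphased: random uncompensated phases -> incoherent gain, mean N.
    # Independent receiver noise always adds with total power N * noise_p.
    gains = []
    for _ in range(trials):
        ph = np.zeros(N) if phased else rng.uniform(0.0, 2 * np.pi, N)
        gains.append(np.abs(np.exp(1j * ph).sum()) ** 2)
    return np.mean(gains) * sig_p / (N * noise_p)

snr_single = sig_p / noise_p
print(round(array_snr(True) / snr_single, 3))    # -> 8.0: factor of N
print(round(array_snr(False) / snr_single, 1))   # -> ~1: no average gain
```

The coherent sum gives signal power N² against noise power N, hence the factor-of-N sensitivity gain; with random phases the signal power averages only N, cancelling the noise penalty exactly.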
An innovative use of instant messaging technology to support a library's single-service point.
Horne, Andrea S; Ragon, Bart; Wilson, Daniel T
2012-01-01
A library service model that provides reference and instructional services by summoning reference librarians from a single service point is described. The system utilizes Libraryh3lp, an open-source, multioperator instant messaging system. The selection and refinement of this solution and technical challenges encountered are explored, as is the design of public services around this technology, usage of the system, and best practices. This service model, while a major cultural and procedural change at first, is now a routine aspect of customer service for this library.
Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States
Puckett, Larry J.
1994-01-01
Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple-coincidence Compton imager (MCCI), which is based on simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. In principle, this technique should provide a major improvement in the localization of a single radioactive source compared to a standard Compton camera. In this work, we investigated the performance and limitations of the MCCI using Monte Carlo computer simulations. The spatial resolution of the reconstructed point source was studied as a function of MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy resolution, and spatial resolution.
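Each detected Compton scatter constrains the decay location to a cone whose half-angle follows from the Compton formula; the MCCI intersects several such cones from one cascade decay. A sketch of the cone-angle computation (the 662/400 keV energy pair is an arbitrary example, not from the paper):

```python
from math import acos, degrees

ME_C2 = 511.0   # electron rest energy, keV

def compton_cone_half_angle(e_in_kev, e_scat_kev):
    # Compton kinematics: cos(theta) = 1 - me*c^2 * (1/E' - 1/E).
    # The source lies on a cone of this half-angle about the scatter axis.
    c = 1.0 - ME_C2 * (1.0 / e_scat_kev - 1.0 / e_in_kev)
    if not -1.0 <= c <= 1.0:
        raise ValueError("kinematically impossible energy pair")
    return degrees(acos(c))

print(round(compton_cone_half_angle(662.0, 400.0), 1))   # -> 60.4 degrees
```

Detector energy resolution blurs the measured energies, and hence the cone half-angle, which is how the detector characteristics studied in the paper propagate into the reconstructed spatial resolution.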
Single-shot polarimetry imaging of multicore fiber.
Sivankutty, Siddharth; Andresen, Esben Ravn; Bouwmans, Géraud; Brown, Thomas G; Alonso, Miguel A; Rigneault, Hervé
2016-05-01
We report an experimental test of single-shot polarimetry applied to the problem of real-time monitoring of the output polarization states in each core within a multicore fiber bundle. The technique uses a stress-engineered optical element, together with an analyzer, and provides a point spread function whose shape unambiguously reveals the polarization state of a point source. We implement this technique to monitor, simultaneously and in real time, the output polarization states of up to 180 single-mode fiber cores in both conventional and polarization-maintaining fiber bundles. We demonstrate also that the technique can be used to fully characterize the polarization properties of each individual fiber core, including eigen-polarization states, phase delay, and diattenuation.
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
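A linearized moment-tensor inversion reduces to linear least squares, since the synthetic seismogram is linear in the six independent moment-tensor components. In the sketch below, the matrix of Green's-function excitation kernels is random (an assumption standing in for real waveform kernels), which is enough to show the machinery:

```python
import numpy as np

rng = np.random.default_rng(3)

# d = G m: waveform samples are linear in the 6 moment-tensor components
n_samples, n_mt = 300, 6
G = rng.normal(size=(n_samples, n_mt))               # stand-in excitation kernels
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])  # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.01 * rng.normal(size=n_samples)   # noisy "observations"

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.allclose(m_est, m_true, atol=0.05))   # True: source recovered
```

With sparse data or inadequate Green's functions, G becomes ill-conditioned and the least-squares solution unstable, which is why the paper supplements linearized inversion with systematic grid testing and cautions against ignoring source finiteness.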
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. 
However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period.
These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth-orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response to a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy of the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source to the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement), the ideal flood image for the point-source at the pinhole focal point, and the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
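The per-ray contribution described above can be sketched with the Beer-Lambert law: a photon must survive the silica reflector and then be absorbed in the NaI(Tl) crystal. The attenuation coefficients and path lengths below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm); illustrative only
MU_NAI = 0.34   # NaI(Tl) at the photo-peak energy
MU_SIO2 = 0.04  # silica (SiO2) reflector

def ray_detection_weight(path_nai_cm, path_sio2_cm):
    """Fraction of photons on this ray that survive the reflector
    and are then absorbed in the NaI(Tl) crystal (Beer-Lambert)."""
    survive_reflector = np.exp(-MU_SIO2 * path_sio2_cm)
    absorbed_in_crystal = 1.0 - np.exp(-MU_NAI * path_nai_cm)
    return survive_reflector * absorbed_in_crystal

def ideal_flood_pixel(path_lengths):
    """Ideal flood value for one crystal pitch: sum over its sub-pixel
    rays of [NaI path, SiO2 path] pairs (here a plain Python list)."""
    return sum(ray_detection_weight(nai, sio2) for nai, sio2 in path_lengths)

# Example: three hypothetical rays through one crystal pitch
flood = ideal_flood_pixel([(1.0, 0.01), (1.2, 0.01), (0.9, 0.02)])
```

The normalization for a pinhole would then be the ratio of such ideal floods at the focal point and at the 360 mm flood-measurement distance, as the abstract describes.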
Zhou, Yongqiang; Jeppesen, Erik; Zhang, Yunlin; Shi, Kun; Liu, Xiaohan; Zhu, Guangwei
2016-02-01
Surface drinking water sources have been threatened globally and there have been few attempts to detect point-source contamination in these waters using chromophoric dissolved organic matter (CDOM) fluorescence. To determine the optimal wavelength derived from CDOM fluorescence as an indicator of point-source contamination in drinking waters, a combination of field campaigns in Lake Qiandao and a laboratory wastewater addition experiment was used. Parallel factor (PARAFAC) analysis identified six components, including three humic-like, two tryptophan-like, and one tyrosine-like component. All metrics showed strong correlation with wastewater addition (r^2 > 0.90, p < 0.0001). Both the field campaigns and the laboratory contamination experiment revealed that CDOM fluorescence at 275/342 nm was the wavelength most responsive to point-source contamination in the lake. Our results suggest that pollutants in Lake Qiandao had the highest concentrations in the river mouths of upstream inflow tributaries and that the single wavelength at 275/342 nm may be adapted for online or in situ fluorescence measurements as an early warning of contamination events. This study demonstrates the potential utility of CDOM fluorescence to monitor water quality in surface drinking water sources.
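The wavelength-selection step can be sketched as an r² screen over candidate Ex/Em pairs in a wastewater addition experiment. All intensities below are made-up illustrative numbers, not the paper's data:

```python
import numpy as np

# Hypothetical lab experiment: wastewater volume fractions and fluorescence
# intensities (arbitrary units) at two Ex/Em wavelength pairs.
ww_fraction = np.array([0.0, 0.02, 0.05, 0.10, 0.20, 0.40])
fluor = {
    "275/342 nm (tryptophan-like)": np.array([1.0, 2.1, 4.4, 8.9, 17.6, 35.0]),
    "350/450 nm (humic-like)":      np.array([5.0, 5.3, 5.9, 6.8, 8.1, 11.0]),
}

# r^2 of intensity vs. wastewater fraction for each candidate wavelength
r2 = {name: np.corrcoef(ww_fraction, y)[0, 1] ** 2 for name, y in fluor.items()}
best = max(r2, key=r2.get)
print(best, round(r2[best], 3))
```

The pair with the highest and steepest response to added wastewater would be the candidate for online early-warning monitoring.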
NASA Astrophysics Data System (ADS)
Jurčišinová, E.; Jurčišin, M.
2018-04-01
Anomalies of the specific heat capacity are investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in an external magnetic field on the geometrically frustrated tetrahedron recursive lattice. It is shown that the Schottky-type anomaly in the behavior of the specific heat capacity is related to the existence of unique, highly macroscopically degenerate single-point ground states which form on the borders between neighboring plateau-like ground states. It is also shown that the very existence of these single-point ground states with large residual entropies predicts the appearance of another low-temperature anomaly in the specific heat capacity, namely a field-induced double-peak structure, which exists, and should be observable experimentally, alongside the Schottky-type anomaly in various frustrated magnetic systems.
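For intuition, the classic Schottky anomaly of an isolated two-level system (a textbook simplification, not the full tetrahedron recursive-lattice model of the abstract) already produces a single broad peak in the specific heat; a minimal sketch:

```python
import numpy as np

def schottky_heat_capacity(t):
    """C/k_B per two-level site, with t = k_B T / Delta dimensionless:
    C/k_B = x^2 e^x / (1 + e^x)^2 where x = Delta / (k_B T)."""
    x = 1.0 / np.asarray(t)
    return x**2 * np.exp(x) / (1.0 + np.exp(x))**2

# Locate the Schottky peak numerically
t = np.linspace(0.05, 3.0, 1000)
c = schottky_heat_capacity(t)
t_peak = t[np.argmax(c)]
print(f"Schottky peak at k_B T / Delta ~ {t_peak:.3f}")
```

The known peak position is k_B T ≈ 0.417 Δ; in the frustrated model the abstract describes, additional field-induced double-peak structure appears on top of this basic mechanism.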
Scanning Transmission Electron Microscopy at High Resolution
Wall, J.; Langmore, J.; Isaacson, M.; Crewe, A. V.
1974-01-01
We have shown that a scanning transmission electron microscope with a high brightness field emission source is capable of obtaining better than 3 Å resolution using 30 to 40 keV electrons. Elastic dark field images of single atoms of uranium and mercury are shown which demonstrate this fact as determined by a modified Rayleigh criterion. Point-to-point micrograph resolution between 2.5 and 3.0 Å is found in dark field images of micro-crystallites of uranium and thorium compounds. Furthermore, adequate contrast is available to observe single atoms as light as silver. Images PMID:4521050
Surface Imaging Skin Friction Instrument and Method
NASA Technical Reports Server (NTRS)
Brown, James L. (Inventor); Naughton, Jonathan W. (Inventor)
1999-01-01
A surface imaging skin friction instrument allows 2D resolution of a spatial image by a 2D Hilbert transform and a 2D inverse thin-oil-film solver, providing an innovation over prior-art single-point approaches. An incoherent, monochromatic light source can be used. The invention provides accurate, easy-to-use, economical measurement of larger regions of surface shear stress in a single test.
Evaluation of selective vs. point-source perforating for hydraulic fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, P.J.; Kerley, L.
1996-12-31
This paper is a case history comparing and evaluating the effects of fracturing the Reef Ridge Diatomite formation in the Midway-Sunset Field, Kern County, California, using "select-fire" and "point-source" perforating completions. A description of the reservoir, production history, and fracturing techniques used leading up to this study is presented. Fracturing treatment analysis and production history matching were used to evaluate the reservoir and fracturing parameters for both completion types. The work showed that single fractures were created with the point-source (PS) completions, and multiple fractures resulted from many of the select-fire (SF) completions. A good correlation was developed between productivity and the product of formation permeability, net fracture height, bottomhole pressure, and propped fracture length. Results supported the continued development of 10 wells using the PS concept with a more efficient treatment design, resulting in substantial cost savings.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2008-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
NASA Technical Reports Server (NTRS)
Lee, M. C.; Wang, T. G. (Inventor)
1983-01-01
An acoustic levitation system is described, with a single acoustic source and a small reflector that stably levitate a small object while the object is processed, e.g., by coating or heating it. The system includes a concave acoustic source which has locations on opposite sides of its axis that vibrate towards and away from a focal point to generate a converging acoustic field. A small reflector is located near the focal point, and preferably slightly beyond it, to create an intense acoustic field that stably supports a small object near the reflector. The reflector is located about one-half wavelength from the focal point and is concavely curved to a radius of curvature (L) of about one-half the wavelength, to stably support an object one-quarter wavelength (N) from the reflector.
DARK-FIELD ILLUMINATION SYSTEM
Norgren, D.U.
1962-07-24
A means was developed for viewing objects against a dark background from a viewing point close to the light which illuminates the objects and under conditions where the back scattering of light by the objects is minimal. A broad light retro-directing member on the opposite side of the objects from the light returns direct light back towards the source while directing other light away from the viewing point. The viewing point is offset from the light and thus receives only light which is forwardly scattered by an object while returning towards the source. The object is seen, at its true location, against a dark background. The invention is particularly adapted for illuminating and viewing nuclear particle tracks in a liquid hydrogen bubble chamber through a single chamber window. (AEC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
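The spectral indices being reconstructed follow the usual two-point definition S ∝ ν^α; a minimal sketch with hypothetical sub-band intensities (not values from the simulations):

```python
import math

def spectral_index(s1_jy, s2_jy, nu1_ghz, nu2_ghz):
    """Two-point spectral index alpha, defined by S ~ nu**alpha."""
    return math.log(s2_jy / s1_jy) / math.log(nu2_ghz / nu1_ghz)

# Hypothetical faint source: 100 uJy at 1 GHz, 61 uJy at 2 GHz
alpha = spectral_index(100e-6, 61e-6, 1.0, 2.0)
print(f"alpha = {alpha:.2f}")  # steep (synchrotron-like) spectrum
```

For sources near the confusion limit, small fractional errors in either sub-band intensity translate into large errors in α, which is the trend (a) reported in the abstract.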
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
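As a baseline for the scan-to-scan combination problem, the Gaussian-analogy treatment (which the abstract shows can mislead for slit-type sensors) is inverse-variance weighting plus a chi-square consistency check; a 1-D sketch with hypothetical sightings:

```python
import numpy as np

def combine_sightings(positions, sigmas):
    """Inverse-variance weighted mean position and its 1-sigma error."""
    p, s = np.asarray(positions), np.asarray(sigmas)
    w = 1.0 / s**2
    mean = np.sum(w * p) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

def same_source_chi2(positions, sigmas):
    """Chi-square statistic for 'all sightings share one position'
    (compare against a chi-square with len(positions)-1 dof)."""
    p, s = np.asarray(positions), np.asarray(sigmas)
    mean, _ = combine_sightings(p, s)
    return float(np.sum(((p - mean) / s) ** 2))

# Hypothetical sightings of one source (arcsec along scan)
pos, sig = [10.2, 10.5, 9.9], [0.3, 0.4, 0.3]
mean, err = combine_sightings(pos, sig)
chi2 = same_source_chi2(pos, sig)
```

The paper's point is precisely that the slit geometry makes the position error non-Gaussian, so this Gaussian machinery is only a reference against which the correct treatment differs.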
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; Bernardini, E.; Boersma, D.J.
2005-04-01
The results of a search for point sources of high energy neutrinos in the northern hemisphere using data collected by AMANDA-II in the years 2000, 2001, and 2002 are presented. In particular, a comparison with the previously published single-year result shows that the sensitivity was improved by a factor of 2.2. The muon neutrino flux upper limits on selected candidate sources, corresponding to an E_ν^-2 neutrino energy spectrum, are included. Sky grids were used to search for possible excesses above the background of cosmic-ray-induced atmospheric neutrinos. This search reveals no statistically significant excess for the three years considered.
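A sky-grid excess search of this kind ultimately asks how improbable the observed counts in a grid cell are under the atmospheric-neutrino background alone; a minimal Poisson sketch (the counts and background below are hypothetical, not AMANDA-II numbers):

```python
import math

def poisson_excess_pvalue(n_obs, b_expected):
    """P(N >= n_obs | Poisson background b): upper-tail probability,
    computed as 1 minus the cumulative sum up to n_obs - 1."""
    p_less = sum(math.exp(-b_expected) * b_expected**k / math.factorial(k)
                 for k in range(n_obs))
    return 1.0 - p_less

# Hypothetical grid cell: 4 events where the background predicts 1.2
p = poisson_excess_pvalue(4, 1.2)
print(f"pre-trial p-value = {p:.4f}")
```

In a real grid search this pre-trial p-value must still be corrected for the number of cells searched (the trials factor) before claiming any significance.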
Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative
2008-01-01
sector, known as strategic sourcing. The Customer Relationship Management (CRM) initiative provides a single customer point of contact for all... Customer Relationship Management initiative. commodity council: A term used to describe a cross-functional sourcing group charged with formulating a... initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management
Aydin, Ümit; Vorwerk, Johannes; Dümpelmann, Matthias; Küpper, Philipp; Kugel, Harald; Heers, Marcel; Wellmer, Jörg; Kellinghaus, Christoph; Haueisen, Jens; Rampp, Stefan; Stefan, Hermann; Wolters, Carsten H.
2015-01-01
We investigated two important means for improving source reconstruction in presurgical epilepsy diagnosis. The first investigation is about the optimal choice of the number of epileptic spikes in averaging to (1) sufficiently reduce the noise bias for an accurate determination of the center of gravity of the epileptic activity and (2) still get an estimation of the extent of the irritative zone. The second study focuses on the differences in single modality EEG (80-electrodes) or MEG (275-gradiometers) and especially on the benefits of combined EEG/MEG (EMEG) source analysis. Both investigations were validated with simultaneous stereo-EEG (sEEG) (167-contacts) and low-density EEG (ldEEG) (21-electrodes). To account for the different sensitivity profiles of EEG and MEG, we constructed a six-compartment finite element head model with anisotropic white matter conductivity, and calibrated the skull conductivity via somatosensory evoked responses. Our results show that, unlike single modality EEG or MEG, combined EMEG uses the complementary information of both modalities and thereby allows accurate source reconstructions also at early instants in time (epileptic spike onset), i.e., time points with low SNR, which are not yet subject to propagation and thus supposed to be closer to the origin of the epileptic activity. EMEG is furthermore able to reveal the propagation pathway at later time points in agreement with sEEG, while EEG or MEG alone reconstructed only parts of it. Subaveraging provides important and accurate information about both the center of gravity and the extent of the epileptogenic tissue that neither single nor grand-averaged spike localizations can supply. PMID:25761059
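The subaveraging trade-off above rests on the fact that averaging N spikes suppresses uncorrelated noise by a factor of √N while preserving the common waveform; a minimal sketch with a synthetic spike template (all shapes and noise levels hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "epileptic spike" template and 64 noisy single-trial copies
n_spikes, n_samples = 64, 200
signal = np.sin(np.linspace(0, np.pi, n_samples))             # template
trials = signal + rng.standard_normal((n_spikes, n_samples))  # raw SNR ~ 1

# Averaging N trials reduces the noise standard deviation by sqrt(N)
avg = trials.mean(axis=0)
residual_std = float((avg - signal).std())
print(residual_std)  # expected near 1/sqrt(64) = 0.125
```

This is why a modest subaverage already stabilizes the center of gravity of the activity, while estimating the extent of the irritative zone is more sensitive to the residual noise.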
Single-Point Attachment Wind Damper for Launch Vehicle On-Pad Motion
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2009-01-01
A single-point-attachment wind-damper device is proposed to reduce on-pad motion of a cylindrical launch vehicle. The device is uniquely designed to attach at only one location along the vehicle and capable of damping out wind gusts from any lateral direction. The only source of damping is from two viscous dampers in the device. The effectiveness of the damper design in reducing vehicle displacements is determined from transient analysis results using an Ares I-X launch vehicle. Combinations of different spring stiffnesses and damping are used to show how the vehicle's displacement response is significantly reduced during a wind gust.
A new type of single-phase five-level inverter
NASA Astrophysics Data System (ADS)
Xu, Zhi; Li, Shengnan; Qin, Risheng; Zhao, Yanhang
2017-11-01
At present, the Neutral Point Clamped (NPC) multilevel inverter is widely applied in the new-energy field. However, it has some disadvantages, including a low utilization rate of the direct current (DC) voltage source and an unbalanced neutral-point potential. Therefore, a new single-phase five-level inverter is proposed in this paper. It has a two-stage structure: the front stage is equivalent to a three-level DC/DC converter, and the back stage uses an H-bridge to realize the inversion. Compared with the original neutral-point-clamped inverter, the new five-level inverter improves the utilization of the DC voltage and achieves neutral-point potential balance with a hysteresis comparator.
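A hysteresis comparator of the kind used for neutral-point balance toggles the balancing action only when the voltage error leaves a dead band, which avoids rapid switching near zero error; a minimal behavioral sketch (the band width and error trajectory are hypothetical):

```python
def hysteresis_comparator(error, state, band=0.5):
    """Three-state hysteresis: switch only when the neutral-point
    voltage error (volts) leaves the +/- band; otherwise hold."""
    if error > band:
        return +1   # e.g., discharge the upper capacitor
    if error < -band:
        return -1   # e.g., discharge the lower capacitor
    return state    # inside the band: keep the previous action

# Hypothetical error trajectory (volts)
state, states = 0, []
for e in [0.1, 0.6, 0.4, -0.2, -0.7, -0.3]:
    state = hysteresis_comparator(e, state)
    states.append(state)
print(states)  # → [0, 1, 1, 1, -1, -1]
```

Note how the state is held at +1 for errors of 0.4 and -0.2 V: the dead band is what prevents chattering around the balance point.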
Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2
NASA Astrophysics Data System (ADS)
Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.
2016-12-01
Anthropogenic carbon dioxide (CO2) sources increasingly tip the natural balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km) swath it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular-source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass.
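Detecting a localized XCO2 excess of a few ppm along a swath amounts to subtracting a robust background estimate from each sounding; a minimal sketch with a synthetic transect (window size, background, and plume values are hypothetical):

```python
import numpy as np

def local_excess(xco2_ppm, background_window=50):
    """Excess XCO2 per sounding: value minus a robust (median)
    background estimated from the surrounding soundings."""
    x = np.asarray(xco2_ppm, dtype=float)
    bg = np.array([np.median(x[max(0, i - background_window):
                               i + background_window + 1])
                   for i in range(len(x))])
    return x - bg

# Hypothetical 300-sounding transect: 400 ppm background,
# 20 soundings elevated by a 4 ppm urban plume
swath = np.full(300, 400.0)
swath[140:160] += 4.0
excess = local_excess(swath)
print(excess.max())  # → 4.0
```

A median background is deliberately insensitive to the plume itself as long as the plume occupies a minority of the window, which is what makes few-ppm urban signals separable from the ~400 ppm background.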
THE POPULATION OF COMPACT RADIO SOURCES IN THE ORION NEBULA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbrich, J.; Meingast, S.; Rivilla, V. M.
We present a deep centimeter-wavelength catalog of the Orion Nebula Cluster (ONC), based on a 30 hr single-pointing observation with the Karl G. Jansky Very Large Array in its high-resolution A-configuration using two 1 GHz bands centered at 4.7 and 7.3 GHz. A total of 556 compact sources were detected in a map with a nominal rms noise of 3 μJy beam^-1, limited by complex source structure and the primary beam response. Compared to previous catalogs, our detections increase the sample of known compact radio sources in the ONC by more than a factor of seven. The new data show complex emission on a wide range of spatial scales. Following a preliminary correction for the wideband primary-beam response, we determine radio spectral indices for 170 sources whose index uncertainties are less than ±0.5. We compare the radio to the X-ray and near-infrared point-source populations, noting similarities and differences.
Environmental monitoring of Galway Bay: fusing data from remote and in-situ sources
NASA Astrophysics Data System (ADS)
O'Connor, Edel; Hayes, Jer; Smeaton, Alan F.; O'Connor, Noel E.; Diamond, Dermot
2009-09-01
Changes in sea surface temperature can be used as an indicator of water quality. In-situ sensors are being used for continuous autonomous monitoring. However, these sensors have limited spatial resolution as they are in effect single-point sensors. Satellite remote sensing can be used to provide better spatial coverage at good temporal scales. However, in-situ sensors have a richer temporal scale for a particular point of interest. Work carried out in Galway Bay has combined data from multiple satellite sources and in-situ sensors and investigated the benefits and drawbacks of using multiple sensing modalities for monitoring a marine location.
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results.
Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as the low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in conical-surface definitions ("thick" conical surfaces) which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
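Each Compton cone's half-opening angle follows from the Compton kinematics cos θ = 1 − m_e c² (1/E′ − 1/E₀), where E₀ is the incident photon energy and E′ the scattered energy; finite energy resolution propagates into an angular uncertainty, i.e. the "thick" cones discussed above. A minimal sketch (the event energies are hypothetical):

```python
import math

MEC2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e0_kev, e_deposited_kev):
    """Half-opening angle (rad) of the Compton cone, from the initial
    photon energy and the energy deposited in the first interaction."""
    e_scattered = e0_kev - e_deposited_kev
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_scattered - 1.0 / e0_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy combination")
    return math.acos(cos_theta)

# Hypothetical cascade gamma: 1173 keV photon deposits 300 keV
theta = compton_cone_angle(1173.0, 300.0)
print(f"cone half-angle = {math.degrees(theta):.1f} deg")
```

Perturbing the deposited energy by the detector's energy resolution and recomputing θ gives the angular thickness of the cone surface, which is what ultimately limits the triple-cone intersection accuracy.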
New organophilic kaolin clays based on single-point grafted 3-aminopropyl dimethylethoxysilane.
Zaharia, A; Perrin, F-X; Teodorescu, M; Radu, A-L; Iordache, T-V; Florea, A-M; Donescu, D; Sarbu, A
2015-10-14
In this study, the organophilization procedure of kaolin rocks with a monofunctional ethoxysilane, 3-aminopropyl dimethylethoxysilane (APMS), is described for the first time. The two-step organophilization procedure, including dimethyl sulfoxide intercalation and APMS grafting onto the inner hydroxyl surface of kaolinite (the mineral) layers, was tested for three sources of kaolin rocks (KR, KC and KD) with various morphologies and kaolinite compositions. The load of APMS in the kaolinite interlayer space was higher than that of 3-aminopropyl triethoxysilane (APTS) due to the single-point grafting nature of the organophilization reaction. A higher long-distance order of kaolinite layers with low stacking was obtained with APMS, due to a more controllable organophilization reaction. Last but not least, solid-state (29)Si-NMR tests confirmed the single-point grafting mechanism of APMS, corroborating monodentate fixation on the kaolinite hydroxyl facets, with no contribution of bidentate or tridentate fixation as observed for APTS.
Knox, N C; Weedmark, K A; Conly, J; Ensminger, A W; Hosein, F S; Drews, S J
2017-01-01
An outbreak of Legionnaires' disease occurred in an inner city district in Calgary, Canada. This outbreak spanned a 3-week period in November-December 2012, and a total of eight cases were identified. Four of these cases were critically ill requiring intensive care admission but there was no associated mortality. All cases tested positive for Legionella pneumophila serogroup 1 (LP1) by urinary antigen testing. Five of the eight patients were culture positive for LP1 from respiratory specimens. These isolates were further identified as Knoxville monoclonal subtype and sequence subtype ST222. Whole-genome sequencing revealed that the isolates differed by no more than a single vertically acquired single nucleotide variant, supporting a single point-source outbreak. Hypothesis-based environmental investigation and sampling was conducted; however, a definitive source was not identified. Geomapping of case movements within the affected urban sector revealed a 1·0 km common area of potential exposure, which coincided with multiple active construction sites that used water spray to minimize transient dust. This community point-source Legionnaires' disease outbreak is unique due to its ST222 subtype and occurrence in a relatively dry and cold weather setting in Western Canada. This report suggests community outbreaks of Legionella should not be overlooked as a possibility during late autumn and winter months in the Northern Hemisphere.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas consisted of silt and clay particles less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Analysis of dangerous area of single berth oil tanker operations based on CFD
NASA Astrophysics Data System (ADS)
Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin
2018-04-01
Taking the berthing operations of a single liquid-cargo oil tanker as the research object, we analyzed the theory of VOC diffusion during single-berth tanker operations, built a mesh model of VOC diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOC concentration at five detection points changes with time under specific influencing factors. From the simulated diffusion of VOCs we then delineated the dangerous area of single-berth oil tanker operations, so as to help ensure the safe operation of oil tankers.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
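The sparse-recovery machinery this abstract builds on can be illustrated with a minimal 1-D sketch: an iterative shrinkage-thresholding algorithm (ISTA) solving min ||y - A s||^2 / 2 + lam * ||s||_1, where A is convolution with an assumed Gaussian PSF. All sizes and parameters below are invented for illustration; this is not the authors' code.

```python
import math

def soft(x, t):
    # Soft-thresholding, the proximal operator of the l1-norm.
    return math.copysign(max(abs(x) - t, 0.0), x)

def blur(s, psf, c):
    # Convolve s with a centered symmetric PSF (zero padding); because the
    # PSF is symmetric, the same routine serves as the adjoint operator.
    n = len(s)
    out = [0.0] * n
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(psf):
            j = i + k - c
            if 0 <= j < n:
                acc += w * s[j]
        out[i] = acc
    return out

def ista(y, psf, c, lam=0.01, step=0.03, iters=400):
    # Proximal gradient descent for min ||y - A s||^2 / 2 + lam * ||s||_1;
    # the step size must stay below 1 / ||A||^2 (here ||A|| <= sum(psf) ~ 5).
    s = [0.0] * len(y)
    for _ in range(iters):
        r = [yi - bi for yi, bi in zip(y, blur(s, psf, c))]
        g = blur(r, psf, c)
        s = [soft(si + step * gi, step * lam) for si, gi in zip(s, g)]
    return s
```

Re-running such a recovery with a deliberately mismatched PSF (e.g. a wrong width) degrades the recovered support, which is essentially the robustness question the abstract investigates.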
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
Multiple window spatial registration accuracy characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and the 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
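The Bland-Altman analysis behind the interchangeability claim reduces to the bias of paired differences and its 95% limits of agreement; a generic sketch with made-up numbers (not the study's measurements):

```python
import math

def bland_altman_limits(a, b):
    # Bland-Altman agreement analysis of two measurement methods: the mean
    # paired difference (bias) and 95% limits of agreement (bias +/- 1.96 SD).
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If the differences scatter within limits that are narrow enough for the application, the two methods can be used interchangeably, which is the conclusion drawn here for the 133Ba and 67Ga procedures.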
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given a point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and panel and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
Using Deep Space Climate Observatory Measurements to Study the Earth as an Exoplanet
NASA Astrophysics Data System (ADS)
Jiang, Jonathan H.; Zhai, Albert J.; Herman, Jay; Zhai, Chengxing; Hu, Renyu; Su, Hui; Natraj, Vijay; Li, Jiazheng; Xu, Feng; Yung, Yuk L.
2018-07-01
Even though it was not designed as an exoplanetary research mission, the Deep Space Climate Observatory (DSCOVR) has been opportunistically used for a novel experiment in which Earth serves as a proxy exoplanet. More than 2 yr of DSCOVR Earth images were employed to produce time series of multiwavelength, single-point light sources in order to extract information on planetary rotation, cloud patterns, surface type, and orbit around the Sun. In what follows, we assume that these properties of the Earth are unknown and instead attempt to derive them from first principles. These conclusions are then compared with known data about our planet. We also used the DSCOVR data to simulate phase-angle changes, as well as the minimum data collection rate needed to determine the rotation period of an exoplanet. This innovative method of using the time evolution of a multiwavelength, reflected single-point light source can be deployed for retrieving a range of intrinsic properties of an exoplanet around a distant star.
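Extracting a rotation period from a single-point light curve can be illustrated with a simple autocorrelation-based estimator; the sampling cadence and period below are invented for the example and are not DSCOVR values.

```python
import math

def rotation_period(flux, dt, max_lag=None):
    # Estimate a period from a single-point light curve as the first
    # prominent local maximum (ACF > 0.5) of its autocorrelation function.
    n = len(flux)
    mean = sum(flux) / n
    x = [f - mean for f in flux]
    var = sum(v * v for v in x) / n
    max_lag = max_lag or n // 2
    acf = [sum(x[i] * x[i + lag] for i in range(n - lag)) / ((n - lag) * var)
           for lag in range(1, max_lag)]
    for i in range(1, len(acf) - 1):
        if acf[i] > 0.5 and acf[i] >= acf[i - 1] and acf[i] >= acf[i + 1]:
            return (i + 1) * dt  # acf[i] corresponds to lag i + 1
    return None
```

The minimum data collection rate question mentioned in the abstract maps directly onto this sketch: as dt grows, the ACF peak broadens and eventually aliases, and the period can no longer be recovered.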
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
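The detection-rate and accuracy criteria used in such an evaluation can be made concrete with a simple matching routine; this is a generic sketch of greedy nearest-pair matching, not the challenge's actual scoring code.

```python
import math

def score_localizations(truth, found, tol):
    # Greedily match ground-truth and recovered 2-D positions within a
    # tolerance radius, then report detection rate (recall), precision,
    # and RMSE of the matched pairs, generic versions of common metrics.
    cand = sorted(
        (math.hypot(t[0] - f[0], t[1] - f[1]), i, j)
        for i, t in enumerate(truth) for j, f in enumerate(found)
    )
    used_t, used_f, sq = set(), set(), []
    for d, i, j in cand:
        if d <= tol and i not in used_t and j not in used_f:
            used_t.add(i)
            used_f.add(j)
            sq.append(d * d)
    n = len(sq)
    rmse = math.sqrt(sum(sq) / n) if n else float("nan")
    return n / len(truth), n / len(found), rmse
```

Greedy matching by increasing distance is one reasonable convention; stricter evaluations use optimal (Hungarian) assignment, which can only improve the reported scores.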
Improved source inversion from joint measurements of translational and rotational ground motions
NASA Astrophysics Data System (ADS)
Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.
2017-12-01
Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or another constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular show significant improvement. Besides synthetic data for full station networks, we also tested sparse-network and single-station cases.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by the sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and on-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
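The flavor of such a virtual-field objective can be sketched as follows: each sensor pair's arrival-time difference defines a hyperbola, each hyperbola is smeared into a smooth Gaussian "field", and the source is the grid point where the summed field peaks. Because every term is bounded, one badly picked arrival cannot dominate the objective the way it does in a least-squares misfit. The 2-D geometry, velocity, and field width below are invented for illustration and are not the paper's formulation.

```python
import math

def virtual_field(p, sensors, arrivals, v, sigma=0.05):
    # Sum of bounded Gaussian "fields" centered on the hyperbolas that each
    # sensor pair's arrival-time difference defines; a badly picked arrival
    # merely flattens its own terms instead of dominating the objective.
    score = 0.0
    n = len(sensors)
    for i in range(n):
        for j in range(i + 1, n):
            di = math.hypot(p[0] - sensors[i][0], p[1] - sensors[i][1])
            dj = math.hypot(p[0] - sensors[j][0], p[1] - sensors[j][1])
            mis = (di - dj) / v - (arrivals[i] - arrivals[j])
            score += math.exp(-mis * mis / (2.0 * sigma * sigma))
    return score

def locate(sensors, arrivals, v, lo=0.0, hi=10.0, step=0.1):
    # Maximize the virtual field over a coarse 2-D grid.
    pts = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    return max(((virtual_field((x, y), sensors, arrivals, v), (x, y))
                for x in pts for y in pts))[1]
```

In the usage below, one arrival carries a gross 2 s picking error, yet the remaining consistent pairs still peak at the true location.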
Initial application of a dual-sweep streak camera to the Duke storage ring OK-4 source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A.H.; Yang, B.X.; Litvinenko, V.
1997-08-01
The visible and UV spontaneous emission radiation (SER) from the Duke OK-4 wiggler has been used with a Hamamatsu C5680 dual-sweep streak camera to characterize the stored electron beams. Particle beam energies of 270 and 500 MeV in the Duke storage ring were used in this initial application with the OK-4 adjusted to generate wavelengths from 500 nm to near 200 nm. The OK-4 magnetic system with its 68 periods provided a much stronger radiation source than a nearby bending magnet source point. Sensitivity to single-bunch, single-turn SER was shown down to 4 µA beam current at λ = 450 nm. The capability of seeing second passes in the FEL resonator at a wavelength near 200 nm was used to assess the cavity length versus orbit length. These tests (besides supporting preparation for UV-visible SR FEL startups) are also relevant to possible diagnostics techniques for single-pass FEL prototype facilities.
High dose rate brachytherapy source measurement intercomparison.
Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette
2017-06-01
This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR 192Ir source. Two separate groups (consisting of 15 centres) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single 192Ir source using their own equipment and local protocols. Results were compared to the 192Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results, with the maximum deviation in measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well-chambers used for 192Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Possible Very Distant or Optically Dark Cluster of Galaxies
NASA Technical Reports Server (NTRS)
Vikhlinin, Alexey; Mushotzky, Richard (Technical Monitor)
2003-01-01
The goal of this proposal was an XMM follow-up observation of the extended X-ray source detected in our ROSAT PSPC cluster survey. Approximately 95% of extended X-ray sources found in the ROSAT data were optically identified as clusters of galaxies. However, we failed to find any optical counterparts for C10952-0148. Two possibilities remained prior to the XMM observation: (1) this was a very distant or optically dark cluster of galaxies, too faint in the optical, in which case XMM would easily detect extended X-ray emission; or (2) this was a group of point-like sources, blurred into a single extended source in the ROSAT data, but easily resolvable by XMM due to its better angular resolution. The XMM data have settled the case: C10952-0148 is a group of 7 relatively bright point sources located within 1 square arcmin. All but one of the sources have no optical counterparts down to I=22. Potentially, this can be an interesting group of quasars at high redshift. We are planning further optical and infrared follow-up of this system.
A technique for treating local breast cancer using a single set-up point and asymmetric collimation.
Rosenow, U F; Valentine, E S; Davis, L W
1990-07-01
Using both pairs of asymmetric jaws of a linear accelerator, local-regional breast cancer may be treated from a single set-up point. This point is placed at the abutment of the supraclavicular fields with the medial and lateral tangential fields. Positioning the jaws to create a half-beam superiorly permits treatment of the supraclavicular field. Positioning both jaws asymmetrically at midline to define a single beam in the inferoanterior quadrant permits treatment of the breast from medial and lateral tangents. The highest possible matching accuracy between the supraclavicular and tangential fields is inherently provided by this technique. For treatment of all fields at 100 cm source-to-axis distance (SAD), the lateral placement and depth of the set-up point may be determined by simulation and simple trigonometry. We elaborate on the clinical procedure. For the technologists, treatment of all fields from a single set-up point is simple and efficient. Since the tissue at the superior border of the tangential fields is generally firmer than in mid-breast, greater accuracy in day-to-day set-up is permitted. This technique eliminates the need for table angles even when only tangential fields are planned. Because of half-beam collimation, the limit to the tangential field length is 20 cm. Means will be suggested to overcome this limitation in the few cases where it occurs. Another modification is suggested for linear accelerators with only one independent pair of jaws.
A double-correlation tremor-location method
NASA Astrophysics Data System (ADS)
Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur
2017-02-01
A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records from multiple triplets of seismographs, back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
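The back-projection machinery can be illustrated with a simplified, real-valued triple-product stack (the paper works with complex, doubly-correlated records; treat everything below, station layout, velocity, and signal alike, as invented for the example): align each trace by its predicted travel time to a candidate grid point, multiply the three aligned records, and sum.

```python
import math

def triple_stack(traces, stations, v, dt, lo, hi, step):
    # Back-project a triple-product stack onto a 2-D grid: shift each trace
    # by its predicted travel time to the candidate point, multiply the
    # three aligned records sample by sample, and sum. Peaks in the modulus
    # of the stack mark inferred source locations.
    pts = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    n = len(traces[0])
    best = None
    for x in pts:
        for y in pts:
            shifts = [int(round(math.hypot(x - sx, y - sy) / (v * dt)))
                      for sx, sy in stations]
            m = max(shifts)
            s = sum(traces[0][t + shifts[0]] * traces[1][t + shifts[1]] *
                    traces[2][t + shifts[2]] for t in range(n - m))
            if best is None or abs(s) > best[0]:
                best = (abs(s), (x, y))
    return best[1]
```

Requiring all three traces to line up simultaneously is what focuses the map relative to a single pairwise correlation, where any point on an entire hyperbola can score highly.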
AN ACCURACY ASSESSMENT OF MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS
Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of conc...
Excess TDS/Major Ionic Stress/Elevated Conductivities appeared increasing in streams in Central and Eastern Appalachia. Direct discharges from permitted point sources and regional interest in setting eco-based effluent guidelines/aquatic life criteria, as well as potential differ...
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
Appraisal of an Array TEM Method in Detecting a Mined-Out Area Beneath a Conductive Layer
NASA Astrophysics Data System (ADS)
Li, Hai; Xue, Guo-qiang; Zhou, Nan-nan; Chen, Wei-ying
2015-10-01
The transient electromagnetic (TEM) method has been extensively used for the detection of mined-out areas in China for the past few years. In cases where the mined-out area is overlain by a conductive layer, detection of the target layer is difficult with a traditional loop-source TEM method. In order to detect the target layer under this condition, this paper presents a newly developed array TEM method, which uses a grounded wire source. The underground current density distribution and the responses of the grounded-wire-source TEM configuration are modeled to demonstrate that the target layer is detectable under this condition. The 1D OCCAM inversion routine is applied to the synthetic single-station data and the common-middle-point gather. The result reveals that the electric source TEM method is capable of recovering the resistive target layer beneath the conductive overburden. By contrast, the conductive target layer cannot be recovered unless the distance between the target layer and the conductive overburden is large. Compared with the inversion result of the single-station data, the inversion of the common-middle-point gather can better recover the resistivity of the target layer. Finally, a case study illustrates that the array TEM method was successfully applied to recover a water-filled mined-out area beneath a conductive overburden.
A Variable Frequency, Mis-Match Tolerant, Inductive Plasma Source
NASA Astrophysics Data System (ADS)
Rogers, Anthony; Kirchner, Don; Skiff, Fred
2014-10-01
Presented here is a survey and analysis of an inductively coupled, magnetically confined, singly ionized argon plasma generated by a square-wave, variable-frequency plasma source. The helicon-style antenna is driven directly by the class "D" amplifier without a matching network for increased efficiency, while maintaining independent control of frequency and applied power at the feed point. The survey is compared to similar data taken using a traditional exciter / power amplifier / matching network source. Specifically, the flexibility of this plasma source in terms of independent control of electron plasma temperature and density is discussed in comparison to traditional source arrangements. Supported by US DOE Grant DE-FG02-99ER54543.
NuSTAR Hard X-Ray Survey of the Galactic Center Region. II. X-Ray Point Sources
NASA Technical Reports Server (NTRS)
Hong, Jaesub; Mori, Kaya; Hailey, Charles J.; Nynka, Melania; Zhang, Shou; Gotthelf, Eric; Fornasini, Francesca M.; Krivonos, Roman; Bauer, Franz; Perez, Kerstin;
2016-01-01
We present the first survey results of hard X-ray point sources in the Galactic Center (GC) region by NuSTAR. We have discovered 70 hard (3-79 keV) X-ray point sources in a 0.6 deg² region around Sgr A* with a total exposure of 1.7 Ms, and 7 sources in the Sgr B2 field with 300 ks. We identify clear Chandra counterparts for 58 NuSTAR sources and assign candidate counterparts for the remaining 19. The NuSTAR survey reaches X-ray luminosities of approximately 4×10³² and 8×10³² erg/s at the GC (8 kpc) in the 3-10 and 10-40 keV bands, respectively. The source list includes three persistent luminous X-ray binaries (XBs) and the likely runaway pulsar called the Cannonball. New source-detection significance maps reveal a cluster of hard (>10 keV) X-ray sources near the Sgr A diffuse complex with no clear soft X-ray counterparts. The severe extinction observed in the Chandra spectra indicates that all the NuSTAR sources are in the central bulge or are of extragalactic origin. Spectral analysis of relatively bright NuSTAR sources suggests that magnetic cataclysmic variables constitute a large fraction (>40%-60%). Both the spectral analysis and the logN-logS distributions of the NuSTAR sources indicate that their X-ray spectra should have kT > 20 keV on average for a single-temperature thermal plasma model, or an average photon index of Γ = 1.5-2 for a power-law model. These findings suggest that the GC X-ray source population may contain a larger fraction of XBs with high plasma temperatures than the field population.
The Chandra Source Catalog: Automated Source Correlation
NASA Astrophysics Data System (ADS)
Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed different numbers of times, at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as when one observation finds a single, large source and another identifies multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
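The correlation step described above (matching a new detection to an existing master-catalog entry at the same sky position, or creating a new entry) can be illustrated with a toy sketch. This is a generic nearest-neighbour position match, not the actual CSC pipeline logic; the dictionary fields and the match radius are hypothetical.

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def correlate(master, detections, match_radius_deg):
    """Toy master-source update: match each new detection to the nearest
    master entry within match_radius_deg, else create a new master source."""
    for det in detections:
        best, best_sep = None, match_radius_deg
        for src in master:
            sep = angular_sep(src["ra"], src["dec"], det["ra"], det["dec"])
            if sep <= best_sep:
                best, best_sep = src, sep
        if best is None:
            master.append({"ra": det["ra"], "dec": det["dec"], "n_obs": 1})
        else:
            best["n_obs"] += 1
    return master
```

Two detections of the same source from different observing passes then collapse into a single master entry, while a detection elsewhere on the sky opens a new one.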
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2010-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.
2014-01-01
Non-dimensional scaling relationships are used to understand various cratering processes, including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point source. So long as the process of interest occurs beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and the resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and the average grain size of the target are varied.
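The point-source scaling the abstract refers to is conventionally expressed through non-dimensional groups. The sketch below shows one common convention (gravity-scaled size and cratering efficiency); the power-law constants are illustrative placeholders, not fitted values from this work, and some authors include extra numerical factors in the definitions.

```python
def pi_groups(g, a, U, rho_t, m, V):
    """Standard point-source scaling groups (one common convention).

    g: gravity [m/s^2], a: projectile radius [m], U: impact speed [m/s],
    rho_t: target density [kg/m^3], m: projectile mass [kg], V: crater volume [m^3].
    """
    pi2 = g * a / U ** 2   # gravity-scaled size (inverse Froude number)
    piV = rho_t * V / m    # cratering efficiency
    return pi2, piV

def predicted_efficiency(pi2, K1=0.2, alpha=0.5):
    """Power-law fit piV = K1 * pi2**(-alpha); K1, alpha are illustrative."""
    return K1 * pi2 ** (-alpha)
```

Smaller pi2 (faster or smaller impactors in weaker gravity) predicts larger cratering efficiency, which is the trend such experiments test.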
Single-Electron Detection and Spectroscopy via Relativistic Cyclotron Radiation.
Asner, D M; Bradley, R F; de Viveiros, L; Doe, P J; Fernandes, J L; Fertl, M; Finn, E C; Formaggio, J A; Furse, D; Jones, A M; Kofron, J N; LaRoque, B H; Leber, M; McBride, E L; Miller, M L; Mohanmurthy, P; Monreal, B; Oblath, N S; Robertson, R G H; Rosenberg, L J; Rybka, G; Rysewyk, D; Sternberg, M G; Tedeschi, J R; Thümmler, T; VanDevender, B A; Woods, N L
2015-04-24
It has been understood since 1897 that accelerating charges must emit electromagnetic radiation. Although first derived in 1904, cyclotron radiation from a single electron orbiting in a magnetic field had never been observed directly. We demonstrate single-electron detection in a novel radio-frequency spectrometer. The relativistic shift in the cyclotron frequency permits a precise electron energy measurement. Precise beta electron spectroscopy from gaseous radiation sources is a key technique in modern efforts to measure the neutrino mass via the tritium decay end point, and this work demonstrates a fundamentally new approach to precision beta spectroscopy for future neutrino mass experiments.
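The relativistic shift underlying this measurement is standard physics: the cyclotron frequency is f = eB / (2πγmₑ), so the frequency drops as the electron's kinetic energy (hence γ) rises. A minimal sketch, with CODATA constants (this is an illustration of the physics, not the experiment's analysis code):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]
C = 299792458.0              # speed of light [m/s]

def cyclotron_frequency(B_tesla, kinetic_energy_eV):
    """Relativistic cyclotron frequency f = eB / (2*pi*gamma*m_e)."""
    gamma = 1.0 + kinetic_energy_eV * E_CHARGE / (M_E * C ** 2)
    return E_CHARGE * B_tesla / (2.0 * math.pi * gamma * M_E)
```

In a 1 T field the rest-mass frequency is about 28 GHz, and an electron near the tritium end point (≈18.6 keV) radiates a few percent lower, which is why measuring the frequency yields the energy.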
NASA Astrophysics Data System (ADS)
Khramtsov, Igor A.; Vyshnevyy, Andrey A.; Fedyanin, Dmitry Yu.
2018-03-01
Practical applications of quantum information technologies exploiting the quantum nature of light require efficient and bright true single-photon sources which operate under ambient conditions. Currently, point defects in the crystal lattice of diamond known as color centers have taken the lead in the race for the most promising quantum system for practical non-classical light sources. This work is focused on a different quantum optoelectronic material, namely a color center in silicon carbide, and reveals the physics behind the process of single-photon emission from color centers in SiC under electrical pumping. We show that color centers in silicon carbide can be far superior to any other quantum light emitter under electrical control at room temperature. Using a comprehensive theoretical approach and rigorous numerical simulations, we demonstrate that at room temperature, the photon emission rate from a p-i-n silicon carbide single-photon emitting diode can exceed 5 Gcounts/s, which is higher than what can be achieved with electrically driven color centers in diamond or epitaxial quantum dots. These findings lay the foundation for the development of practical photonic quantum devices which can be produced in a well-developed CMOS compatible process flow.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategies. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
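The quantities involved can be illustrated with the standard point-source point-lens magnification and a crude finite-source estimate obtained by averaging over a few limb points. This is a generic quadrature for illustration only, not the paper's optimized quadrupole/hexadecapole routines.

```python
import math

def A_point(u):
    """Point-source point-lens magnification as a function of
    the lens-source separation u (in Einstein radii)."""
    u2 = u * u
    return (u2 + 2.0) / (u * math.sqrt(u2 + 4.0))

def A_finite(u, rho):
    """Crude finite-source estimate for a source of angular radius rho:
    average A over the source centre and four limb points
    (a generic quadrature, not the paper's optimized scheme)."""
    pts = [(u, 0.0)]
    for k in range(4):
        phi = math.pi * k / 2.0
        pts.append((u + rho * math.cos(phi), rho * math.sin(phi)))
    return sum(A_point(math.hypot(x, y)) for x, y in pts) / len(pts)
```

For a small source the finite-source value approaches the point-source one; the cost of such corrections relative to a single `A_point` call is exactly what motivates the faster routines described above.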
Performance of Four-Leg VSC based DSTATCOM using Single Phase P-Q Theory
NASA Astrophysics Data System (ADS)
Jampana, Bangarraju; Veramalla, Rajagopal; Askani, Jayalaxmi
2017-02-01
This paper presents single-phase P-Q theory for a four-leg VSC-based distribution static compensator (DSTATCOM) in the distribution system. The proposed DSTATCOM maintains unity power factor at the source, provides zero voltage regulation, eliminates current harmonics, and performs load balancing and neutral-current compensation. The advantage of using a four-leg VSC-based DSTATCOM is that it eliminates the isolated/non-isolated transformer connection at the point of common coupling (PCC) for neutral-current compensation. Eliminating the transformer connection at the PCC with the proposed topology reduces the cost of the DSTATCOM. The single-phase P-Q theory control algorithm is used to extract the fundamental components of the active and reactive currents for the generation of reference source currents, based on the indirect current control method. The proposed DSTATCOM is modelled and the results are validated for various consumer loads under unity power factor and zero voltage regulation modes in the MATLAB R2013a environment using the SimPowerSystems toolbox.
Electric vehicle system for charging and supplying electrical power
Su, Gui Jia
2010-06-08
A power system that provides power between an energy storage device, an external charging-source/load, an onboard electrical power generator, and a vehicle drive shaft. The power system has at least one energy storage device electrically connected across a dc bus, at least one filter capacitor leg having at least one filter capacitor electrically connected across the dc bus, at least one power inverter/converter electrically connected across the dc bus, and at least one multiphase motor/generator having stator windings electrically connected at one end to form a neutral point and electrically connected on the other end to one of the power inverter/converters. A charging-sourcing selection socket is electrically connected to the neutral points and the external charging-source/load. At least one electronics controller is electrically connected to the charging-sourcing selection socket and at least one power inverter/converter. The switch legs in each of the inverter/converters selected by the charging-source/load socket collectively function as a single switch leg. The motor/generators function as an inductor.
2009-02-01
All Sky Survey (2MASS) coordinates of the nucleus were used to verify the coordinates of each observation. The SH and LH staring observations include...isolate the nuclear region in the mapping observations, fluxes were extracted from a single slit coinciding with the radio or 2MASS nuclear...presence of a hard X-ray point source coincident with either the radio or 2MASS nucleus and log(L_X) 38 erg s⁻¹. The resulting subsample consists of
Churn, M; Jones, B
1999-01-01
A small proportion of patients with adenocarcinoma of the endometrium are inoperable by virtue of severe concurrent medical conditions, gross obesity or advanced stage disease. They can be treated with primary radiotherapy with either curative or palliative intent. We report 37 such patients treated mainly with a combination of external beam radiotherapy and intracavitary brachytherapy using a single line source technique. The 5-year disease-specific survival for nonsurgically staged patients was 68.4% for FIGO Stages I and II and 33.3% for Stages III and IV. The incidence of late morbidity was acceptably low. Using the Franco-Italian Glossary, there was 27.0% grade 1 but no grade 2-4 bladder toxicity. For the rectum the rates were 18.9% grade 1, 5.4% grade 2, 2.7% grade 3, and no grade 4 toxicity. Methods of optimizing the dose distribution of the brachytherapy by means of variation of treatment length, radioactive source positions, and prescription point according to tumour bulk and individual anatomy are discussed. The biologically equivalent doses (BED) for combined external beam radiotherapy and brachytherapy were calculated to be in the range of 78-107 Gy₃ or 57-75 Gy₁₀ at point 'A' and appear adequate for the control of Stage I cancers.
Laser plasma x-ray source for ultrafast time-resolved x-ray absorption spectroscopy
Miaja-Avila, L.; O'Neil, G. C.; Uhlig, J.; ...
2015-03-02
We describe a laser-driven x-ray plasma source designed for ultrafast x-ray absorption spectroscopy. The source is comprised of a 1 kHz, 20 W, femtosecond pulsed infrared laser and a water target. We present the x-ray spectra as a function of laser energy and pulse duration. Additionally, we investigate the plasma temperature and photon flux as we vary the laser energy. We obtain a 75 μm FWHM x-ray spot size, containing ~10⁶ photons/s, by focusing the produced x-rays with a polycapillary optic. Since the acquisition of x-ray absorption spectra requires the averaging of measurements from >10⁷ laser pulses, we also present data on the source stability, including single pulse measurements of the x-ray yield and the x-ray spectral shape. In single pulse measurements, the x-ray flux has a measured standard deviation of 8%, where the laser pointing is the main cause of variability. Further, we show that the variability in x-ray spectral shape from single pulses is low, thus justifying the combining of x-rays obtained from different laser pulses into a single spectrum. Finally, we show a static x-ray absorption spectrum of a ferrioxalate solution as detected by a microcalorimeter array. Altogether, our results demonstrate that this water-jet based plasma source is a suitable candidate for laboratory-based time-resolved x-ray absorption spectroscopy experiments.
One Source Training: Iowa Community Colleges Leverage Resources through Statewide Collaboration
ERIC Educational Resources Information Center
Saylor, Collette
2006-01-01
Locally governed Iowa Community Colleges are very effective at meeting the needs of local constituencies. However, this focus on local needs can hinder collaborative efforts. The Iowa Associations of Community College Trustees and Presidents determined there was a need for a single point of contact for the development and purchase of training…
75 FR 30842 - Statutorily Mandated Single Source Award Program Name: National Indian Health Board
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-02
... health care advocacy to IHS and HHS based on Tribal input through a broad based consumer network. The.... To assure that health care advocacy is based on Tribal input through a broad-based consumer network... maintenance. B. Organizational Capabilities and Qualifications (30 Points) (1) Describe the organizational...
The Application of Function Points to Predict Source Lines of Code for Software Development
1992-09-01
there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singly. Using all the available
Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
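The surrogate-modelling workflow described above (Latin hypercube sampling of the design space, then a response-surface fit to the sampled objective values) can be sketched generically. This is a minimal illustration of the technique for two design variables, not the authors' CFD-driven implementation; the sample count and basis follow the abstract's twelve-point, two-variable setup.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sampling in the unit cube: one sample per
    stratum in each dimension, independently shuffled per column."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return u

def fit_quadratic_response_surface(X, y):
    """Least-squares fit of a full quadratic surface in two design variables:
    y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1^2 + c5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```

The fitted surface then stands in for the expensive unsteady simulations when searching for the optimum design point.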
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool, applying an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: it can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors no greater than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of prior experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
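The approach described above can be sketched end to end: an analytic solution of the 1-D unsteady advection-dispersion equation forms the forward model, a sum-of-squared-residuals objective compares it with observations, and a small genetic algorithm searches for the release mass and position. This is a minimal illustrative sketch, not the paper's model; the river parameters (velocity, dispersion, decay, cross-section) and GA settings are hypothetical.

```python
import math, random

def concentration(x, t, M, x0, u=0.5, D=2.0, k=1e-5, A=20.0):
    """Analytic solution of the 1-D unsteady advection-dispersion equation
    for an instantaneous point release of mass M at x0, with velocity u,
    dispersion D, first-order decay k, and cross-sectional area A."""
    if t <= 0:
        return 0.0
    arg = -((x - x0 - u * t) ** 2) / (4.0 * D * t)
    return M / (A * math.sqrt(4.0 * math.pi * D * t)) * math.exp(arg) * math.exp(-k * t)

def objective(params, observations):
    """Sum of squared residuals between modelled and observed concentrations."""
    M, x0 = params
    return sum((concentration(x, t, M, x0) - c) ** 2 for x, t, c in observations)

def basic_ga(observations, bounds, pop_size=10, generations=200, rng=None):
    """Minimal real-coded GA (elitist selection, blend crossover, Gaussian
    mutation) minimising the objective over (M, x0)."""
    rng = rng or random.Random(0)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(lo[i], hi[i]) for i in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: objective(p, observations))
        elite = pop[: pop_size // 2]           # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            j = rng.randrange(2)               # mutate one gene
            child[j] += rng.gauss(0.0, 0.05 * (hi[j] - lo[j]))
            child[j] = min(max(child[j], lo[j]), hi[j])
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: objective(p, observations))
```

With synthetic observations generated from a known release, the GA recovers parameters that fit far better than an arbitrary guess, which is the essence of the identification model.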
Production Strategies and Applications of Microbial Single Cell Oils
Ochsenreither, Katrin; Glück, Claudia; Stressler, Timo; Fischer, Lutz; Syldatk, Christoph
2016-01-01
Polyunsaturated fatty acids (PUFAs) of the ω-3 and ω-6 classes (e.g., α-linolenic acid, linoleic acid) are essential for maintaining biofunctions in mammals, including humans. Because humans cannot synthesize these essential fatty acids, they must be taken up from different food sources. Classical sources of these fatty acids are porcine liver and fish oil. However, microbial lipids, or single cell oils, produced by oleaginous microorganisms such as algae, fungi and bacteria, are a promising source as well. These single cell oils can serve as the basis for many valuable chemicals, with applications not only in nutrition but also in fuels, and are therefore an ideal basis for a bio-based economy. A crucial point for establishing the utilization of microbial lipids is the cost-effective production and purification of fuels or products of higher value. Fermentative production can be realized by submerged (SmF) or solid state fermentation (SSF). The yield and composition of the obtained microbial lipids depend on the type of fermentation and the particular conditions (e.g., medium, pH value, temperature, aeration, nitrogen source). From an economic point of view, waste or by-product streams can be used as cheap and renewable carbon and nitrogen sources. In general, downstream processing costs are one of the major obstacles to be overcome for the full economic viability of microbial lipids. For the extraction of lipids from microbial biomass, cell disruption is most important, because the efficiency of cell disruption directly influences subsequent downstream operations and overall extraction efficiencies. A multitude of cell disruption and lipid extraction methods are available, conventional as well as newly emerging, which will be described and discussed in terms of large-scale applicability, their potential in a modern biorefinery and their influence on product quality.
Furthermore, an overview is given of applications of microbial lipids and derived fatty acids, with emphasis on food applications. PMID:27761130
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
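The amplification referred to above follows from the local shear and convergence through the standard lensing Jacobian. A minimal sketch of that textbook relation (illustrative, not the paper's statistical machinery):

```python
def magnification(kappa, gamma):
    """Magnification of a source seen through convergence kappa and total
    shear gamma: mu = 1 / |(1 - kappa)^2 - gamma^2|."""
    det = (1.0 - kappa) ** 2 - gamma ** 2
    return 1.0 / abs(det)
```

For an ensemble of point masses the convergence vanishes at a random point, so the magnification reduces to 1 / |1 - γ²| and is driven entirely by the shear distribution; for wide Gaussian lenses the opposite limit applies.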
NASA Astrophysics Data System (ADS)
Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han
2016-10-01
One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for reconstructing the position of a light source. The model is based on the dependence of the luminance of a small diffuse surface directly illuminated by a point-like source placed a short distance from the observer or camera. The advantage of the computational model is its ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.
Song, Hajun; Hwang, Sejin; Song, Jong-In
2017-05-15
This study presents an optical frequency switching scheme for a high-speed broadband terahertz (THz) measurement system based on the photomixing technique. The proposed system can achieve high-speed broadband THz measurements using narrow optical frequency scanning of a tunable laser source combined with a wavelength-switchable laser source. In addition, this scheme can provide a larger output power of an individual THz signal compared with that of a multi-mode THz signal generated by multiple CW laser sources. A swept-source THz tomography system implemented with a two-channel wavelength-switchable laser source achieves a reduced time for acquisition of a point spread function and a higher depth resolution in the same amount of measurement time compared with a system with a single optical source.
Britz, Alexander; Assefa, Tadesse A; Galler, Andreas; Gawelda, Wojciech; Diez, Michael; Zalden, Peter; Khakhulin, Dmitry; Fernandes, Bruno; Gessler, Patrick; Sotoudi Namin, Hamed; Beckmann, Andreas; Harder, Manuel; Yavaş, Hasan; Bressler, Christian
2016-11-01
The technical implementation of a multi-MHz data acquisition scheme for laser-X-ray pump-probe experiments with pulse-limited temporal resolution (100 ps) is presented. Such techniques are very attractive for benefiting from the high repetition rates of X-ray pulses delivered by advanced synchrotron radiation sources. Exploiting a synchronized 3.9 MHz laser excitation source, experiments in 60-bunch mode (7.8 MHz) at beamline P01 of the PETRA III storage ring are performed. Here, molecular systems in liquid solutions are excited by the pulsed laser source and the total X-ray fluorescence yield (TFY) from the sample is recorded using silicon avalanche photodiode detectors (APDs). The subsequent digitizer card samples the APD signal traces in 0.5 ns steps with 12-bit resolution. These traces are then processed to deliver an integrated value for each recorded single X-ray pulse intensity, sorted into bins according to whether or not the laser excited the sample. For each subgroup the recorded single-shot values are averaged over ∼10⁷ pulses to deliver a mean TFY value with its standard error for each data point, e.g. at a given X-ray probe energy. The sensitivity reaches down to the shot-noise limit, and signal-to-noise ratios approaching 1000 are achievable in only a few seconds of collection time per data point. The dynamic range covers 100 photons pulse⁻¹ and is only technically limited by the utilized APD.
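The sorting-and-averaging step described above (binning single-shot yields into laser-on and laser-off groups, then reporting a mean with its standard error per group) can be sketched directly. This is a generic illustration of the statistics, not the beamline's digitizer firmware; it assumes each pulse record carries a yield value and a laser-on flag.

```python
import math

def sort_and_average(pulses):
    """Sort single-shot fluorescence yields into laser-on / laser-off bins
    and return {laser_on: (mean, standard_error)} for each bin.
    Requires at least two pulses per bin for the sample variance."""
    bins = {True: [], False: []}
    for yield_value, laser_on in pulses:
        bins[laser_on].append(yield_value)
    stats = {}
    for key, values in bins.items():
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
        stats[key] = (mean, math.sqrt(var / n))               # standard error
    return stats
```

Averaged over ~10⁷ pulses per bin, the standard error shrinks toward the shot-noise limit, which is what makes the few-second-per-point acquisition quoted above possible.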
Shuttle-promoted nano-mechanical current switch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Taegeun, E-mail: tsong@ictp.it; Kiselev, Mikhail N.; Gorelik, Leonid Y.
2015-09-21
We investigate electron shuttling in a three-terminal nanoelectromechanical device built on a movable metallic rod oscillating between two drains. The device shows a double-well-shaped electromechanical potential tunable by a source-drain bias voltage. Four stationary regimes controllable by the bias are found for this device: (i) a single stable fixed point, (ii) two stable fixed points, (iii) two limit cycles, and (iv) a single limit cycle. In the presence of a perpendicular magnetic field, the Lorentz force makes possible switching from one electromechanical state to another. A mechanism of tunable transitions between the various stable regimes, based on the interplay between voltage-controlled electromechanical instability and magnetically controlled switching, is suggested. The switching phenomenon is implemented to achieve both a reliable active current switch and the sensing of small variations of the magnetic field.
X-ray imaging crystal spectrometer for extended X-ray sources
Bitter, Manfred L.; Fraenkel, Ben; Gorman, James L.; Hill, Kenneth W.; Roquemore, A. Lane; Stodiek, Wolfgang; von Goeler, Schweickhard E.
2001-01-01
Spherically or toroidally curved, double-focusing crystals are used in a spectrometer for X-ray diagnostics of an extended X-ray source, such as a hot plasma produced in a tokamak fusion experiment, to provide spatially and temporally resolved data on plasma parameters using the imaging properties for Bragg angles near 45°. For a Bragg angle of 45°, the spherical crystal focuses a bundle of nearly parallel X-rays (whose cross section is determined by the cross section of the crystal) from the plasma to a point on a detector, with parallel rays inclined to the main plane of diffraction focused to different points on the detector. Thus, it is possible to radially image the plasma X-ray emission at different wavelengths simultaneously with a single crystal.
Anomalous behavior of 1/f noise in graphene near the charge neutrality point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeshita, Shunpei; Tanaka, Takahiro; Arakawa, Tomonori
2016-03-07
We investigate the noise in single-layer graphene devices from equilibrium to far from equilibrium and find that the 1/f noise shows an anomalous dependence on the source-drain bias voltage (V_SD). While Hooge's relation does not hold around the charge neutrality point, we find that it is recovered in the very low V_SD region. We propose that depinning of the electron-hole puddles is induced at finite V_SD, which may explain this anomalous noise behavior.
Measurement of thickness or plate velocity using ambient vibrations.
Ing, Ros K; Etaix, Nicolas; Leblanc, Alexandre; Fink, Mathias
2010-06-01
Assuming the Green's function is linear with respect to the boundary conditions, it is demonstrated that flexural waves detected by a point receiver and by a circular array of point receivers centered on that receiver are proportional, regardless of the location of the source and the geometry of the plate. Therefore, the plate velocity or thickness can be determined from the measurement of ambient vibrations without using any emitter. Experimental results obtained with a plate of irregular geometry, excited with a single transducer or a remote loudspeaker, are shown to verify the theoretical approach.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420...) Cold rolling mills—(1) Recirculation—single stand. Subpart J Pollutant or pollutant property BCT...) (1) 1 Within the range of 6.0 to 9.0. (b) Cold worked pipe and tube—(1) Using water. Subpart J...
Dual Optical Comb LWIR Source and Sensor
2017-10-12
[List-of-figures excerpt] Figure 39: the locking loop controls only one parameter, whereas there are two free-running parameters to control. Figure 65: optical frequency, with a 12-point running average (black) equivalent to a 4 cm-1 resolution. [Body excerpt] ...and processed on a single epitaxial substrate. Each OFC will be electrically driven and free-running (requiring no optical locking mechanisms).
Anthropogenic impervious surfaces have an important relationship with non-point source (NPS) pollution in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of concur...
Probabilistic Analysis of Earthquake-Led Water Contamination: A Case of Sichuan, China
NASA Astrophysics Data System (ADS)
Yang, Yan; Li, Lin; Benjamin Zhan, F.; Zhuang, Yanhua
2016-06-01
The objective of this paper is to evaluate earthquake-induced point source and non-point source water pollution in Sichuan, China, under a seismic hazard with 10 % probability of exceedance in 50 years and against the minimum value of the water quality standard. The Soil Conservation Service curve number method for calculating runoff depth in a single rainfall event, combined with a seismic damage index, was applied to estimate the potential degree of non-point source water pollution. To estimate the potential impact of point source water pollution, a comprehensive evaluation framework was constructed combining the Water Quality Index and Seismic Damage Index methods. The four key findings of this paper are: (1) The catchment with the highest concentration of factories does not have the highest risk of earthquake-induced non-point source water contamination. (2) The catchments with the highest numbers of cumulative water pollutant types are typically located in the southwestern parts of Sichuan, where the region's main river basins flow through. (3) The most common pollutants in the sampled factories are COD and NH3-N, which are found in all catchments; the least common pollutant is pathogen, found only in catchment W1, which has the best water quality index rating. (4) Using the water quality index as a standardization parameter, parallel comparisons were made among the 16 catchments. Only catchment W1 reaches level II water quality status, rated moderately polluted in the event of earthquake-induced water contamination; all other areas suffer severe contamination from multiple pollution sources. The results from the data model are significant for urban planning commissions and businesses seeking to choose factory locations strategically and minimize hazardous impact during an earthquake.
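The Soil Conservation Service curve number method referenced above computes event runoff depth from precipitation and a tabulated curve number CN. A minimal sketch of the standard SI-unit form, Q = (P − Ia)²/(P − Ia + S) with S = 25400/CN − 254 and initial abstraction Ia = 0.2·S (function name illustrative):

```python
def scs_runoff_depth_mm(p_mm, cn):
    """SCS curve-number runoff depth (mm) for a single rainfall event.
    S is the potential maximum retention (SI form) and Ia = 0.2*S the
    standard initial abstraction."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For CN = 100 (fully impervious) all rainfall runs off; lower CN values yield progressively less runoff for the same storm.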
Atmospheric scattering of middle uv radiation from an internal source.
Meier, R R; Lee, J S; Anderson, D E
1978-10-15
A Monte Carlo model has been developed which simulates the multiple scattering of middle-uv radiation in the lower atmosphere. The source of radiation is assumed to be monochromatic and located at a point. The physical effects taken into account in the model are Rayleigh and Mie scattering, pure absorption by particulates and trace atmospheric gases, and ground albedo. The model output consists of the multiply scattered radiance as a function of the look-angle of a detector located within the atmosphere. Several examples are discussed, and comparisons are made with the direct-source and single-scattered contributions to the signal received by the detector.
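The multiple-scattering simulation described above can be sketched, in heavily simplified form, as a random walk in optical depth. The toy below assumes isotropic scattering in a homogeneous slab, whereas the paper's model includes Rayleigh and Mie phase functions, ground albedo, and detector look-angles; all names and defaults are illustrative:

```python
import math
import random

def mc_escape_fraction(tau_total=1.0, albedo=0.9, n_photons=20000, seed=1):
    """Toy Monte Carlo: photons emitted isotropically by a point source at
    the mid-plane of a homogeneous slab of vertical optical depth tau_total.
    Scattering is isotropic with single-scattering albedo `albedo` (an
    assumption; the paper uses Rayleigh/Mie phase functions).  Returns the
    fraction of photons escaping through either slab face."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        tau = tau_total / 2.0            # optical depth below the photon
        mu = 2.0 * rng.random() - 1.0    # cosine of zenith angle
        while True:
            step = -math.log(1.0 - rng.random())  # free path, optical-depth units
            tau -= step * mu
            if tau < 0.0 or tau > tau_total:
                escaped += 1             # left the slab: counted as escape
                break
            if rng.random() > albedo:    # pure absorption event
                break
            mu = 2.0 * rng.random() - 1.0  # isotropic re-emission
    return escaped / n_photons
```

Raising the single-scattering albedo increases the escaping (multiply scattered) fraction, mirroring the competition between scattering and pure absorption in the model.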
Lessons Learned from OMI Observations of Point Source SO2 Pollution
NASA Technical Reports Server (NTRS)
Krotkov, N.; Fioletov, V.; McLinden, Chris
2011-01-01
The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see a source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions, or "fingerprints," of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius; spatial resolution higher than OMI's is therefore needed to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.
A stereotaxic method of recording from single neurons in the intact in vivo eye of the cat.
Molenaar, J; Van de Grind, W A
1980-04-01
A method is described for recording stereotaxically from single retinal neurons in the optically intact in vivo eye of the cat. The method is implemented with the help of a new type of stereotaxic instrument and a specially developed stereotaxic atlas of the cat's eye and retina. The instrument is extremely stable and facilitates intracellular recording from retinal neurons. The microelectrode can be rotated about two mutually perpendicular axes, which intersect in the freely positionable pivot point of the electrode manipulation system. When the pivot point is made to coincide with a small electrode-entrance hole in the sclera of the eye, a large retinal region can be reached through this fixed hole in the immobilized eye. The stereotaxic method makes it possible to choose a target point on the presented eye atlas and predict the settings of the instrument necessary to reach this target. This method also includes the prediction of the corresponding light stimulus position on a tangent screen and the calculation of the projection of the recording electrode on this screen. The sources of error in the method were studied experimentally and a numerical perturbation analysis was carried out to study the influence of each of the sources of error on the final result. The overall accuracy of the method is of the order of 5 degrees of visual angle, which will be sufficient for most purposes.
Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel
The dual fermion method is a multiscale approach for solving lattice problems of interacting, strongly correlated systems. In this paper, we present opendf, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3, and 4. The method is built on a dynamical mean-field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU General Public License version 2.
PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Who pays for agricultural injury care?
Costich, Julia
2010-01-01
Analysis of 295 agricultural injury hospitalizations in a single state's hospital discharge database found that workers' compensation covered only 5% of the inpatient stays. Other sources were commercial health insurance (47%), Medicare (31%), and Medicaid (7%); 9% were uninsured. Estimated mean hospital and physician payments (not costs or charges) were $12,056 per hospitalization. Nearly one sixth (16%) of hospitalizations were either unreimbursed or covered by Medicaid, indicating a substantial cost-shift to public funding sources. Problems in characterizing agricultural injuries and states' exceptions to workers' compensation coverage mandates point to the need for comprehensive health coverage.
The Massive Star-forming Regions Omnibus X-ray Catalog, Second Installment
NASA Astrophysics Data System (ADS)
Townsley, Leisa K.; Broos, Patrick S.; Garmire, Gordon P.; Anderson, Gemma E.; Feigelson, Eric D.; Naylor, Tim; Povich, Matthew S.
2018-04-01
We present the second installment of the Massive Star-forming Regions (MSFRs) Omnibus X-ray Catalog (MOXC2), a compilation of X-ray point sources detected in Chandra/ACIS observations of 16 Galactic MSFRs and surrounding fields. MOXC2 includes 13 ACIS mosaics, three containing a pair of unrelated MSFRs at different distances, with a total catalog of 18,396 point sources. The MSFRs sampled range over distances of 1.3 kpc to 6 kpc and populations varying from single massive protostars to the most massive Young Massive Cluster known in the Galaxy. By carefully detecting and removing X-ray point sources down to the faintest statistically significant limit, we facilitate the study of the remaining unresolved X-ray emission. Through comparison with mid-infrared images that trace photon-dominated regions and ionization fronts, we see that the unresolved X-ray emission is due primarily to hot plasmas threading these MSFRs, the result of feedback from the winds and supernovae of massive stars. The 16 MSFRs studied in MOXC2 more than double the MOXC1 sample, broadening the parameter space of ACIS MSFR explorations and expanding Chandra's substantial contribution to contemporary star formation science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knuth, Eldon L.; Miller, David R.; Even, Uzi
2014-12-09
Data extracted from time-of-flight (TOF) measurements made on steady-state He free jets at Göttingen as early as 1986, and on pulsed Ne free jets investigated recently at Tel Aviv, have been added to an earlier plot of the terminal condensed-phase mass fraction x2∞ as a function of the dimensionless scaling parameter Γ. Γ characterizes the source (fluid species, temperature, pressure, and throat diameter); values of x2∞ are extracted from TOF measurements using conservation of energy in the free-jet expansion. For nozzles consisting of an orifice in a thin plate, the extracted data yield 22 data points which are correlated satisfactorily by a single curve. The Ne free jets were expanded from a conical nozzle with a 20° half angle; the three extracted data points stand together but apart from the aforementioned curve, indicating that the presence of the conical wall significantly influences the expansion and hence the condensation. The 22 data points for expansions via an orifice consist of 15 measurements of expansions from the gas-phase side of the binodal curve, which crossed the binodal curve downstream of the sonic point, and 7 measurements of expansions of the gas-phase product of the flashing which occurred after an expansion from the liquid-phase side of the binodal curve crossed the binodal curve upstream of the sonic point. The association of these 22 points with a single curve supports the alternating-phase model, proposed earlier, for flows with flashing upstream of the sonic point. To assess the role of the spinodal curve in such expansions, the spinodal curves for He and Ne were computed using general multi-parameter Helmholtz-free-energy equation-of-state formulations.
Then, for the several sets of source-chamber conditions used in the free-jet measurements, the thermodynamic states at key locations in the free-jet expansions (binodal curve, sonic point, and spinodal curve) were evaluated, with the expansion presumed metastable from the binodal curve to the spinodal curve. TOF distributions with more than two peaks (interpreted earlier as superimposed alternating-state TOF distributions) indicated flashing of the metastable flow downstream of the binodal curve but upstream of the sonic point. This relatively early flashing is apparently due to destabilizing interactions with the walls of the source. If the expansion crosses the binodal curve downstream of the nozzle, the metastable fluid does not interact with surfaces and flashing might be delayed until the expansion reaches the spinodal curve. It is concluded that, if the expansion crosses the binodal curve before reaching the sonic point, the resulting metastable fluid downstream of the binodal curve interacts with the adjacent surfaces and flashes into liquid and vapor phases, which expand alternately through the nozzle; the two associated alternating TOF distributions are superposed by the chopping process so that the result has the appearance of a single distribution with three peaks.
User's guide for RAM. Volume II. Data preparation and listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D.B.; Novak, J.H.
1978-11-01
The information presented in this user's guide is directed to air pollution scientists interested in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used to estimate air quality concentrations of relatively nonreactive pollutants, for averaging times from an hour to a day, from point and area sources. The algorithms are applicable to locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. The hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required for point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required for area sources consists of southwest-corner coordinates, source side length, total area emission rate, and effective area source height. Computation time is kept to a minimum by estimating concentrations from area sources using a narrow-plume hypothesis and by using the area source squares as given, rather than breaking all sources down into uniform area elements. Three types of receptor locations are available to the user: (1) those whose coordinates are input by the user; (2) those whose coordinates are determined by the model, downwind of significant point and area sources where maxima are likely to occur; and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is further decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
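The Gaussian steady-state point-source model underlying RAM has the standard closed form, including an image-source term for ground reflection. A minimal sketch; the dispersion parameters σy and σz are taken as inputs rather than computed from stability class, and names are illustrative:

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h_eff):
    """Steady-state Gaussian plume concentration at crosswind distance y (m)
    and height z (m) from a point source of strength q (g/s) in wind speed
    u (m/s) with effective release height h_eff (m).  sigma_y and sigma_z
    are the dispersion parameters at the receptor's downwind distance
    (supplied by the caller, e.g. from stability-class curves).  The second
    vertical term is the image source accounting for ground reflection."""
    cross = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vert = (math.exp(-(z - h_eff) ** 2 / (2.0 * sigma_z ** 2))
            + math.exp(-(z + h_eff) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * cross * vert
```

At ground level on the plume centerline with a ground-level release, the reflection term doubles the concentration relative to the free-space Gaussian.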
Quantum-Dot Single-Photon Sources for Entanglement Enhanced Interferometry.
Müller, M; Vural, H; Schneider, C; Rastelli, A; Schmidt, O G; Höfling, S; Michler, P
2017-06-23
Multiphoton entangled states such as "N00N states" have attracted much attention because of their possible application in high-precision, quantum-enhanced phase determination. So far, N00N states have been generated in spontaneous parametric down-conversion processes and by mixing quantum and classical light on a beam splitter. Here, in contrast, we demonstrate superresolving phase measurements based on two-photon N00N states generated by quantum dot single-photon sources, making use of the Hong-Ou-Mandel effect on a beam splitter. By means of pulsed resonance fluorescence of a charged exciton state, we achieve, in postselection, a quantum-enhanced precision in phase uncertainty beyond that prescribed by the standard quantum limit. An analytical description of the measurement scheme is provided, reflecting the requirements, capabilities, and restraints of single-photon emitters in optical quantum metrology. Our results point toward the realization of a real-world quantum sensor in the near future.
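The superresolution and phase-uncertainty claims above rest on two textbook formulas: an N-photon N00N state produces interference fringes in cos(Nφ) (N-fold superresolution), and the attainable phase uncertainty improves from the standard quantum limit 1/√N toward the Heisenberg limit 1/N. A minimal sketch (illustrative, not the authors' analysis):

```python
import math

def fringe(phi, n):
    """Detection probability for an N-photon N00N state interferometer:
    P = (1 + cos(N*phi)) / 2, i.e. fringes N times finer than classical."""
    return 0.5 * (1.0 + math.cos(n * phi))

def phase_uncertainty(n, heisenberg=True):
    """Heisenberg limit 1/N vs standard quantum limit 1/sqrt(N) for N photons."""
    return 1.0 / n if heisenberg else 1.0 / math.sqrt(n)
```

For the two-photon (N = 2) states of the paper, the fringe period is halved relative to single-photon interference.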
Gatti, Carlo; Macetti, Giovanni; Boyd, Russell J; Matta, Chérif F
2018-07-05
The source function (SF) decomposes the electron density at any point into contributions from all other points in the molecule, complex, or crystal. The SF "illuminates" those regions of a molecule that contribute most to the electron density at a reference point. When this reference point is the bond critical point (BCP), a commonly used surrogate of chemical bonding, SF analysis at atomic resolution within the framework of Bader's Quantum Theory of Atoms in Molecules returns the contribution of each atom in the system to the electron density at that BCP. The SF is used to locate the important regions that control the hydrogen bonds in both Watson-Crick (WC) DNA dimers (adenine:thymine (AT) and guanine:cytosine (GC)), which are studied in their neutral and singly ionized (radical cationic and anionic) ground states. The atomic contributions to the electron density at the BCPs of the hydrogen bonds in the two dimers are found to be delocalized to various extents. Surprisingly, gaining or losing an electron has similar net effects on some hydrogen bonds, concealing subtle compensations traced to atomic source contributions. Coarser levels of resolution (groups, rings, and/or monomers-in-dimers) reveal that distant groups and rings often have non-negligible effects, especially on the weaker hydrogen bonds such as the third, weak CH⋅⋅⋅O hydrogen bond in AT. Interestingly, neither the purine nor the pyrimidine, in the neutral or ionized forms, dominates any given hydrogen bond, despite the former having more atoms that can act as a source or sink for the density at its BCP. © 2018 Wiley Periodicals, Inc.
Pan European Phenological database (PEP725): a single point of access for European data
NASA Astrophysics Data System (ADS)
Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M.; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana
2018-06-01
The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. Its main objective is to maintain and develop the Pan European Phenological database (PEP725) with open, unrestricted data access for science and education. PEP725 is the successor of the database developed through COST Action 725, "Establishing a European phenological data platform for climatological applications," and serves as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data, collected by volunteers from 1868 to the present, to the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records, covering about 46 growing stages and 265 plant species (including cultivars), and can be accessed via
Matsumoto, Keiichi; Kitamura, Keishi; Mizuta, Tetsuro; Shimizu, Keiji; Murase, Kenya; Senda, Michio
2006-02-20
Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energies. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of attenuation coefficients measured on transmission images, and to compare the results with the conventional germanium line-source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and the ECAT EXACT HR+ (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used a Cs-137 point source and the ECAT HR+ a Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings: the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR+ was 350-650 keV. The effects of scattering in a uniform phantom with cross-sectional areas of 201 cm², 314 cm², 628 cm² (two 20-cm-diameter phantoms in apposition), and 943 cm² (three 20-cm-diameter phantoms stacked) were acquired without emission activity. First, we evaluated the attenuation coefficients of the two types of transmission scanning using region-of-interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for the Cs-137 transmission images using the same analysis; the segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom.
In the Ge-68/Ga-68 experiment, the difference in mean attenuation coefficients was stable across all phantoms. We evaluated the accuracy of attenuation coefficients from Cs-137 single-transmission scans; the results suggest that the scattered-photon contribution depends on object size. Although the Cs-137 single-transmission scans contained scattered photons, the attenuation coefficient error could be reduced by using the segmentation method.
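The attenuation coefficients evaluated in the study above derive from the Beer-Lambert law: along a transmission line of sight, μ = ln(I0/I)/L for blank-scan counts I0, transmission counts I, and path length L. Scatter contamination inflates I and therefore biases μ low, which is the underestimation the segmentation step corrects. A minimal sketch (names illustrative):

```python
import math

def attenuation_coefficient(i0, i, path_cm):
    """Mean linear attenuation coefficient (1/cm) along a transmission line
    of sight, from blank-scan counts i0 and transmission counts i over a
    path of length path_cm: mu = ln(I0/I) / L.  Scatter added to i lowers
    the recovered mu, mimicking the bias discussed in the study."""
    return math.log(i0 / i) / path_cm
```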
Extended and Point Defects in Diamond Studied with the Aid of Various Forms of Microscopy.
Steeds; Charles; Gilmore; Butler
2000-07-01
It is shown that star disclinations can be a significant source of stress in chemical vapor deposited (CVD) diamond. This purely geometrical origin contrasts with other sources of stress proposed previously. The effectiveness of electron irradiation in a transmission electron microscope (TEM), which displaces atoms from their equilibrium sites, is demonstrated as a means of investigating intrinsic defects and impurities in CVD diamond. After irradiation, the samples are studied by low-temperature photoluminescence microscopy using UV or blue laser illumination. Results are given that are interpreted as arising from isolated <100> split self-interstitials and positively charged single vacancies; negatively charged single vacancies can also be revealed by this technique, and nitrogen and boron impurities may be studied similarly. In addition, a newly developed scanned-ion-beam mass spectrometry (SIMS) instrument with a liquid gallium source has been used to map the B distribution in B-doped CVD diamond specimens. The results are supported by micro-Raman spectroscopy.
Giant increase in critical current density of KxFe2-ySe2 single crystals
Lei, Hechang; Petrovic, C.
2011-12-28
The critical current density J_c^{ab} of KxFe2-ySe2 single crystals can be enhanced by more than one order of magnitude, up to ~2.1×10⁴ A/cm², by a post-annealing and quenching technique. A scaling analysis reveals universal behavior of the normalized pinning force as a function of the reduced field at all temperatures, indicating the presence of a single vortex pinning mechanism. The main pinning sources are three-dimensional (3D) point-like normal cores. The dominant vortex interaction with pinning centers is via spatial variations in the critical temperature Tc ("δTc pinning").
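The scaling analysis mentioned above normalizes the pinning force F_p = J_c·B to its maximum and plots it against the reduced field; collapse of curves taken at different temperatures onto a single master curve is the signature of one dominant pinning mechanism. A minimal sketch (illustrative; for 3D point-like normal cores, Dew-Hughes scaling predicts a master curve of the form f(h) ∝ h(1−h)² peaking near h = 1/3):

```python
def normalized_pinning_force(jc, b):
    """Normalized pinning force f = F_p / F_p,max with F_p = J_c * B,
    evaluated over paired lists of critical current density and field.
    Plotting f against the reduced field h = B/B_irr at several
    temperatures tests whether the data collapse onto one master curve."""
    fp = [j * bb for j, bb in zip(jc, b)]
    fmax = max(fp)
    return [f / fmax for f in fp]
```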
VizieR Online Data Catalog: VLA-COSMOS 3 GHz Large Project (Smolcic+, 2017)
NASA Astrophysics Data System (ADS)
Smolcic, V.; Novak, M.; Bondi, M.; Ciliegi, P.; Mooley, K. P.; Schinnerer, E.; Zamorani, G.; Navarrete, F.; Bourke, S.; Karim, A.; Vardoulaki, E.; Leslie, S.; Delhaize, J.; Carilli, C. L.; Myers, S. T.; Baran, N.; Delvecchio, I.; Miettinen, O.; Banfield, J.; Balokovic, M.; Bertoldi, F.; Capak, P.; Frail, D. A.; Hallinan, G.; Hao, H.; Herrera Ruiz, N.; Horesh, A.; Ilbert, O.; Intema, H.; Jelic, V.; Klockner, H.-R.; Krpan, J.; Kulkarni, S. R.; McCracken, H.; Laigle, C.; Middleberg, E.; Murphy, E.; Sargent, M.; Scoville, N. Z.; Sheth, K.
2016-10-01
The catalog contains sources selected down to a 5σ (σ ~ 2.3 uJy/beam) threshold. It can be used for statistical analyses, accompanied by the corrections given in the data & catalog release paper; all completeness and bias corrections and source counts presented in the paper were calculated using this sample. The total fraction of spurious sources in the COSMOS 2 sq.deg. is below 2.7% within this catalog. However, the fraction of spurious sources increases (up to 24%) at the lowest S/N, so a threshold of S/N >= 5.5 is recommended for single-component sources (MULTI=0); within such a selected sample the total fraction of spurious sources in the COSMOS 2 sq.deg. is below 0.4%, and remains below 3% even at the lowest S/N (=5.5).
Catalog notes:
1. The maximum ID is 10966 although there are 10830 sources; some IDs were removed by joining them into multi-component sources.
2. Peak surface brightness of sources [uJy/beam] is not reported, but can be obtained by multiplying SNR by RMS.
3. A high NPIX usually indicates extended or very bright sources.
4. Reported positional errors on resolved and extended sources should be considered lower limits.
5. Multi-component sources have error and S/N column values set to -99.0.
Additional data information: Catalog date: 21-Mar-2016. Source extractor: BLOBCAT v1.2 (http://blobcat.sourceforge.net/). Observations: 384 hours, VLA, S-band (2-4 GHz), A+C array, 192 pointings. Imaging software: CASA v4.2.2 (https://casa.nrao.edu/). Imaging algorithm: multiscale multifrequency synthesis on single pointings. Mosaic size: 30000x30000 pixels (3.3 GB). Pixel size: 0.2x0.2 arcsec2. Median rms noise in the COSMOS 2 sq.deg.: 2.3 uJy/beam. The beam is circular with FWHM = 0.75 arcsec. Bandwidth-smearing peak correction: 0% (no corrections applied). Resolved criterion: Sint/Speak > 1 + 6*snr^(-1.44). Total area covered: 2.6 sq.deg. (1 data file).
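The resolved-source criterion quoted in the catalog notes, Sint/Speak > 1 + 6·snr^(−1.44), can be applied directly per source. A minimal sketch (function name illustrative):

```python
def is_resolved(s_int, s_peak, snr):
    """Resolved-source criterion from the catalog notes:
    a source is resolved when S_int / S_peak > 1 + 6 * SNR**(-1.44).
    The SNR-dependent term loosens the cut for faint sources, where
    noise inflates the integrated-to-peak flux ratio."""
    return s_int / s_peak > 1.0 + 6.0 * snr ** (-1.44)
```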
Data Foundry: Data Warehousing and Integration for Scientific Data Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musick, R.; Critchlow, T.; Ganesh, M.
2000-02-29
Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, and Informatica, among others. Other companies have chosen to develop in-house data warehousing solutions using relational databases, such as those sold by Oracle, IBM, Informix, and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format, which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static; once a data source has been integrated into a data warehouse, relatively little work is required to maintain that connection. That is not the case for all data sources, however. Data sources from scientific domains tend to change their data model, format, and interface regularly. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research changes continuously as their understanding of a research question develops or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
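The wrapper/mediator layering described above can be sketched as a simple adapter pattern: each wrapper translates one source's native records into a shared schema, and the mediator presents the union as a single virtual store. All names and schemas below are illustrative, not from the report:

```python
class SourceWrapper:
    """Wraps one data source: `fetch` returns records in the source's
    native format, and `translate` maps each record into the shared
    schema.  When a source changes format, only its translate function
    must be updated (the maintenance burden discussed in the text)."""
    def __init__(self, fetch, translate):
        self.fetch = fetch
        self.translate = translate

    def records(self):
        return [self.translate(r) for r in self.fetch()]

def mediator(wrappers):
    """Presents the union of all wrapped sources as one virtual store."""
    out = []
    for w in wrappers:
        out.extend(w.records())
    return out
```

Usage: one wrapper reads tuples, another reads dicts with different keys, yet the mediator yields a uniform record stream.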
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebermeister, Lars, E-mail: lars.liebermeister@physik.uni-muenchen.de; Petersen, Fabian; Münchow, Asmus v.
2014-01-20
A diamond nano-crystal hosting a single nitrogen-vacancy (NV) center is optically selected with a confocal scanning microscope and positioned deterministically onto the subwavelength-diameter waist of a tapered optical fiber (TOF) with the help of an atomic force microscope. Based on this nano-manipulation technique, we experimentally demonstrate the evanescent coupling of single fluorescence photons emitted by a single NV center to the guided mode of the TOF. By comparing photon count rates of the fiber-guided and free-space modes, and with the help of numerical finite-difference time-domain simulations, we determine lower and upper bounds for the coupling efficiency of (9.5 ± 0.6)% and (10.4 ± 0.7)%, respectively. Our results are a promising starting point for the future integration of single-photon sources into photonic quantum networks and for applications in quantum information science.
Integrating Low-Cost Mems Accelerometer Mini-Arrays (mama) in Earthquake Early Warning Systems
NASA Astrophysics Data System (ADS)
Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.
2016-12-01
Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to: (a) estimate reliable hypocentral locations by beam-forming (FK-analysis) techniques; and (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of seismometers has made creating arrays cost-prohibitive. We propose instead setting up mini-arrays of a new seismometer, based on a low-cost (<$150), high-performance MEMS accelerometer, around conventional seismic stations. The expected benefits of this approach include decreased alert times, improved real-time shaking predictions, and fewer false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ, following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use small sub-arrays of up to ten sensors, spread over a maximum area of 1.7 x 2.2 km², to demonstrate our approach and to solve for the back-azimuths (BAZ) of two events (Mw 4.7 and Mw 5.1) with less than ±10° error. We will also present details, benchmarks, and real-time measurements of the new 24-bit device.
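The beam-forming back-azimuth estimation proposed above can be illustrated with a plane-wave grid search over azimuth: for each candidate back-azimuth, predicted inter-station delays are compared with observed ones, and the best-fitting azimuth is returned. This is a heavily simplified sketch with a fixed, known slowness; the sign convention, coordinate frame (x = east, y = north, azimuth clockwise from north), and all names are assumptions, not the authors' implementation:

```python
import math

def estimate_baz(stations, delays, slowness_s_per_km):
    """Grid search for the back-azimuth (degrees) of a plane wave crossing
    a mini-array.  stations: (x_km, y_km) offsets from the array center;
    delays: observed arrival-time delays (s) relative to the center.
    A wave arriving FROM back-azimuth theta produces predicted delays
    -s * (x*sin(theta) + y*cos(theta)): stations toward the source
    record earlier (negative) delays."""
    best_misfit, best_deg = float("inf"), 0
    for deg in range(360):
        th = math.radians(deg)
        misfit = 0.0
        for (x, y), d in zip(stations, delays):
            pred = -slowness_s_per_km * (x * math.sin(th) + y * math.cos(th))
            misfit += (d - pred) ** 2
        if misfit < best_misfit:
            best_misfit, best_deg = misfit, deg
    return best_deg
```

A full FK analysis would additionally search over slowness and work in the frequency domain; this sketch only conveys the geometry.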
Ravel, André; Hurst, Matt; Petrica, Nicoleta; David, Julie; Mutschall, Steven K; Pintar, Katarina; Taboada, Eduardo N; Pollari, Frank
2017-01-01
Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.
Microgravity Experiments Safety and Integration Requirements Document Tree
NASA Technical Reports Server (NTRS)
Hogan, Jean M.
1995-01-01
This report is a document tree of the safety and integration documents required to develop a space experiment. Pertinent document information for each of the top level (tier one) safety and integration documents, and their applicable and reference (tier two) documents has been identified. This information includes: document title, revision level, configuration management, electronic availability, listed applicable and reference documents, source for obtaining the document, and document owner. One of the main conclusions of this report is that no single document tree exists for all safety and integration documents, regardless of the Shuttle carrier. This document also identifies the need for a single point of contact for customers wishing to access documents. The data in this report serves as a valuable information source for the NASA Lewis Research Center Project Documentation Center, as well as for all developers of space experiments.
NASA Technical Reports Server (NTRS)
Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.
2016-01-01
Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has the distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are assessed using points-per-wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. 
Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
Development of a Multi-Point Microwave Interferometry (MPMI) Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, Paul Elliott; Cooper, Marcia A.; Jilek, Brook Anton
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Hechang; Petrovic, C.
The critical current density J_ab^c of KₓFe₂₋ySe₂ single crystals can be enhanced by more than one order of magnitude, up to ~2.1×10⁴ A/cm², by a post-annealing and quenching technique. A scaling analysis reveals universal behavior of the normalized pinning force as a function of the reduced field at all temperatures, indicating the presence of a single vortex pinning mechanism. The main pinning sources are three-dimensional (3D) point-like normal cores. The dominant vortex interaction with pinning centers is via spatial variations in the critical temperature T_c ("δT_c pinning").
Binary Paths to Type Ia Supernovae Explosions: the Highlights
NASA Astrophysics Data System (ADS)
Ferrario, Lilia
2013-01-01
This symposium was focused on the hunt for the progenitors of Type Ia supernovae (SNe Ia). Is there a main channel for the production of SNe Ia? If so, are these elusive progenitors single degenerate or double degenerate systems? Although most participants seemed to favor the single degenerate channel, there was no general agreement on the type of binary system at play. An observational puzzle that was highlighted was the apparent paucity of supersoft sources in our Galaxy and also in external galaxies. The single degenerate channel (and as it was pointed out, quite possibly also the double degenerate channel) requires the binary system to pass through a phase of steady nuclear burning. However, the observed number of supersoft sources falls short by a factor of up to 100 in explaining the estimated birth rates of SNe Ia. Thus, are these supersoft sources somehow hidden away and radiating at different wavelengths, or are we missing some important pieces of this puzzle that may lead to the elimination of a certain class of progenitor? Another unanswered question concerns the dependence of SNe Ia luminosities on the age of their host galaxy. Several hypotheses were put forward, but none was singled out as the most likely explanation. It is fair to say that at the end of the symposium the definitive answer to the vexed progenitor question remained well and truly wide open.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mok Tsze Chung, E; Aleman, D; Safigholi, H
Purpose: The effectiveness of using a combination of three sources, {sup 60}Co, {sup 192}Ir and {sup 169}Yb, is analyzed. Different combinations are compared against a single {sup 192}Ir source on prostate cancer cases. A novel inverse planning interior point algorithm is developed in-house to generate the treatment plans. Methods: Thirteen prostate cancer patients are separated into two groups: Group A includes eight patients with the prostate as target volume, while group B consists of four patients with a boost nodule inside the prostate that is assigned 150% of the prescription dose. The mean target volume is 35.7±9.3cc and 30.6±8.5cc for groups A and B, respectively. All patients are treated with each source individually, then with paired sources, and finally with all three sources. To compare the results, boost volume V150 and D90, urethra Dmax and D10, and rectum Dmax and V80 are evaluated. For fair comparison, all plans are normalized to a uniform V100=100. Results: Overall, double- and triple-source plans were better than single-source plans. The triple-source plans resulted in an average decrease of 21.7% and 1.5% in urethra Dmax and D10, respectively, and 8.0% and 0.8% in rectum Dmax and V80, respectively, for group A. For group B, boost volume V150 and D90 increased by 4.7% and 3.0%, respectively, while keeping similar dose delivered to the urethra and rectum. {sup 60}Co and {sup 192}Ir produced better plans than their counterparts in the double-source category, whereas {sup 60}Co produced more favorable results than the remaining individual sources. Conclusion: This study demonstrates the potential advantage of using a combination of two or three sources, reflected in dose reduction to organs-at-risk and more conformal dose to the target.
Our results show that {sup 60}Co, {sup 192}Ir and {sup 169}Yb produce the best plans when used simultaneously and can thus be an alternative to {sup 192}Ir-only in high-dose-rate prostate brachytherapy.
NASA Astrophysics Data System (ADS)
Singh, Yashi; Hussain, Ikhlaq; Singh, Bhim; Mishra, Sukumar
2018-06-01
In this paper, power quality features such as harmonics mitigation and power factor correction with active power filtering are addressed in a single-stage, single-phase solar photovoltaic (PV) grid-tied system. The Power Balance Theory (PBT) with a perturb and observe based maximum power point tracking algorithm is proposed for the mitigation of power quality problems in a solar PV grid-tied system. The solar PV array is interfaced to a single-phase AC grid through a Voltage Source Converter (VSC), which provides active power flow from the solar PV array to the grid as well as to the load, and performs harmonics mitigation using PBT based control. The solar PV array power varies with sunlight, and because of this the solar PV grid-tied VSC works only 8-10 h per day. At night, when PV power is zero, the VSC works as an active power filter for power quality improvement, and the load active power is delivered by the grid to the load connected at the point of common coupling. This increases the effective utilization of the VSC. The system is modelled and simulated using MATLAB, and the simulated responses of the system under nonlinear loads and varying environmental conditions are also validated experimentally on a prototype developed in the laboratory.
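The perturb and observe (P&O) maximum power point tracking mentioned above follows a simple hill-climbing rule; a generic textbook sketch follows (not the authors' controller; the P-V curve and step size below are illustrative assumptions):

```python
def perturb_and_observe(pv_power, v0=10.0, dv=0.25, steps=200):
    """Perturb & observe MPPT: step the operating voltage by dv, observe the
    array power, keep the step direction if power rose and reverse it if power
    fell. The operating point climbs toward and then oscillates about the MPP."""
    v, direction = v0, +1.0
    p_prev = pv_power(v)
    for _ in range(steps):
        v += direction * dv          # perturb
        p = pv_power(v)              # observe
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return v
```

The steady-state oscillation about the MPP has an amplitude set by the step size dv, which is the classic P&O trade-off between tracking speed and ripple.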
Quasi-Solid-State Single-Atom Transistors.
Xie, Fangqing; Peukert, Andreas; Bender, Thorsten; Obermair, Christian; Wertz, Florian; Schmieder, Philipp; Schimmel, Thomas
2018-06-21
The single-atom transistor represents a quantum electronic device at room temperature, allowing the switching of an electric current by the controlled and reversible relocation of one single atom within a metallic quantum point contact. So far, the device has operated by applying a small voltage to a control electrode or "gate" within the aqueous electrolyte. Here, the operation of the atomic device in the quasi-solid state is demonstrated. Gelation of pyrogenic silica transforms the electrolyte into the quasi-solid state, which exhibits the cohesive properties of a solid and the diffusive properties of a liquid, preventing the leakage problem and avoiding the handling of a liquid system. The electrolyte is characterized by cyclic voltammetry, conductivity measurements, and rotation viscometry. Thus, a first demonstration of the single-atom transistor operating in the quasi-solid state is given. The silver single-atom and atomic-scale transistors in the quasi-solid state allow bistable switching between zero and quantized conductance levels, which are integer multiples of the conductance quantum G₀ = 2e²/h. Source-drain currents ranging from 1 to 8 µA are applied in these experiments. No obvious influence of the gelation of the aqueous electrolyte on electron transport within the quantum point contact is observed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
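For orientation, the quantized conductance levels quoted above follow directly from the conductance quantum; the numbers below use the exact 2019 SI constants, and the 1 µA figure is simply one point in the 1-8 µA range mentioned in the abstract:

```python
# Conductance quantum G0 = 2e^2/h and the bias implied by a given source-drain current.
e = 1.602176634e-19   # elementary charge in C (exact since the 2019 SI redefinition)
h = 6.62607015e-34    # Planck constant in J*s (exact since the 2019 SI redefinition)
G0 = 2 * e**2 / h     # ~7.748e-5 S, i.e. a resistance 1/G0 of ~12.9 kOhm
bias_for_1uA = 1e-6 / G0   # bias (V) giving a 1 uA source-drain current at 1*G0
```

So a contact switched to the first conductance level carries 1 µA at roughly 13 mV of source-drain bias, and integer multiples of G₀ scale this proportionally.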
Propeller noise caused by blade tip radial forces
NASA Technical Reports Server (NTRS)
Hanson, D. B.
1986-01-01
New experimental evidence indicating the presence of leading edge and tip edge vortex flow on Prop-Fans is examined, and the performance and noise consequences are addressed. The tip edge vortex is shown to be a significant noise source, particularly for unswept Prop-Fan blades. Preliminary calculations revealed that adding the tip side edge source to single-rotation Prop-Fans at takeoff conditions improved the agreement between experiment and theory at the blade passing frequency. At high-speed conditions such as the Prop-Fan cruise point, the tip loading effect tends to cancel thickness noise.
NASA Astrophysics Data System (ADS)
Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.
2011-09-01
Investigations of the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be composed of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, implemented and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
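An iterative algebraic reconstruction of the kind described can be sketched with the classic Kaczmarz/ART update, which cycles through the projection rays and projects the current image estimate onto each ray's hyperplane (a generic sketch under assumed notation, not the NECTAR implementation):

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz/ART: A is the system matrix (one row per projection ray),
    b the measured projection data. Each inner step nudges the image x toward
    the hyperplane a_i . x = b_i by a relaxed residual correction."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)  # squared norm of each ray row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x
```

For consistent data the iterates converge to a solution of Ax = b; the relaxation factor trades convergence speed against noise amplification in measured projections.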
NASA Astrophysics Data System (ADS)
Ihsani, Alvin; Farncombe, Troy
2016-02-01
The modelling of the projection operator in tomographic imaging is of critical importance, especially when working with algebraic methods of image reconstruction. This paper proposes a distance-driven projection method targeted to single-pinhole single-photon emission computed tomography (SPECT) imaging, since it accounts for the finite size of the pinhole and the possible tilting of the detector surface, in addition to other collimator-specific factors such as geometric sensitivity. The accuracy and execution time of the proposed method are evaluated by comparison with a ray-driven approach in which the pinhole is sub-sampled with various sampling schemes. A point-source phantom whose projections were generated using OpenGATE was first used to compare the resolution of images reconstructed with each method using the full width at half maximum (FWHM). Furthermore, a high-activity Mini Deluxe Phantom (Data Spectrum Corp., Durham, NC, USA) SPECT resolution phantom was scanned using a Gamma Medica X-SPECT system, and the signal-to-noise ratio (SNR) and structural similarity of reconstructed images were compared at various projection counts. Based on the reconstructed point-source phantom, the proposed distance-driven approach results in a lower FWHM than the ray-driven approach even when using a smaller detector resolution. Furthermore, based on the Mini Deluxe Phantom, the distance-driven approach is shown to have consistently higher SNR and structural similarity than the ray-driven approach as the number of counts in the measured projections decreases.
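The FWHM resolution metric used for the point-source phantom can be computed from a sampled point-spread profile by locating the two half-maximum crossings (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked sampled profile,
    with linear interpolation of the two half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # linearly interpolate where the profile crosses the half-maximum
        return x[i] + (half - profile[i]) * (x[j] - x[i]) / (profile[j] - profile[i])

    left = x[0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[-1] if i1 == len(x) - 1 else cross(i1, i1 + 1)
    return right - left
```

For a Gaussian profile of width σ this recovers the textbook value FWHM = 2√(2 ln 2) σ ≈ 2.355 σ, which is a convenient check before applying it to reconstructed point-source slices.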
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
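The decomposition the paper starts from, a crosswind line source built up from point-source Gaussian plumes, can be sketched numerically as follows (illustrative only: σy and σz are assumed to be already evaluated at the receptor's downwind distance, the source and receptor are both at ground level, and the paper's contribution is precisely an analytical route past this kind of quadrature):

```python
import numpy as np

def point_plume(y, q, u, sigma_y, sigma_z):
    """Ground-level concentration from a ground-level point source of strength q
    (mass/time) in wind speed u along +x; simple Gaussian plume with ground
    reflection (hence pi rather than 2*pi in the denominator)."""
    return q / (np.pi * u * sigma_y * sigma_z) * np.exp(-y**2 / (2 * sigma_y**2))

def line_plume(y, q_per_m, u, half_length, sigma_y, sigma_z, n=400):
    """Crosswind line source spanning y0 in [-half_length, half_length],
    treated as a superposition of n point sources (simple quadrature)."""
    y0 = np.linspace(-half_length, half_length, n)
    dq = q_per_m * (y0[1] - y0[0])  # emission carried by each quadrature element
    return sum(point_plume(y - yy, dq, u, sigma_y, sigma_z) for yy in y0)
```

As the line grows long relative to σy, the quadrature approaches the analytical infinite-line limit q_L √(2/π) / (u σz), a useful sanity check on the superposition and an example of the kind of closed form the paper generalizes.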
Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...
2016-04-13
Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.
Near-field transport of {sup 129}I from a point source in an in-room disposal vault
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolar, M.; Leneveu, D.M.; Johnson, L.H.
1995-12-31
A very small number of disposal containers of heat generating nuclear waste may have initial manufacturing defects that would lead to pin-hole type failures at the time of or shortly after emplacement. For sufficiently long-lived containers, only the initial defects need to be considered in modeling of release rates from the disposal vault. Two approaches to modeling of near-field mass transport from a single point source within a disposal room have been compared: the finite-element code MOTIF (A Model Of Transport In Fractured/porous media) and a boundary integral method (BIM). These two approaches were found to give identical results for a simplified model of the disposal room without groundwater flow. MOTIF has then been used to study the effects of groundwater flow on the mass transport out of the emplacement room.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nitao, J J
The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. The reader is referred to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (location and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report the authors show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x{sub m} due to a potential (unit impulse) source located at x{sub s}. Instead of computing a plume with source location x{sub s}, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x{sub m}.
The reciprocal plume is computed using a reversed-direction wind field. The wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x{sub s} is related to the desired concentration at x{sub m}. Since there are far fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
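The superposition step described above is a discrete convolution: once the unit-impulse response (the forward Green's function) at a monitor has been precomputed, the concentration for any proposed strength history follows by summation rather than by a new plume simulation. A minimal sketch with illustrative names:

```python
import numpy as np

def monitor_concentration(green, strengths):
    """Concentration time series at one monitor for a proposed source strength
    history, by linear superposition of precomputed unit-impulse plumes.
    green[j]  : monitor response j time steps after a unit impulse release
    strengths : source emission per time step"""
    n = len(strengths)
    c = np.zeros(n)
    for k, s in enumerate(strengths):
        m = min(n - k, len(green))
        c[k:k + m] += s * green[:m]  # each release adds a delayed, scaled impulse response
    return c
```

This is exactly a convolution truncated to the observation window, which is why tens of thousands of MCMC or SMC proposals become cheap once the (forward or reciprocal) Green's functions are tabulated.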
Investigation of L-band shipboard antennas for maritime satellite applications
NASA Technical Reports Server (NTRS)
Heckert, G. P.
1972-01-01
A basic conceptual investigation of low-cost L-band antenna subsystems for shipboard use was conducted by identifying the various pertinent design trade-offs and related performance characteristics peculiar to the civilian maritime application, and by comparing alternate approaches for their simplicity and general suitability. The study was not directed at a single specific proposal, but was intended to be parametric in nature. Antenna system concepts were investigated for a gain range of 3 to 18 dB, with a value of about 10 dB considered as a baseline reference. Because the beam pointing or selection mechanism is the primary source of potential complexity in shipboard antennas whose beamwidths are less than hemispherical, major emphasis was directed at this aspect. Three categories of antenna system concepts were identified: (1) mechanically pointed, single-beam antennas; (2) fixed antennas with switched beams; and (3) electronically steered phased arrays. It is recommended that an L-band short backfire antenna subsystem, including a two-axis motor-driven gimbal mount and the necessary single-channel monopulse tracking receiver portions, be developed for demonstration of performance and subsystem simplicity.
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André
2018-03-01
There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single-hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single-baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited-state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited-state contamination from the two-nucleon correlation function as early in Euclidean time as the single nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but significant excited-state contamination remains in the region where the single calm baryon displays none.
Real-time volcano monitoring using GNSS single-frequency receivers
NASA Astrophysics Data System (ADS)
Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.
2015-12-01
We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
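The Mogi point source at the core of the inversion filter is linear in the volume change, so with a known source position each epoch's ΔV has a closed-form least-squares solution. A minimal batch sketch (ν = 0.25 and a fixed source location are assumptions for illustration; the real-time filter itself is not reproduced here):

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source at the given depth,
    at radial distance r from the source axis, for a volume change dV
    (elastic half-space, Poisson ratio nu)."""
    R = np.sqrt(r**2 + depth**2)
    return (1.0 - nu) * dV * depth / (np.pi * R**3)

def invert_dV(r, depth, uz_obs, nu=0.25):
    """Least-squares volume change from observed vertical displacements.
    Since uz = g * dV with g the unit-dV Green's vector, dV = (g . u) / (g . g)."""
    g = mogi_uz(r, depth, 1.0, nu)
    return g @ uz_obs / (g @ g)
```

Because the problem is linear in ΔV, this one-line projection is cheap enough to run at every GNSS epoch, which is what makes a real-time inversion filter of the kind described feasible.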
An Array of Optical Receivers for Deep-Space Communications
NASA Technical Reports Server (NTRS)
Vilnrotter, Chi-Wung; Srinivasan, Meera; Andrews, Kenneth
2007-01-01
An array of small optical receivers is proposed as an alternative to a single large optical receiver for high-data-rate communications in NASA's Deep Space Network (DSN). Because the telescope for a single receiver capable of satisfying DSN requirements must be greater than 10 m in diameter, the design, building, and testing of the telescope would be very difficult and expensive. The proposed array would utilize commercially available telescopes of 1-m or smaller diameter and, therefore, could be developed and verified with considerably less difficulty and expense. The essential difference between a single-aperture optical-communications receiver and an optical-array receiver is that a single-aperture receiver focuses all of the light energy it collects onto the surface of an optical detector, whereas an array receiver focuses portions of the total collected energy onto separate detectors, optically detects each fractional energy component, then combines the electrical signals from the array of detector outputs to form the observable, or "decision statistic," used to decode the transmitted data. A conceptual block diagram identifying the key components of the optical-array receiver suitable for deep-space telemetry reception is shown in the figure. The most conspicuous feature of the receiver is the large number of small- to medium-size telescopes, with individual apertures and number of telescopes selected to make up the desired total collecting area. This array of telescopes is envisioned to be fully computer-controlled via the user interface and prediction-driven to achieve rough pointing and tracking of the desired spacecraft. Fine-pointing and tracking functions then take over to keep each telescope pointed toward the source, despite imperfect pointing predictions, telescope-drive errors, and vibration caused by wind.
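The combining step described above can be illustrated with a toy photon-counting model. This sketch assumes ideal Poisson detection and a pulse-position-style decision between time slots; the function names and the maximum-count decision rule are illustrative assumptions, not the DSN receiver's actual signal processing:

```python
import numpy as np

def decision_statistic(counts):
    """Combine per-telescope photon counts into one decision statistic.
    For ideal photon counting, the sum of the counts is a sufficient
    statistic, so the array behaves like one large aperture."""
    return np.sum(counts)

def detect_ppm_slot(counts_per_slot):
    """Decide which pulse-position slot held the signal pulse: pick the
    slot whose combined count across all telescopes is largest (the
    maximum-likelihood choice for Poisson counts)."""
    totals = [decision_statistic(c) for c in counts_per_slot]
    return int(np.argmax(totals))
```

Because the statistic is formed after detection, each telescope only needs to deliver its electrical output; no optical phasing across apertures is required.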
Ryvolová, Markéta; Preisler, Jan; Foret, Frantisek; Hauser, Peter C; Krásenský, Pavel; Paull, Brett; Macka, Mirek
2010-01-01
This work for the first time combines three on-capillary detection methods, namely, capacitively coupled contactless conductometric (C(4)D), photometric (PD), and fluorimetric (FD), in a single (identical) point of detection cell, allowing concurrent measurements at a single point of detection for use in capillary electrophoresis, capillary electrochromatography, and capillary/nanoliquid chromatography. The novel design is based on a standard 6.3 mm i.d. fiber-optic SMA adapter with a drilled opening for the separation capillary to go through, to which two concentrically positioned C(4)D detection electrodes with a detection gap of 7 mm were added on each side, acting simultaneously as capillary guides. The optical fibers in the SMA adapter were used for the photometric signal (absorbance), and another optical fiber at a 45° angle to the capillary was applied to collect the emitted light for FD. Light-emitting diodes (255 and 470 nm) were used as light sources for the PD and FD detection modes. LOD values were determined under flow-injection conditions to exclude any stacking effects: for the 470 nm LED, the limits of detection (LODs) for FD and PD were 1 × 10^-8 mol/L for fluorescein and 6 × 10^-6 mol/L for tartrazine, respectively, and the LOD for C(4)D was 5 × 10^-7 mol/L for magnesium chloride. The advantage of the three different detection signals in a single point is demonstrated in capillary electrophoresis using model mixtures and samples, including a mixture of fluorescent and nonfluorescent dyes and common ions, underivatized amino acids, and a fluorescently labeled digest of bovine serum albumin.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Although applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism, it can still lead to apparent moment tensor elements, which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source.
Furthermore, best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when wrongly assuming a point source. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique where the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can then be temporally and spatially extended.
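The destructive interference noted above can be illustrated by summing rotated double-couple moment tensors around a circle. This is a toy sketch, not the TDMTISO_INVC workflow; the choice of a vertical strike-slip elementary double couple is an assumption made purely for illustration:

```python
import numpy as np

def rotated_double_couple(theta):
    """Moment tensor of a vertical strike-slip double couple whose strike
    is rotated by angle theta about the vertical axis (unit scalar moment)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    M = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    return R @ M @ R.T

def ring_fault_tensor(n=8):
    """Sum n double couples spaced evenly around a circle (an octagon for
    n=8), approximating slip on a closed ring fault."""
    return sum(rotated_double_couple(2 * np.pi * k / n) for k in range(n))
```

For evenly spaced double couples the summed tensor vanishes, which mirrors the strongly reduced long-wavelength radiation of the ringfault compared to a single double couple: the individual sources interfere destructively when viewed from far away.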
NASA Astrophysics Data System (ADS)
Weng, Jiawen; Clark, David C.; Kim, Myung K.
2016-05-01
A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for 3D analysis of dynamic systems.
Bru, Juan; Berger, Christopher A
2012-01-01
Background: Point-of-care electronic medical records (EMRs) are a key tool to manage chronic illness. Several EMRs have been developed for use in treating HIV and tuberculosis, but their applicability to primary care, technical requirements and clinical functionalities are largely unknown. Objectives: This study aimed to address the needs of clinicians from resource-limited settings without reliable internet access who are considering adopting an open-source EMR. Study eligibility criteria: Open-source point-of-care EMRs suitable for use in areas without reliable internet access. Study appraisal and synthesis methods: The authors conducted a comprehensive search of all open-source EMRs suitable for sites without reliable internet access. The authors surveyed clinician users and technical implementers from a single site and technical developers of each software product. The authors evaluated availability, cost and technical requirements. Results: The hardware and software for all six systems is easily available, but they vary considerably in proprietary components, installation requirements and customisability. Limitations: This study relied solely on self-report from informants who developed and who actively use the included products. Conclusions and implications of key findings: Clinical functionalities vary greatly among the systems, and none of the systems yet meet minimum requirements for effective implementation in a primary care resource-limited setting. The safe prescribing of medications is a particular concern with current tools. The dearth of fully functional EMR systems indicates a need for a greater emphasis by global funding agencies to move beyond disease-specific EMR systems and develop a universal open-source health informatics platform. PMID:22763661
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot
2014-03-01
The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as "committees", we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
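The MDN machinery described above can be sketched in a few lines for a one-dimensional source parameter. The split of the raw network output into weights, means, and widths, and the committee-as-average rule, follow the generic MDN recipe rather than the paper's exact architecture; all names are illustrative:

```python
import numpy as np

def mdn_params(raw, n_components):
    """Split a raw network output vector into mixture parameters:
    softmax weights, means, and positive widths (via exp of log-sigma)."""
    logits = raw[:n_components]
    means = raw[n_components:2 * n_components]
    sigmas = np.exp(raw[2 * n_components:3 * n_components])
    w = np.exp(logits - logits.max())          # stable softmax
    return w / w.sum(), means, sigmas

def mixture_pdf(x, weights, means, sigmas):
    """Evaluate the 1-D Gaussian mixture density p(m|d) at points x."""
    x = np.atleast_1d(x)[:, None]
    g = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (weights * g).sum(axis=1)

def committee_pdf(x, member_params):
    """Average the densities of several MDNs (a 'committee') for a
    smoother posterior approximation than any single member."""
    return np.mean([mixture_pdf(x, *p) for p in member_params], axis=0)
```

Since evaluating a trained network is just a few matrix products followed by this mixture evaluation, the millisecond-scale inversion times quoted above are plausible on ordinary hardware.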
NASA Astrophysics Data System (ADS)
Hanna, S. J.; Campuzano-Jost, P.; Simpson, E. A.; Robb, D. B.; Burak, I.; Blades, M. W.; Hepburn, J. W.; Bertram, A. K.
2009-01-01
A laser based vacuum ultraviolet (VUV) light source using resonance enhanced four wave difference mixing in xenon gas was developed for near threshold ionization of organics in atmospheric aerosol particles. The source delivers high intensity pulses of VUV light (in the range of 10^10 to 10^13 photons/pulse depending on wavelength, 5 ns FWHM) with a continuously tunable wavelength from 122 nm (10.2 eV) to 168 nm (7.4 eV). The setup allows for tight (<1 mm^2) and precise focusing (μrad pointing angle adjustability), attributes required for single particle detection. The generated VUV is separated from the pump wavelengths by a custom monochromator which ensures high spectral purity and minimizes absorptive losses. The performance of the source was characterized using organic molecules in the gas phase and optimal working conditions are reported. In the gas phase measurements, photoionization efficiency (PIE) curves were collected for seven different organic species with ionization energies spanning the full wavelength range of the VUV source. The measured appearance energies are very close to the literature values of the ionization energies for all seven species. The effectiveness of the source for single particle studies was demonstrated by analysis of individual caffeine aerosols vaporized by a pulsed CO2 laser in an ion trap mass spectrometer. Mass spectra from single particles down to 300 nm in diameter were collected. Excellent signal to noise characteristics for these small particles give a caffeine detection limit of 8 × 10^5 molecules, which is equivalent to a single 75 nm aerosol, or approximately 1.5% of a 300 nm particle. The appearance energy of caffeine originating from the aerosol was also measured and found to be 7.91 ± 0.05 eV, in good agreement with literature values.
Non-line-of-sight ultraviolet link loss in noncoplanar geometry.
Wang, Leijie; Xu, Zhengyuan; Sadler, Brian M
2010-04-15
Various path loss models have been developed for solar blind non-line-of-sight UV communication links under an assumption of coplanar source beam axis and receiver pointing direction. This work further extends an existing single-scattering coplanar analytical model to noncoplanar geometry. The model is derived as a function of geometric parameters and atmospheric characteristics. Its behavior is numerically studied in different noncoplanar geometric settings.
Numerical Analysis of the Acoustic Field of Tip-Clearance Flow
NASA Astrophysics Data System (ADS)
Alavi Moghadam, S. M.; M. Meinke Team; W. Schröder Team
2015-11-01
Numerical simulations of the acoustic field generated by a shrouded axial fan are studied by a hybrid fluid-dynamics-acoustics method. In a first step, large-eddy simulations are performed to investigate the dynamics of tip clearance flow for various tip gap sizes and to determine the acoustic sources. The simulations are performed for a single blade out of five blades with periodic boundary conditions in the circumferential direction on a multi-block structured mesh with 1.4 × 10^8 grid points. The turbulent flow is simulated at a Reynolds number of 9.36 × 10^5 at undisturbed inflow condition and the results are compared with experimental data. The diameter and strength of the tip vortex increase with the tip gap size, while simultaneously the efficiency of the fan decreases. In a second step, the acoustic field in the near field is determined by solving the acoustic perturbation equations (APE) on a mesh for a single blade consisting of approx. 9.8 × 10^8 grid points. The overall agreement of the pressure spectrum and its directivity with measurements confirms the correct identification of the sound sources and accurate prediction of the acoustic duct propagation. The results show that the larger the tip gap size, the higher the broadband noise level.
Health Information Research Platform (HIReP)--an architecture pattern.
Schreiweis, Björn; Schneider, Gerd; Eichner, Theresia; Bergh, Björn; Heinze, Oliver
2014-01-01
Secondary use, or single source, is still far from routine in healthcare, although large amounts of data are available, either structured or unstructured. As data are stored in multiple systems, using them for biomedical research is difficult. Clinical data warehouses already help overcome this issue, but currently they are only used for certain parts of biomedical research. A comprehensive research platform based on a generic architecture pattern could increase the benefits of existing data warehouses for both patient care and research by meeting two objectives: serving as a so-called single point of truth and acting as a mediator between patient care and research, strengthening interaction and close collaboration. Another effect is to reduce barriers to the implementation of data warehouses. Taking further settings into account, the architecture of a clinical data warehouse supporting patient care and biomedical research needs to be integrated with biomaterial banks and other sources. This work provides a solution conceptualizing a comprehensive architecture pattern for a Health Information Research Platform (HIReP), derived from use cases in the patient care and biomedical research domains. It serves as a single IT infrastructure providing solutions for any type of use case.
Hansman, Jan; Mrdja, Dusan; Slivka, Jaroslav; Krmar, Miodrag; Bikit, Istvan
2015-05-01
The activity of environmental samples is usually measured by high-resolution HPGe gamma spectrometers. In this work, a set-up with a 9 in. × 9 in. NaI well-detector of 3 in. thickness and a 3 in. × 3 in. plug detector in a 15-cm-thick lead shielding is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232, and K-40 content in the samples by this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a given source matrix and geometry by Geant4 simulation. We found discrepancies between simulated and experimental efficiencies of 5-50%, which may be mainly due to effects of light collection within the detector volume, an effect that was not taken into account by the simulations. The influence of random coincidence summing on detection efficiency for radionuclide activities in the range 130-4000 Bq was negligible. This paper also describes how the detection efficiency depends on the position of the radioactive point source. To avoid large dead time, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and also for coincidence summing gamma lines are presented.
Long-Term Stability of the NIST Standard Ultrasonic Source.
Fick, Steven E
2008-01-01
The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17 year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST."
Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao
2018-06-05
Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-D), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the backward travel time probability density (BTTPD) approach is extended to assess the 3-D deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-D imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
Rule, Michael E.; Vargas-Irwin, Carlos E.; Donoghue, John P.
2017-01-01
Determining the relationship between single-neuron spiking and transient (~20 Hz) β-local field potential (β-LFP) oscillations is an important step for understanding the role of these oscillations in motor cortex. We show that whereas motor cortex firing rates and beta spiking rhythmicity remain sustained during steady-state movement preparation periods, β-LFP oscillations emerge, in contrast, as short transient events. Single-neuron mean firing rates within and outside transient β-LFP events showed no differences, and no consistent correlation was found between the beta oscillation amplitude and firing rates, as was the case for movement- and visual cue-related β-LFP suppression. Importantly, well-isolated single units featuring beta-rhythmic spiking (43%, 125/292) showed no apparent or only weak phase coupling with the transient β-LFP oscillations. Similar results were obtained for the population spiking. These findings were common in triple microelectrode array recordings from primary motor (M1), ventral premotor (PMv), and dorsal premotor (PMd) cortices in nonhuman primates during movement preparation. Although beta spiking rhythmicity indicates strong membrane potential fluctuations in the beta band, it does not imply strong phase coupling with β-LFP oscillations. The observed dissociation points to two different sources of variation in motor cortex β-LFPs: one that impacts single-neuron spiking dynamics and another related to the generation of mesoscopic β-LFP signals. Furthermore, our findings indicate that rhythmic spiking and diverse neuronal firing rates, which encode planned actions during movement preparation, may naturally limit the ability of different neuronal populations to strongly phase-couple to a single dominant oscillation frequency, leading to the observed spiking and β-LFP dissociation.
NEW & NOTEWORTHY We show that whereas motor cortex spiking rates and beta (~20 Hz) spiking rhythmicity remain sustained during steady-state movement preparation periods, β-local field potential (β-LFP) oscillations emerge, in contrast, as transient events. Furthermore, the β-LFP phase at which neurons spike drifts: phase coupling is typically weak or absent. This dissociation points to two sources of variation in the level of motor cortex beta: one that impacts single-neuron spiking and another related to the generation of measured mesoscopic β-LFPs. PMID:28100654
Waite, G.P.; Chouet, B.A.; Dawson, P.B.
2008-01-01
The current eruption at Mount St. Helens is characterized by dome building and shallow, repetitive, long-period (LP) earthquakes. Waveform cross-correlation reveals remarkable similarity for a majority of the earthquakes over periods of several weeks. Stacked spectra of these events display multiple peaks between 0.5 and 2 Hz that are common to most stations. Lower-amplitude very-long-period (VLP) events commonly accompany the LP events. We model the source mechanisms of LP and VLP events in the 0.5-4 s and 8-40 s bands, respectively, using data recorded in July 2005 with a 19-station temporary broadband network. The source mechanism of the LP events includes: 1) a volumetric component modeled as resonance of a gently NNW-dipping, steam-filled crack located directly beneath the actively extruding part of the new dome and within 100 m of the crater floor and 2) a vertical single force attributed to movement of the overlying dome. The VLP source, which also includes volumetric and single-force components, is 250 m deeper and NNW of the LP source, at the SW edge of the 1980s lava dome. The volumetric component points to the compression and expansion of a shallow, magma-filled sill, which is subparallel to the hydrothermal crack imaged at the LP source, coupled with a smaller component of expansion and compression of a dike. The single-force components are due to mass advection in the magma conduit. The location, geometry and timing of the sources suggest the VLP and LP events are caused by perturbations of a common crack system.
Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion
NASA Astrophysics Data System (ADS)
Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.
2017-03-01
Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
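The marginal-likelihood comparison described above can be sketched for the flux-matching factor alone, assuming Gaussian noise and a flat flux prior on [0, f_max]; the direction-matching factor is omitted, and the grid integration and parameter names are illustrative assumptions:

```python
import numpy as np

def log_marginal_no_object(fluxes, sigma):
    """Log-likelihood that the epoch fluxes are pure noise around zero."""
    return float(np.sum(-0.5 * (fluxes / sigma) ** 2
                        - 0.5 * np.log(2 * np.pi * sigma ** 2)))

def log_marginal_object(fluxes, sigma, f_max=10.0, n_grid=2001):
    """Log marginal likelihood of a constant-flux object: average the
    likelihood over a flat flux prior on [0, f_max] (simple grid rule)."""
    F = np.linspace(0.0, f_max, n_grid)
    loglike = np.sum(-0.5 * ((fluxes[:, None] - F) / sigma) ** 2
                     - 0.5 * np.log(2 * np.pi * sigma ** 2), axis=0)
    like = np.exp(loglike - loglike.max())        # shift for stability
    integral = like.sum() * (F[1] - F[0]) / f_max  # prior density = 1/f_max
    return float(np.log(integral) + loglike.max())

def log_bayes_factor(fluxes, sigma):
    """Positive values favour 'object present' over 'noise only'."""
    fluxes = np.asarray(fluxes, dtype=float)
    return (log_marginal_object(fluxes, sigma)
            - log_marginal_no_object(fluxes, sigma))
```

Consistent low-significance fluxes across epochs accumulate evidence for a real object, while near-zero fluxes favour the no-object hypothesis because the flux prior spreads the object hypothesis thinly.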
Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam
2016-01-01
Local field potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computation of LFPs allows the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single-neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point-source, line-source, and RC-filter-based approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a and N2b waves, and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes simulation of multi-electrode array LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity and extracellular and evoked LFP signals.
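The point-source approximation mentioned above treats each compartmental membrane current as a monopole in a homogeneous resistive medium. A minimal sketch (the 0.3 S/m conductivity is a commonly quoted cortical value, used here as an assumption; function names are illustrative, not LFPsim's API):

```python
import numpy as np

def point_source_potential(I, src_pos, elec_pos, sigma=0.3):
    """Extracellular potential [V] of a current monopole I [A] at src_pos,
    seen by an electrode at elec_pos, in a homogeneous medium of
    conductivity sigma [S/m]: phi = I / (4 * pi * sigma * r)."""
    r = np.linalg.norm(np.asarray(elec_pos, float) - np.asarray(src_pos, float))
    return I / (4.0 * np.pi * sigma * r)

def lfp_from_compartments(currents, positions, elec_pos, sigma=0.3):
    """Sum the point-source contributions of many compartments: the LFP
    is linear in the membrane currents (superposition)."""
    return sum(point_source_potential(I, p, elec_pos, sigma)
               for I, p in zip(currents, positions))
```

The line-source approximation refines this by integrating the current along each cylindrical segment rather than lumping it at a point, which matters mainly for electrodes close to the morphology.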
High Sensitivity, High Angular Resolution Far-infrared Photometry from the KAO
NASA Technical Reports Server (NTRS)
Lester, D.; Harvey, P. M.; Wilking, B. A.; Joy, M.
1984-01-01
Most of the luminosity of embedded sources is reemitted in the far-infrared continuum. Measurements in the far-infrared are essential to understand the energetics of the interstellar medium, and of star formation regions in particular. Measurements from the KAO are made in diffraction-limited beams that sample a spatial scale considerably smaller than that given by IRAS. The KAO instrument technology has matured to the point that the single-scan limiting flux of IRAS at 100 μm can be reached in a diffraction-limited beam in a single typical KAO observing leg. The far-infrared photometer system and a selection of recent observations are presented.
Multi-INT fusion to support port and harbor security and general maritime awareness
NASA Astrophysics Data System (ADS)
Von Kahle, Louis; Alexander, Robert
2006-05-01
The international community's focus on deterring terrorism has identified many vulnerabilities to a country's borders. These vulnerabilities include not only airports and rail lines but also the ports, harbors and miles of coastline which many countries must protect. In seeking to address this challenge, many technologies, processes and procedures have been identified that utilize single-point or single-source INTs (i.e., sources of intelligence - signals: SIGINT, imagery: IMINT, and open-source: INTERNET). These single-source data sets include the information gleaned from shipping lines, port arrival and departure information and information from shipboard based electronic systems like the Automatic Identification System (AIS). Typically these are evaluated and incorporated into products or decisions in a singular manner and not with any reference or relationship to each other. In this work, an identification and analysis of these data sets will be performed in order to determine: • any commonality between these data sets, • the ability to fuse information between these data sets, • the ability to determine relationships between these data sets, and • the ability to present any fused information or relationships in a timely manner. In summary, the work served as a means for determining the data sets that were of the highest value and for determining the fusion method for producing a product of value. More work can be done to define the data sets that have the most commonality and thus will help to produce a fused product in the most timely and efficient manner.
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e., is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases.
The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
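The permutation test described above can be sketched numerically. This is an illustrative stand-in only: a Gaussian-kernel smoother replaces the bivariate LOESS term of the actual GAM, the test statistic (variance of the smoothed risk surface) is one simple choice among several, and the toy data are invented:

```python
import numpy as np

def spatial_risk(xy, y, bandwidth=0.3):
    """Gaussian-kernel estimate of the local case proportion at each
    subject's location -- a crude stand-in for a bivariate LOESS smoother."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w @ y / w.sum(axis=1)

def permutation_test(xy, y, n_perm=199, seed=0):
    """Is residential location associated with case status?  Compare the
    variance of the smoothed risk surface against its null distribution
    under random relabelling of cases and controls."""
    rng = np.random.default_rng(seed)
    stat = np.var(spatial_risk(xy, y))
    null = [np.var(spatial_risk(xy, rng.permutation(y))) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

# toy data: a point source at the origin raising nearby log odds
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(400, 2))
p = 1 / (1 + np.exp(4 * np.hypot(xy[:, 0], xy[:, 1]) - 1))  # risk falls with distance
y = (rng.uniform(size=400) < p).astype(float)
print(permutation_test(xy, y))  # small p-value: the smoothing term is needed
```

A rejection here says only that location predicts outcome; locating the high-risk area is the separate sensitivity question the abstract addresses.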
PMID:20642827
Large-angle illumination STEM: Toward three-dimensional atom-by-atom imaging
Ishikawa, Ryo; Lupini, Andrew R.; Hinuma, Yoyo; ...
2014-11-26
To completely understand and control materials and their properties, it is of critical importance to determine their atomic structures in all three dimensions. Recent revolutionary advances in electron optics – the inventions of geometric and chromatic aberration correctors as well as electron source monochromators – have provided fertile ground for performing optical depth sectioning at atomic-scale dimensions. In this study we theoretically demonstrate the imaging of top/sub-surface atomic structures and identify the depth of single dopants, single vacancies and other point defects within materials by large-angle illumination scanning transmission electron microscopy (LAI-STEM). The proposed method also allows us to measure specimen properties such as thickness or three-dimensional surface morphology using observations from a single crystallographic orientation.
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contribution to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter <10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (CALPUFF) Dispersion Modeling system to simulate individual source contributions at several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating either that source's emissions alone or the total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially as the incremental improvement in air quality associated with this common abatement strategy, achieved through costly emissions capping, may not deliver the desired benefit in terms of lower exposure.
The application of atmospheric dispersion models for source apportionment helps identify major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may yield minimal return on investment in terms of improved air quality at sensitive receptors.
Robust Airborne Networking Extensions (RANGE)
2008-02-01
IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology...Multicast versus broadcast in a MANET,” in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee and R. Atwood, “Rendezvous point relocation in protocol independent...computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
NASA Astrophysics Data System (ADS)
Cao, Liji; Peter, Jörg
2013-06-01
The adoption of axially oriented line illumination patterns for fluorescence excitation in small animals is investigated for fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT). A trimodal single-photon-emission-computed-tomography/computed-tomography/optical-tomography (SPECT-CT-OT) small-animal imaging system is modified to employ point- and line-laser excitation sources. These sources can be arbitrarily positioned around the imaged object. The line source is set to illuminate the object along its entire axial direction. A comparative evaluation of point and line illumination patterns for FSI and FOT is provided using phantom as well as mouse data. Given the trimodal setup, CT data are used to guide the optical approaches by providing boundary information. Furthermore, FOT results are compared to SPECT. Results show that line-laser illumination yields a larger axial field of view (FOV) in FSI mode, hence faster data acquisition, and practically acceptable FOT reconstruction throughout the whole animal. Superimposed SPECT and FOT data provide additional information on similarities as well as differences in the distribution and uptake of both probe types, and fused CT data further enhance the anatomical localization of the tracer distribution in vivo. The feasibility of line-laser excitation for three-dimensional fluorescence imaging and tomography is demonstrated to motivate further research, not with the intention of replacing one illumination scheme with the other.
Quality assessment of MEG-to-MRI coregistrations
NASA Astrophysics Data System (ADS)
Sonntag, Hermann; Haueisen, Jens; Maess, Burkhard
2018-04-01
For high precision in source reconstruction of magnetoencephalography (MEG) or electroencephalography data, high accuracy of the coregistration of sources and sensors is mandatory. Usually, the source space is derived from magnetic resonance imaging (MRI). In most cases, however, no quality assessment is reported for sensor-to-MRI coregistrations. If any, typically the root mean square (RMS) of point residuals is provided. It has been shown, however, that the RMS of residuals does not correlate with coregistration errors. We suggest using the target registration error (TRE) as the criterion for the quality of sensor-to-MRI coregistrations. TRE measures the effect of uncertainty in coregistrations at all points of interest. In total, 5544 sensor-to-head and 128 head-to-MRI coregistration data sets, from a single MEG laboratory, were analyzed. An adaptive Metropolis algorithm was used to estimate the optimal coregistration and to sample the coregistration parameters (rotation and translation). We found an average TRE between 1.3 and 2.3 mm at the head surface. Further, we observed a mean absolute difference in coregistration parameters between the Metropolis and iterative closest point algorithms of (1.9 ± 15)° and (1.1 ± 9) m. A paired-sample t-test indicated a significant improvement in goal function minimization by using the Metropolis algorithm. The sampled parameters allowed computation of TRE on the entire grid of the MRI volume. Hence, we recommend the Metropolis algorithm for head-to-MRI coregistrations.
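Given posterior samples of the coregistration parameters, TRE at any point of interest can be computed directly. A minimal sketch under assumed conventions (axis-angle rotations, translations as vectors in metres; the jitter magnitudes and target cloud are invented for illustration):

```python
import numpy as np

def rot(v):
    """Rotation matrix from an axis-angle vector v (Rodrigues' formula)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    kx, ky, kz = v / theta
    K = np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def tre(samples, r_opt, t_opt, targets):
    """Target registration error at each target point: RMS distance between
    the point mapped by the sampled coregistrations and by the optimal one."""
    p_opt = targets @ rot(r_opt).T + t_opt
    err2 = np.zeros(len(targets))
    for r, t in samples:
        err2 += ((targets @ rot(r).T + t - p_opt) ** 2).sum(axis=1)
    return np.sqrt(err2 / len(samples))

# hypothetical posterior samples: small jitter around the optimum
rng = np.random.default_rng(0)
r_opt, t_opt = np.array([0.0, 0.0, 0.1]), np.array([1.0, 2.0, 3.0])
samples = [(r_opt + rng.normal(0, 0.01, 3), t_opt + rng.normal(0, 0.001, 3))
           for _ in range(500)]
targets = rng.normal(0, 0.09, (50, 3))      # head-sized point cloud, metres
print(tre(samples, r_opt, t_opt, targets).mean())   # ~1e-3 m, millimetre scale
```

Unlike residual RMS, this measures the mapping uncertainty at the points that matter (e.g. the cortical grid), which is why it tracks coregistration error.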
NASA Technical Reports Server (NTRS)
Greenhalgh, Phillip O.
2004-01-01
In the production of each Space Shuttle Reusable Solid Rocket Motor (RSRM), over 100,000 inspections are performed. ATK Thiokol Inc. reviewed these inspections to ensure that a robust inspection system is maintained. The principal effort within this endeavor was the systematic identification and evaluation of inspections considered to be single-point. Single-point inspections are those accomplished on components, materials, and tooling by only one person, involving no other check. The purpose was to more accurately characterize risk and ultimately address and/or mitigate risk associated with single-point inspections. After the initial review of all inspections and identification/assessment of single-point inspections, review teams applied risk prioritization methodology similar to that used in a Process Failure Modes Effects Analysis to derive a Risk Prioritization Number for each single-point inspection. After the prioritization of risk, each single-point inspection determined to have significant risk was provided either with risk-mitigating actions or with rationale for acceptance. This effort gave confidence to the RSRM program that the correct inspections are being accomplished, that there is appropriate justification for those that remain as single-point inspections, and that risk mitigation was applied to further reduce the risk of higher-risk single-point inspections. This paper examines the process, results, and lessons learned in identifying, assessing, and mitigating risk associated with single-point inspections accomplished in the production of the Space Shuttle RSRM.
NASA Astrophysics Data System (ADS)
Townsley, Leisa K.; Broos, Patrick S.; Feigelson, Eric D.; Garmire, Gordon P.; Getman, Konstantin V.
2006-04-01
We have studied the X-ray point-source population of the 30 Doradus (30 Dor) star-forming complex in the Large Magellanic Cloud using high spatial resolution X-ray images and spatially resolved spectra obtained with the Advanced CCD Imaging Spectrometer (ACIS) on board the Chandra X-Ray Observatory. Here we describe the X-ray sources in a 17'×17' field centered on R136, the massive star cluster at the center of the main 30 Dor nebula. We detect 20 of the 32 Wolf-Rayet stars in the ACIS field. The cluster R136 is resolved at the subarcsecond level into almost 100 X-ray sources, including many typical O3-O5 stars, as well as a few bright X-ray sources previously reported. Over 2 orders of magnitude of scatter in LX is seen among R136 O stars, suggesting that X-ray emission in the most massive stars depends critically on the details of wind properties and the binarity of each system, rather than reflecting the widely reported characteristic value LX/Lbol ≈ 10^-7. Such a canonical ratio may exist for single massive stars in R136, but our data are too shallow to confirm this relationship. Through this and future X-ray studies of 30 Dor, the complete life cycle of a massive stellar cluster can be revealed.
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing and acquisition window, in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of varying the point of view on Iterative Closest Point (ICP) alignment, and also on deformation-tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds.
We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows across space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms, to understand how instrumental error affects final results, and to improve the scan acquisition methodology by finding the best compromise between point density, positioning, and acquisition time for the best possible accuracy in characterizing topographic change.
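A point cloud simulator of this kind can start from a simple per-pulse error model. The sketch below assumes Gaussian range noise whose standard deviation grows with range and with incidence angle; the coefficients and geometry are invented for illustration, not those of any particular scanner:

```python
import numpy as np

def simulate_ranging_noise(points, normals, origin, sigma0=0.002, k=1e-5, seed=0):
    """Perturb each point along its beam with Gaussian range noise whose
    spread grows with range and incidence angle (assumed error model)."""
    rng = np.random.default_rng(seed)
    rays = points - origin
    dist = np.linalg.norm(rays, axis=1)
    u = rays / dist[:, None]                      # unit beam directions
    cos_inc = np.abs((u * normals).sum(axis=1)).clip(1e-3, 1.0)
    sigma = (sigma0 + k * dist) / cos_inc         # grazing hits are noisier
    return points + u * rng.normal(0.0, sigma)[:, None]

# a flat vertical wall scanned from 50 m away
x, z = np.meshgrid(np.linspace(-20, 20, 40), np.linspace(0, 10, 10))
wall = np.column_stack([x.ravel(), np.full(x.size, 50.0), z.ravel()])
normals = np.tile([0.0, -1.0, 0.0], (wall.shape[0], 1))
noisy = simulate_ranging_noise(wall, normals, origin=np.zeros(3))
```

Feeding such synthetic clouds to ICP or change-detection code gives a known ground truth against which detection thresholds can be calibrated.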
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shwetha, Bondel; Ravikumar, Manickam, E-mail: drravikumarm@gmail.com; Supe, Sanjay S.
2012-04-01
Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of 2 treatment planning systems from Varian Medical Systems, namely ABACUS and BrachyVision. The dose distribution of an Ir-192 source generated with a single dwell position was compared using the ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and reference points RU, LU, RM, LM, bladder, and rectum. For a single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient cases there was approximately a 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4% to 1.5%. For bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. These discrepancies are caused by differences in the calculation methodology adopted by the 2 systems.
XRT and SphinX Joint Flare Study: AR 11024
NASA Astrophysics Data System (ADS)
Engell, Alexander; Sylwester, J.; Siarkowski, M.
2010-05-01
From 12:00 UT on July 3 through July 7, 2009, SphinX (Solar Photometer IN X-rays) observes 130 flares with active region (AR) 11024 being the only AR on disk. XRT (X-Ray Telescope) is able to observe 64 of these flare events. The combination of both instruments results in a flare study revealing (1) a relationship between flux emergence and flare rate; (2) that the presence of active region loops typically results in different flare morphologies (single- and multiple-loop flares) than when an active region loop environment is lacking, in which case more cusp and point-like flares are observed; (3) that cusp and point-like flares often originate from the same location; and (4) a distribution of flare temperatures corresponding to the different flare morphologies. The differences between the observed flare morphologies may arise because plasma heated through the flaring process is confined by the proximity of loop structures, as for the single- and multiple-loop flares, while cusp and point-like flares occur in an early-phase environment that lacks loops. The continuing flux emergence of AR 11024 likely provides different magnetic interactions and may be the source responsible for all of the flares.
Planar location of the simulative acoustic source based on fiber optic sensor array
NASA Astrophysics Data System (ADS)
Liang, Yi-Jun; Liu, Jun-feng; Zhang, Qiao-ping; Mu, Lin-lin
2010-06-01
A fiber optic sensor array structured from four Sagnac fiber optic sensors is proposed to detect and locate a simulative source of acoustic emission (AE). The sensing loops of the Sagnac interferometer (SI) are regarded as point sensors owing to their small size. Based on the derived output light intensity expression of the SI, the optimum working condition of the Sagnac fiber optic sensor is discussed through MATLAB simulation. Four sensors are placed on a steel plate to form the sensor array, and the location algorithms are explained. When an impact is generated by an artificial AE source at any position on the plate, the AE signal is detected by the four sensors at different times. With the help of a single-chip microcomputer (SCM), which calculates the position of the AE source and displays it on an LED, we implemented intelligent detection and location.
Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.
Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G
2014-12-01
PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized chemically by ion chromatography. As a result, 143 samples together with their ionic composition were collected. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types/areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and Saharan dust episodes.
Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika
2014-10-01
We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser, enabling simultaneous extraction of structural and chemical ("morphomolecular") information from biological samples. Spectral domain OCT prescreens the specimen, providing a fast, ultrahigh resolution (4×12 μm axial and transverse) wide-field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm⁻¹ without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.
A grid-connected single-phase photovoltaic micro inverter
NASA Astrophysics Data System (ADS)
Wen, X. Y.; Lin, P. J.; Chen, Z. C.; Wu, L. J.; Cheng, S. Y.
2017-11-01
In this paper, the topology of a single-phase grid-connected photovoltaic (PV) micro-inverter is proposed. The PV micro-inverter consists of a DC-DC stage with high-voltage-gain boost and a DC-AC conversion stage. In the first stage, we apply an active clamp circuit and two voltage multipliers to achieve soft switching and high voltage gain. In addition, the flower pollination algorithm (FPA) is employed for maximum power point tracking (MPPT) of the PV module in this stage. The second stage cascades an H-bridge inverter and an LCL filter. To feed high-quality sinusoidal power into the grid, a software phase lock together with an outer voltage loop and inner current loop is adopted as the control strategy. The performance of the proposed topology is tested in Matlab/Simulink. A PV module with a maximum power of 300 W and a maximum power point voltage of 40 V is applied as the input source. The simulation results indicate that the proposed topology and the control strategy are feasible.
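The paper's MPPT stage uses the flower pollination algorithm; to illustrate the tracking idea itself, here is a minimal perturb-and-observe loop (a simpler, classical MPPT method, not the paper's) on a toy concave power curve whose maximum matches the quoted module ratings of 300 W at 40 V. The curve is illustrative, not a diode model:

```python
def pv_power(v):
    """Toy concave P-V curve peaking at 40 V / 300 W (not a real PV model)."""
    return max(0.0, 300.0 - 0.5 * (v - 40.0) ** 2)

def perturb_and_observe(v0=30.0, step=0.5, iters=100):
    """Step the operating voltage; keep the direction while power rises,
    reverse it on a drop.  Converges to an oscillation around the MPP."""
    v, p, direction = v0, pv_power(v0), +1
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # overshot the peak: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v, p = perturb_and_observe()
print(round(v, 1), round(p, 1))
```

The steady-state oscillation of ±one step around the MPP is the classic P&O drawback that metaheuristics such as the FPA aim to reduce.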
Innovative Near Real-Time Data Dissemination Tools Developed by the Space Weather Research Center
NASA Astrophysics Data System (ADS)
Mullinix, R.; Maddox, M. M.; Berrios, D.; Kuznetsova, M.; Pulkkinen, A.; Rastaetter, L.; Zheng, Y.
2012-12-01
Space weather affects virtually all of NASA's endeavors, from robotic missions to human exploration. Knowledge and prediction of space weather conditions are therefore essential to NASA operations. The diverse nature of currently available space environment measurements and modeling products compels the need for a single access point to such information. The Integrated Space Weather Analysis (iSWA) system provides this single point of access along with the capability to collect and catalog a vast range of sources, including both observational and model data. The NASA Goddard Space Weather Research Center heavily utilizes the iSWA system daily for research, space weather model validation, and forecasting for NASA missions. iSWA provides the capability to view and analyze near real-time space weather data from anywhere in the world. This presentation will describe the technology behind the iSWA system and how to use it for space weather research, forecasting, training, education, and sharing.
Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu
2008-01-01
Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescent images with high spatial and temporal resolution. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled into homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both high-speed imaging (up to 100 Hz) of intracellular Ca²⁺ propagation and multi-color imaging of Ca²⁺ and PKC-γ dynamics in living cells.
Low, Dennis J.; Conger, Randall W.
2001-01-01
Between February 1996 and November 2000, geophysical logging was conducted in 27 open borehole wells in and adjacent to the Butz Landfill Superfund Site, Jackson Township, Monroe County, Pa., to determine casing depth and depths of water-producing zones, water-receiving zones, and zones of vertical borehole flow. The wells range in depth from 57 to 319 feet below land surface. The geophysical logging determined the placement of well screens and packers, which allow monitoring and sampling of water-bearing zones in the fractured bedrock so that the horizontal and vertical distribution of contaminated ground water migrating from known sources could be determined. Geophysical logging included collection of caliper, natural-gamma, single-point-resistance, fluid-resistivity, fluid-temperature, and video logs. Caliper and video logs were used to locate fractures, joints, and weathered zones. Inflections on single-point-resistance, fluid-temperature, and fluid-resistivity logs indicated possible water-bearing fractures, and heatpulse-flowmeter measurements verified these locations. Natural-gamma logs provided information on stratigraphy.
Quantum communications system with integrated photonic devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordholt, Jane E.; Peterson, Charles Glen; Newell, Raymond Thorson
Security is increased in quantum communication (QC) systems lacking a true single-photon laser source by encoding a transmitted optical signal with two or more decoy-states. A variable attenuator or amplitude modulator randomly imposes average photon values onto the optical signal based on data input and the predetermined decoy-states. By measuring and comparing photon distributions for a received QC signal, a single-photon transmittance is estimated. Fiber birefringence is compensated by applying polarization modulation. A transmitter can be configured to transmit in conjugate polarization bases whose states of polarization (SOPs) can be represented as equidistant points on a great circle on the Poincaré sphere so that the received SOPs are mapped to equidistant points on a great circle and routed to corresponding detectors. Transmitters are implemented in quantum communication cards and can be assembled from micro-optical components, or transmitter components can be fabricated as part of a monolithic or hybrid chip-scale circuit.
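The decoy-state estimate mentioned above can be sketched numerically. Assuming Poisson photon-number statistics for a phase-randomized weak coherent source and the standard vacuum+weak-decoy lower bound on the single-photon yield (the Ma et al. 2005 form); the channel parameters below are invented for illustration:

```python
import math

def gain(mu, y0, eta):
    """Overall detection gain Q_mu: photon number is Poisson(mu), and an
    n-photon pulse is detected with yield 1 - (1 - y0)(1 - eta)^n."""
    return sum(math.exp(-mu) * mu ** n / math.factorial(n)
               * (1 - (1 - y0) * (1 - eta) ** n) for n in range(30))

def y1_lower_bound(mu, nu, q_mu, q_nu, y0):
    """Vacuum+weak-decoy lower bound on the single-photon yield Y1."""
    return (mu / (mu * nu - nu ** 2)) * (
        q_nu * math.exp(nu)
        - q_mu * math.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * y0)

eta, y0 = 0.1, 1e-5          # assumed channel transmittance, dark-count yield
mu, nu = 0.5, 0.1            # signal and decoy mean photon numbers
q_mu, q_nu = gain(mu, y0, eta), gain(nu, y0, eta)
y1 = y1_lower_bound(mu, nu, q_mu, q_nu, y0)
print(y1)                    # close to, and below, the true Y1 = y0 + eta - y0*eta
```

Comparing the yields measured at the two intensities thus bounds the single-photon contribution without ever having a true single-photon source.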
High-energy neutrinos from FR0 radio galaxies?
NASA Astrophysics Data System (ADS)
Tavecchio, F.; Righi, C.; Capetti, A.; Grandi, P.; Ghisellini, G.
2018-04-01
The sources responsible for the emission of high-energy (≳100 TeV) neutrinos detected by IceCube are still unknown. Among the possible candidates, active galactic nuclei with relativistic jets are often examined, since the outflowing plasma seems to offer the ideal environment to accelerate the required parent high-energy cosmic rays. The non-detection of single point sources or, almost equivalently, the absence in the IceCube events of multiplets originating from the same sky position constrains the cosmic density and the neutrino output of these sources, pointing to a numerous population of faint sources. Here we explore the possibility that FR0 radio galaxies, the population of compact sources recently identified in large radio and optical surveys and representing the bulk of the radio-loud AGN population, are suitable candidates for neutrino emission. Modelling the spectral energy distribution of an FR0 radio galaxy recently associated with a γ-ray source detected by the Large Area Telescope onboard Fermi, we derive the physical parameters of its jet, in particular the power it carries. We consider the possible mechanisms of neutrino production, concluding that pγ reactions in the jet between protons and ambient radiation are too inefficient to sustain the required output. We propose an alternative scenario in which protons, accelerated in the jet, escape from it and diffuse in the host galaxy, producing neutrinos as a result of pp scattering with the interstellar gas, in strict analogy with the processes taking place in star-forming galaxies.
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure for acquiring accurate single-point concentration measurements over a substantially long time so as to obtain converged statistics for the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, including sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results for concentration fluctuations when the plume is released in the outer layer are emphasised. Results for the mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
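The reflected Gaussian model referred to above places an image source below the wall so that the no-flux condition holds at the surface. A minimal sketch of the wall-normal mean-concentration shape (arbitrary units; the σ and s_z values are illustrative, not from the experiment):

```python
import numpy as np

def reflected_gaussian(z, s_z, sigma):
    """Mean concentration profile for an elevated source at height s_z above
    a perfectly reflecting wall at z = 0: real source plus image at -s_z."""
    return (np.exp(-(z - s_z) ** 2 / (2 * sigma ** 2))
            + np.exp(-(z + s_z) ** 2 / (2 * sigma ** 2)))

z = np.linspace(0.0, 2.0, 201)            # wall-normal distance, units of delta
profile = reflected_gaussian(z, s_z=0.2, sigma=0.1)
```

By construction dC/dz vanishes at the wall, and the profile peaks at the source height when σ is small relative to s_z, which is why elevated-source data fit it well while sources in the intermittent outer region do not.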
NASA Astrophysics Data System (ADS)
Bush, Craig R.
This dissertation presents a novel current source converter topology that is primarily intended for single-phase photovoltaic (PV) applications. In comparison with existing PV inverter technology, the salient features of the proposed topology are: a) the low-frequency ripple (at double the line frequency) that is common to single-phase inverters is greatly reduced; b) the absence of low-frequency ripple enables significantly reduced-size passive components to achieve the necessary DC-link stiffness; and c) improved maximum power point tracking (MPPT) performance is readily achieved due to the tightened current ripple, even with reduced-size passive components. The proposed topology does not utilize any electrolytic capacitors. Instead, an inductor is used as the DC-link filter, and reliable AC film capacitors are utilized for the filter and auxiliary capacitor. The proposed topology has a life expectancy on par with PV panels. The proposed modulation technique can be used for any current source inverter where unbalanced three-phase operation is desired, such as active filters and power controllers. The proposed topology is ready for the next phase of microgrid and power system controllers in that it accepts reactive power commands. This work presents the proposed topology and its working principle, supported by numerical verification and hardware results. Conclusions and future work are also presented.
Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.
Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small imagemore » patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.« less
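For the constant-flux, Gaussian-noise setting described above, the flux-matching factor of the marginal likelihood ratio has a closed form. A toy sketch under assumed forms (a zero-mean Gaussian prior of width tau on the object flux; the paper's full patch likelihood also includes a direction-matching factor, omitted here):

```python
import numpy as np

def log_bayes_factor(fluxes, sigma, tau):
    """log p(D | one object) - log p(D | no object) for single-epoch flux
    estimates with Gaussian noise sigma, marginalizing a shared constant
    flux with prior N(0, tau^2). Closed-form Gaussian integral."""
    f = np.asarray(fluxes, dtype=float)
    n = f.size
    prec = n / sigma**2 + 1.0 / tau**2      # posterior precision of the flux
    b = f.sum() / sigma**2                  # data term coupling to the flux
    return 0.5 * b**2 / prec - 0.5 * np.log(tau**2 * prec)

# Epochs agreeing on a significant flux favour the object hypothesis;
# near-zero fluxes favour the noise-only hypothesis.
lbf_obj = log_bayes_factor([5.1, 4.8, 5.3], sigma=1.0, tau=10.0)
lbf_noise = log_bayes_factor([0.1, -0.2, 0.05], sigma=1.0, tau=10.0)
```

Accumulating log Bayes factors epoch by epoch is exactly the incremental, catalog-level strategy the abstract contrasts with image stacking.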
NASA Astrophysics Data System (ADS)
Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.
2011-07-01
Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43 deg cleaved fiber (i.e., the radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed in a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angle-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.
NASA Astrophysics Data System (ADS)
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; Lemon, Cameron A.; Auger, Matthew W.; Banerji, Manda; Hung, Johnathan M.; Koposov, Sergey E.; Lidman, Christopher E.; Reed, Sophie L.; Allam, Sahar; Benoit-Lévy, Aurélien; Bertin, Emmanuel; Brooks, David; Buckley-Geer, Elizabeth; Carnero Rosell, Aurelio; Carrasco Kind, Matias; Carretero, Jorge; Cunha, Carlos E.; da Costa, Luiz N.; Desai, Shantanu; Diehl, H. Thomas; Dietrich, Jörg P.; Evrard, August E.; Finley, David A.; Flaugher, Brenna; Fosalba, Pablo; Frieman, Josh; Gerdes, David W.; Goldstein, Daniel A.; Gruen, Daniel; Gruendl, Robert A.; Gutierrez, Gaston; Honscheid, Klaus; James, David J.; Kuehn, Kyler; Kuropatkin, Nikolay; Lima, Marcos; Lin, Huan; Maia, Marcio A. G.; Marshall, Jennifer L.; Martini, Paul; Melchior, Peter; Miquel, Ramon; Ogando, Ricardo; Plazas Malagón, Andrés; Reil, Kevin; Romer, Kathy; Sanchez, Eusebio; Santiago, Basilio; Scarpine, Vic; Sevilla-Noarbe, Ignacio; Soares-Santos, Marcelle; Sobreira, Flavia; Suchyta, Eric; Tarle, Gregory; Thomas, Daniel; Tucker, Douglas L.; Walker, Alistair R.
2017-03-01
We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift zs = 2.74 and image separation of 2.9 arcsec lensed by a foreground zl = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian mixture model (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with IAB = 18.61 and IAB = 20.44, comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with the NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission-line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a singular isothermal ellipsoid and find the Einstein radius θE ~ 1.47 arcsec, enclosed mass Menc ~ 4 × 10^11 M⊙ and a time delay of ~52 d. The relatively wide separation, month-scale time delay and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
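The class-conditional idea behind the GMM selection can be illustrated with a one-component Gaussian per class in a toy two-colour space (all numbers invented; the paper fits multi-component mixtures to DES, VHS and WISE colours):

```python
import numpy as np

def fit_gaussian(X):
    """Fit a single multivariate Gaussian to labelled colour vectors
    (a one-component stand-in for the per-class mixture models)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Log density of x under N(mu, cov), up to the usual Gaussian form."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

# Toy colour-colour training sets: quasar locus bluer than the stellar locus
rng = np.random.default_rng(0)
quasars = rng.normal([0.2, 0.1], 0.1, size=(200, 2))
stars = rng.normal([0.8, 0.6], 0.1, size=(200, 2))
models = {c: fit_gaussian(X) for c, X in [("quasar", quasars), ("star", stars)]}

def classify(x):
    """Assign the class whose fitted model gives the higher likelihood."""
    return max(models, key=lambda c: log_likelihood(np.asarray(x), *models[c]))
```

Candidates are then ranked by the class-likelihoods rather than by morphology, which is what makes the selection robust to the point-source/galaxy blending the abstract describes.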
The Investigation of the Impact of SO2 Emissions from the Hong Kong International Airport
NASA Astrophysics Data System (ADS)
Gray, J. P.; Lau, A. K.; Yuan, Z.
2009-12-01
A previous study of the emissions from Hong Kong’s International Airport (HKIA) utilized a semi-quantitative wind direction and speed technique and identified HKIA as a significant source of SO2 in the region. That study, however, was based on a single data point, and the conclusions reached appeared inconsistent with accepted thinking regarding aircraft and airport emissions, prompting an in-depth look at airport emissions and their impact on the neighbouring region. Varied modelling techniques, making use of a more complete dataset, were employed to ensure a more comprehensive and defensible result. A similar analysis technique and the same monitoring station used in the previous study (Tung Chung) were combined with three additional stations to provide broader coverage and reach firmer conclusions. While results at Tung Chung were similar to those in the previous study, information from the other three sensors pointed to a source further to the north, in the direction of the Black Point Coal Power Station and other power plants further north in Mainland China. This conclusion was confirmed by use of the CALMET/CALPUFF model to reproduce emission plumes from major sources within the region on problem days. The modelled results clearly showed that, in the cases simulated, pollution events noted at Tung Chung were primarily influenced by emissions originating at Hong Kong’s and Mainland China’s power stations, and that the impact from HKIA is small. This study reiterates the importance of properly identifying all major sources in wind-receptor studies.
NASA Astrophysics Data System (ADS)
Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.
2017-12-01
Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: 1. distributed, top-down permafrost degradation, and 2. discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling at multiple points in the stream network can reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2 and included three distinct types of Arctic landscape: mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 and 20 km2, indicating a continuum of diffuse- and point-source dynamics depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen and indicated possible strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chofor, N; Poppe, B; Nebah, F
Purpose: In a brachytherapy photon field in water, the fluence-averaged mean photon energy Em at the point of measurement correlates with the radiation quality correction factor kQ of a non-water-equivalent detector. To support the experimental assessment of Em, we show that the normalized signal ratio (NSR) of a pair of radiation detectors, an unshielded silicon diode and a diamond detector, can serve to measure Em in a water phantom at an Ir-192 unit. Methods: Photon fluence spectra were computed in EGSnrc based on a detailed model of the GammaMed source. Factor kQ was calculated as the ratio of the detector's spectrum-weighted responses under calibration conditions at a 60Co unit and under brachytherapy conditions at various radial distances from the source. The NSR was investigated for a pair consisting of a p-type unshielded silicon diode 60012 and a synthetic single-crystal diamond detector 60019 (both PTW Freiburg). Each detector was positioned according to its effective point of measurement, with its axis facing the source. Lateral signal profiles were scanned under complete scatter conditions, and the NSR was determined as the quotient of the signal ratio under application conditions x and that at position r_ref = 1 cm. Results: The radiation quality correction factor kQ shows a close correlation with the mean photon energy Em. The NSR of the diode/diamond pair changes by a factor of two from 0-18 cm from the source, while Em drops from 350 to 150 keV. Theoretical and measured NSR profiles agree within ± 2% for points within 5 cm of the source. Conclusion: Given the close correlation between the radiation quality correction factor kQ and the mean photon energy Em, the NSR provides a practical means of assessing Em under clinical conditions. Precise detector positioning is the major challenge.
Hudson, Diane Brage; Campbell-Grossman, Christie; Kupzyk, Kevin A; Brown, Sara E; Yates, Bernice C; Hanna, Kathleen M
2016-01-01
The aims of this study are to describe for single, low-income, adolescent, African American new mothers how (1) primary sources of social support changed over time, (2) the level of social support (emotional, informational, tangible, and problematic) from these primary sources changed over time, and (3) social support from the primary supporter was associated with mothers' psychosocial well-being (self-esteem and loneliness) over time. A secondary analysis was conducted of data from a previous social support intervention study. The sample consisted of 35 single, low-income, adolescent (mean [SD] age, 18.3 [1.7] years), African American new mothers. Mothers completed social support, self-esteem, and loneliness instruments at 1 and 6 weeks and 3 and 6 months postpartum. Most mothers (64.7%) had changes in their primary social support provider during the first 6 months postpartum. The combination of the adolescent's mother and boyfriend provided the highest level of support, no matter the type, relative to any other source of support. At every time point, positive correlations were found between emotional support and self-esteem and between problematic support and loneliness. Single, low-income, African American, adolescent new mothers are at risk for not having a consistent source of support, which may lead to lower self-esteem and greater loneliness. Clinical nurse specialists could facilitate care guidelines for these new mothers to identify their sources of support at each home visit and advocate for the adolescent's mother and boyfriend to work together to provide support. Bolstering the mothers' natural sources of support can potentially improve self-esteem and reduce loneliness. Improvement in these sources of support could prevent a decline in the mothers' psychosocial well-being. Development and testing of support interventions are advocated; the findings could guide clinical nurse specialists in addressing these new mothers' needs.
Sources of Wind Variability at a Single Station in Complex Terrain During Tropical Cyclone Passage
2013-12-01
... Mesoscale Prediction System; CPA, closest point of approach; ET, extratropical transition; FNMOC, Fleet Numerical Meteorology and Oceanography Center. ... forecasts. However, the TC forecast tracks and warnings they issue necessarily focus on the large-scale structure of the storm, and are not ... winds at one station. Also, this technique is a storm-centered forecast, and even if the grid spacing is on the order of one kilometer, it is unlikely ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volker, Arno; Hunter, Alan
Anisotropic materials are being used increasingly in high-performance industrial applications, particularly in the aeronautical and nuclear industries. Some important examples of these materials are composites, single-crystal and heavy-grained metals. Ultrasonic array imaging in these materials requires exact knowledge of the anisotropic material properties. Without this information, the images can be adversely affected, causing a reduction in defect detection and characterization performance. The imaging operation can be formulated in two consecutive and reciprocal focusing steps, i.e., focusing the sources and then focusing the receivers. Applying just one of these focusing steps yields an interesting intermediate domain. The resulting common focus point gather (CFP-gather) can be interpreted to determine the propagation operator. After focusing the sources, the observed travel-time in the CFP-gather describes the propagation from the focus point to the receivers. If the correct propagation operator is used, the measured travel-times should be the same as the time-reversed focusing operator due to reciprocity. This makes it possible to iteratively update the focusing operator using the data only and allows the material to be imaged without explicit knowledge of the anisotropic material parameters. Furthermore, the determined propagation operator can also be used to invert for the anisotropic medium parameters. This paper details the proposed technique and demonstrates its use on simulated array data from a specimen of Inconel single-crystal alloy commonly used in the aeronautical and nuclear industries.
Electro-optic modulation of a laser at microwave frequencies for interferometric purposes
NASA Astrophysics Data System (ADS)
Specht, Paul E.; Jilek, Brook A.
2017-02-01
A multi-point microwave interferometer (MPMI) concept was previously proposed by the authors for spatially-resolved, non-invasive tracking of a shock, reaction, or detonation front in energetic media [P. Specht et al., AIP Conf. Proc. 1793, 160010 (2017).]. The advantage of the MPMI concept over current microwave interferometry techniques is its detection of Doppler shifted microwave signals through electro-optic (EO) modulation of a laser. Since EO modulation preserves spatial variations in the Doppler shift, collecting the EO modulated laser light into a fiber array for recording with an optical heterodyne interferometer yields spatially-resolved velocity information. This work demonstrates the underlying physical principle of the MPMI diagnostic: the monitoring of a microwave signal with nanosecond temporal resolution using an optical heterodyne interferometer. For this purpose, the MPMI concept was simplified to a single-point construction using two tunable 1550 nm lasers and a 35.2 GHz microwave source. A (110) ZnTe crystal imparted the microwave frequency onto a laser, which was combined with a reference laser for determination of the microwave frequency in an optical heterodyne interferometer. A single, characteristic frequency associated with the microwave source was identified in all experiments, providing a means to monitor a microwave signal on nanosecond time scales. Lastly, areas for improving the frequency resolution of this technique are discussed, focusing on increasing the phase-modulated signal strength.
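The core measurement, recovering a microwave frequency from an optical heterodyne beat, can be sketched numerically: two optical tones offset by the microwave frequency produce a beat in the detected intensity whose spectral peak an FFT locates (the sample rate and record length below are invented for the sketch, not instrument values):

```python
import numpy as np

fs = 200e9               # sample rate, Hz (assumption for the sketch)
f_beat = 35.2e9          # offset imparted by the microwave source, Hz
t = np.arange(0, 2e-6, 1.0 / fs)

# Idealized photodiode signal: the interference term of the two lasers
intensity = 1.0 + np.cos(2.0 * np.pi * f_beat * t)

# Locate the beat in the spectrum of the AC-coupled signal
spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_est = freqs[np.argmax(spec)]
```

The frequency resolution is the reciprocal of the record length (here 500 kHz), which is the trade-off against the nanosecond temporal resolution discussed in the abstract.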
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J. W.
2012-04-01
Low-frequency seismic signals are one class of volcano-seismic events that have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low-frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the proposed concept of Neuberg et al. (2006), using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, the synthetic seismograms were generated using a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low-frequency events, inversions for the physical source mechanisms have become increasingly common. We therefore perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (deliberately wrong) assumption of an underlying point source, rather than an extended source, as the trigger mechanism of the low-frequency seismic events. We will discuss differences between the inversion results and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which were derived assuming a point source, in terms of an extended source.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
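The model-fitting step can be illustrated for the simplest of the three isotherms: generating exact Langmuir data and recovering its parameters via the classical linearization (the values of q_s and b are toy numbers; the paper works from simulated breakthrough curves rather than exact isotherm points):

```python
import numpy as np

def langmuir(c, qs, b):
    """Langmuir isotherm: q = qs * b * c / (1 + b * c)."""
    return qs * b * c / (1.0 + b * c)

def fit_langmuir(c, q):
    """Estimate (qs, b) from isotherm data points by ordinary least squares
    on the classical linearization c/q = 1/(qs*b) + c/qs."""
    slope, intercept = np.polyfit(c, c / q, 1)
    qs = 1.0 / slope
    b = 1.0 / (qs * intercept)
    return qs, b

# Noise-free synthetic data recovers the model parameters exactly; errors in
# the derived data points (as studied in the paper) would bias qs and b.
c = np.linspace(0.1, 5.0, 20)
q = langmuir(c, qs=10.0, b=0.5)
qs_hat, b_hat = fit_langmuir(c, q)
```

Fitting the same data points to the wrong model family is the second error source the abstract distinguishes, and it does not vanish with column efficiency.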
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has emerged with the rapid development of urbanization. The particularities of urban land use and the increase in impervious surface area make urban non-point source pollution different from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting inherent landscape structures and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by applying BMPs in planning to improve urban landscape heterogeneity and control urban non-point source pollution.
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
Pixel-level simulation software is described, composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF for a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineering design program (Zemax), which describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelet decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it over the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response, tabulated on a grid of values of wavelength, position on the sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization in which the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
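Because the fast simulator is linear in the bin contents, the chi-square minimization reduces (for unit pixel variances) to linear least squares. A schematic with an invented response matrix standing in for the tabulated pixel response:

```python
import numpy as np

# Toy response matrix A: response of pixel i to unit flux in spectral bin j
rng = np.random.default_rng(1)
n_pix, n_bins = 120, 10
A = np.abs(rng.normal(size=(n_pix, n_bins)))

# Noise-free "observed" pixels from a known spectrum
true_spectrum = np.linspace(1.0, 2.0, n_bins)
pixels = A @ true_spectrum

# Chi-square minimization over the bin contents = linear least squares;
# combining two arms just stacks their rows into one system.
spectrum_hat, *_ = np.linalg.lstsq(A, pixels, rcond=None)
```

With noise-free data the bin contents are recovered exactly; with noisy pixels the same solve gives the chi-square minimum, and weighting the rows by inverse pixel uncertainties handles non-unit variances.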
NASA Astrophysics Data System (ADS)
Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.
2018-05-01
High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.
Ren, Kangning; Liang, Qionglin; Mu, Xuan; Luo, Guoan; Wang, Yiming
2009-03-07
A novel miniaturized, portable fluorescence detection system for capillary array electrophoresis (CAE) on a microfluidic chip was developed, consisting of a scanning light-emitting diode (LED) light source and a single-point photoelectric sensor. Without a charge-coupled device (CCD), lenses, fibers, or moving parts, the system is greatly simplified. Pulsed driving of the LED significantly increased the sensitivity and greatly reduced the power consumption and photobleaching. The highly integrated system is robust and easy to use. Together, these advantages realize the concept of a portable micro-total analysis system (micro-TAS) that can run from a single universal serial bus (USB) port. Compared with traditional CAE detection systems, the current system can scan the radial capillary array at a high scanning rate. An 8-channel CAE separation of fluorescein isothiocyanate (FITC)-labeled arginine (Arg) on chip was demonstrated with this system, resulting in a limit of detection (LOD) of 640 amol.
NASA Astrophysics Data System (ADS)
Tumanov, Sergiu
A goodness-of-fit test based on rank statistics was applied to test the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was treated as an integer-valued quantity, which is acceptable if one properly chooses the unit of measurement (in this case μg m⁻³) and if account is taken of the limited accuracy of the measurements. The results of the test being satisfactory, even in the range of the upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles and cumulative probabilities of threshold concentrations being exceeded, at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
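The Eggenberger-Polya law is the negative binomial distribution, which can be parameterized directly from the mean and variance supplied by dispersion modelling. A hedged sketch (assuming variance > mean, as the law requires; `scipy.stats.nbinom` uses the (n, p) parameterization with mean n(1-p)/p and variance n(1-p)/p²):

```python
from scipy.stats import nbinom

def polya_exceedance(mean, var, threshold):
    """P(C > threshold) for an Eggenberger-Polya (negative binomial)
    distributed concentration, parameterized by its mean and variance
    (e.g. from routine dispersion modelling). Requires var > mean."""
    p = mean / var
    n = mean**2 / (var - mean)
    return nbinom.sf(threshold, n, p)
```

Evaluating this at every grid point of the network, with modelled means and variances, gives the exceedance probabilities mentioned in the abstract.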
A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission
NASA Astrophysics Data System (ADS)
Chen, Xi
The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool to understand the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full-sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source finding technique that utilizes the spectral difference between point sources and the CMB to form internal linear combinations of multifrequency maps, suppressing the CMB and better revealing sources. When applied to the WMAP 41, 64 and 94 GHz maps, this technique has not only enabled detection of sources that were previously cataloged by independent methods, but also allowed the discovery of new sources. Without the noise contribution from the CMB, the sensitivity of this method improves rapidly with integration time.
The number of detections varies as ∝ t^0.72 in the two-band search and ∝ t^0.70 in the three-band search from one year to five years, respectively, in comparison to t^0.40 for the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is proven to be both complementary to and competitive with all the current source finding techniques in WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and this emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of the ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure can successfully recover the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.
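The CMB-free combination described above exploits the fact that, in thermodynamic temperature units, the CMB contributes identically at every frequency while point sources do not. A toy numpy sketch (the assumed `source_spectrum` stands in for the spectral behaviour of radio sources; the actual WMAP analysis is considerably more involved):

```python
import numpy as np

def cmb_free_ilc(maps, source_spectrum):
    """Internal linear combination that nulls the CMB.

    maps            : (n_freq, n_pix) sky maps in thermodynamic temperature,
                      where the CMB contributes equally at every frequency
    source_spectrum : (n_freq,) assumed point-source spectral shape
    Returns a combined map whose weights sum to zero (cancelling the CMB)
    while retaining unit response to the source spectrum.
    """
    s = np.asarray(source_spectrum, float)
    ones = np.ones_like(s)
    w = s - ones * (s @ ones) / (ones @ ones)   # project out the CMB mode
    w /= w @ s                                  # unit response to the source
    return w @ maps
```

Because the weights sum to zero, any signal common to all frequencies (the CMB) cancels exactly, while a source with the assumed spectrum survives with unit amplitude.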
Long period seismic source characterization at Popocatépetl volcano, Mexico
Arciniega-Ceballos, Alejandra; Dawson, Phillip; Chouet, Bernard A.
2012-01-01
The seismicity of Popocatépetl is dominated by long-period and very-long period signals associated with hydrothermal processes and magmatic degassing. We model the source mechanism of repetitive long-period signals in the 0.4–2 s band from a 15-station broadband network by stacking long-period events with similar waveforms to improve the signal-to-noise ratio. The data are well fitted by a point source located within the summit crater ~250 m below the crater floor and ~200 m from the inferred magma conduit. The inferred source includes a volumetric component that can be modeled as resonance of a horizontal steam-filled crack and a vertical single force component. The long-period events are thought to be related to the interaction between the magmatic system and a perched hydrothermal system. Repetitive injection of fluid into the horizontal fracture and subsequent sudden discharge when a critical pressure threshold is met provides a non-destructive source process.
Open-source do-it-yourself multi-color fluorescence smartphone microscopy
Sung, Yulung; Campa, Fernando; Shih, Wei-Chuan
2017-01-01
Fluorescence microscopy is an important technique for cellular and microbiological investigations. Translating this technique onto a smartphone can enable particularly powerful applications such as on-site analysis, on-demand monitoring, and point-of-care diagnostics. Current fluorescence smartphone microscope setups require precise illumination and imaging alignment which altogether limit its broad adoption. We report a multi-color fluorescence smartphone microscope with a single contact lens-like add-on lens and slide-launched total-internal-reflection guided illumination for three common tasks in investigative fluorescence microscopy: autofluorescence, fluorescent stains, and immunofluorescence. The open-source, simple and cost-effective design has the potential for do-it-yourself fluorescence smartphone microscopy. PMID:29188104
Increasing the Complexity of the Illumination May Reduce Gloss Constancy
Wendt, Gunnar; Faul, Franz
2017-01-01
We examined in which way gradual changes in the geometric structure of the illumination affect the perceived glossiness of a surface. The test stimuli were computer-generated three-dimensional scenes with a single test object that was illuminated by three point light sources, whose relative positions in space were systematically varied. In the first experiment, the subjects were asked to adjust the microscale smoothness of a match object illuminated by a single light source such that it has the same perceived glossiness as the test stimulus. We found that small changes in the structure of the light field can induce dramatic changes in perceived glossiness and that this effect is modulated by the microscale smoothness of the test object. The results of a second experiment indicate that the degree of overlap of nearby highlights plays a major role in this effect: Whenever the degree of overlap in a group of highlights is so large that they perceptually merge into a single highlight, the glossiness of the surface is systematically underestimated. In addition, we examined the predictability of the smoothness settings by a linear model that is based on a set of four different global image statistics. PMID:29250308
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility in various fields, such as telerobotics and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
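The background-subtraction and ranging steps can be sketched as follows (a simplified pinhole-camera model with hypothetical parameters; the patent's geometry, with both vertical and horizontal laser offsets, is more general):

```python
import numpy as np

def locate_spot(before, after, thresh=30):
    """Isolate the laser spot by differencing frames taken before and
    after illumination, then return its intensity-weighted centroid
    (row, col). Pixels common to both frames cancel in the difference."""
    diff = after.astype(int) - before.astype(int)
    mask = diff > thresh                      # keep only the new bright spot
    rows, cols = np.nonzero(mask)
    weights = diff[mask].astype(float)
    return (rows @ weights) / weights.sum(), (cols @ weights) / weights.sum()

def range_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole-camera triangulation: range grows as disparity shrinks."""
    return baseline_m * focal_px / disparity_px
```

The disparity between the spot centroid and the reference point (in pixels) then maps to range through the known laser/camera offset.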
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.
We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2 arcsec diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1 arcsec in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
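The weighting scheme reduces, in its simplest form, to inverse-variance co-addition; a sketch (per-pixel variance maps stand in for the seeing/transparency/background weighting actually used):

```python
import numpy as np

def coadd(frames, variances):
    """Inverse-variance weighted co-addition of registered single-epoch
    frames; returns the science image and its weight image.

    frames, variances : (n_epochs, ny, nx) arrays
    """
    frames = np.asarray(frames, float)
    w = 1.0 / np.asarray(variances, float)   # per-pixel weights
    weight_image = w.sum(axis=0)
    science = (w * frames).sum(axis=0) / weight_image
    return science, weight_image
```

For N equally deep epochs the co-add gains 2.5·log10(√N) magnitudes of depth, i.e. roughly 2.3 mag for 70-90 epochs, consistent with the 1.9-2.2 mag improvement quoted above for realistically unequal epochs.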
NASA Astrophysics Data System (ADS)
Lachapelle, G.; Cannon, M. E.; Qiu, W.; Varner, C.
1996-09-01
Aircraft single point position accuracy is assessed through a comparison of the single point coordinates with corresponding DGPS-derived coordinates. The platform utilized for this evaluation is a Naval Air Warfare Center P-3 Orion aircraft. Data were collected over a period of about 40 hours, spread over six days, off Florida's East Coast in July 1994, using DGPS reference stations in Jacksonville, FL, and Warminster, PA. The analysis shows that the consistency between the aircraft coordinates obtained in single point positioning mode and in DGPS mode is about 1 m (rms) in latitude and longitude, and 2 m (rms) in height, with instantaneous errors of up to a few metres due to the effect of the ionosphere on the single point L1 solutions.
Moranda, Arianna
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute in a very narrow coastal ecosystem, and was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminal activity. This study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities. PMID:29270328
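A minimal sketch of the two ingredients, PCA of the concentration matrix and a crude ratio-matching flag (the tolerance and reference profile are hypothetical; the paper's refining procedure is more elaborate):

```python
import numpy as np

def pca_loadings(X, n_components):
    """PCA of a (samples x chemicals) concentration matrix via SVD of the
    standardized data; returns component loadings on each chemical."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    return vt[:n_components]                  # rows = principal components

def ratio_match(sample, profile, tol=0.25):
    """Crude ratio matching: flag a sample as a possible point source when
    its chemical ratios stay within a relative tolerance of a reference
    source profile (ratios are scale-free, so dilution does not matter)."""
    r_sample = sample / sample.sum()
    r_profile = profile / profile.sum()
    return bool(np.all(np.abs(r_sample - r_profile) <= tol * r_profile))
```

Because ratios are invariant under dilution, a diluted copy of a source profile still matches, while a sample with a different chemical fingerprint does not.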
NASA Astrophysics Data System (ADS)
Digman, Michelle
Fluorescence fluctuation spectroscopy has evolved from single-point detection of molecular diffusion to a family of microscopy imaging correlation tools (i.e. ICS, RICS, STICS, and kICS) useful in deriving the spatial-temporal dynamics of proteins in living cells. The advantage of the imaging techniques is the simultaneous measurement of all points in an image, with frame rates that keep increasing thanks to more sensitive cameras and new microscopy modalities such as sheet illumination. A new frontier in this area is now emerging toward mapping diffusion rates and protein dynamics in two and three dimensions. In this talk, I will discuss the evolution of fluctuation analysis from the single point source to mapping diffusion in whole cells, and the technology behind this technique. In particular, new methods of analysis exploit correlations of molecular fluctuations measured at distant points (pair correlation analysis) and methods that exploit spatial averaging of fluctuations in small regions (iMSD). For example, the pair correlation function (pCF) analysis, performed between adjacent pixels in all possible radial directions, provides a window into anisotropic molecular diffusion. Similar to the connectivity atlas of neuronal connections from MRI diffusion tensor imaging, these new tools will be used to map the connectome of protein diffusion in living cells. For biological reaction-diffusion systems, live single-cell spatial-temporal analysis of protein dynamics provides a means to observe stochastic biochemical signaling in the context of the intracellular environment, which may lead to a better understanding of cancer cell invasion, stem cell differentiation, and other fundamental biological processes. National Institutes of Health Grant P41-RRO3155.
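The pair correlation function between two pixels can be sketched in a few lines (synthetic intensity traces; real data come from line scans or camera frames):

```python
import numpy as np

def pair_correlation(fa, fb, max_lag):
    """Pair correlation of intensity fluctuations recorded at two pixels:
    G(tau) = <dFa(t) dFb(t + tau)> / (<Fa><Fb>).  A positive peak at a
    delayed tau indicates molecules travelling from pixel a to pixel b."""
    da, db = fa - fa.mean(), fb - fb.mean()
    norm = fa.mean() * fb.mean()
    return np.array([np.mean(da[:len(da) - tau] * db[tau:]) / norm
                     for tau in range(max_lag)])
```

The lag of the peak, divided into the pixel separation, gives an apparent transit speed, and repeating the calculation in all radial directions reveals anisotropic diffusion.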
Dubinsky, Eric A; Butkus, Steven R; Andersen, Gary L
2016-11-15
Sources of fecal indicator bacteria are difficult to identify in watersheds that are impacted by a variety of non-point sources. We developed a molecular source tracking test using the PhyloChip microarray that detects and distinguishes fecal bacteria from humans, birds, ruminants, horses, pigs and dogs with a single test. The multiplexed assay targets 9001 different 25-mer fragments of 16S rRNA genes that are common to the bacterial community of each source type. Both random forests and SourceTracker were tested as discrimination tools, with SourceTracker classification producing superior specificity and sensitivity for all source types. Validation with 12 different mammalian sources in mixtures found 100% correct identification of the dominant source and 84-100% specificity. The test was applied to identify sources of fecal indicator bacteria in the Russian River watershed in California. We found widespread contamination by human sources during the wet season proximal to settlements with antiquated septic infrastructure and during the dry season at beaches during intense recreational activity. The test was more sensitive than common fecal indicator tests that failed to identify potential risks at these sites. Conversely, upstream beaches and numerous creeks with less reliance on onsite wastewater treatment contained no fecal signal from humans or other animals; however these waters did contain high counts of fecal indicator bacteria after rain. Microbial community analysis revealed that increased E. coli and enterococci at these locations did not co-occur with common fecal bacteria, but rather co-varied with copiotrophic bacteria that are common in freshwaters with high nutrient and carbon loading, suggesting runoff likely promoted the growth of environmental strains of E. coli and enterococci. 
These results indicate that machine-learning classification of PhyloChip microarray data can outperform conventional single marker tests that are used to assess health risks, and is an effective tool for distinguishing numerous fecal and environmental sources of pathogen indicators. Copyright © 2016 Elsevier Ltd. All rights reserved.
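As an illustration of the machine-learning classification step, here is a toy random-forest classifier on synthetic presence/absence profiles (the probe blocks and source labels are invented for the example; the actual test uses 9001 PhyloChip probe fragments, and SourceTracker classification proved superior to random forests):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
SOURCES = ["human", "bird", "ruminant", "dog"]

# Toy training data: each fecal source is enriched in its own block of
# ten (hypothetical) 16S probe fragments, on top of background noise.
X, y = [], []
for i, src in enumerate(SOURCES):
    profiles = rng.random((40, 40)) * 0.2
    profiles[:, i * 10:(i + 1) * 10] += 0.8   # source-specific signature
    X.append(profiles)
    y += [src] * 40

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.vstack(X), y)

# An unknown water sample enriched in the first probe block
unknown = rng.random((1, 40)) * 0.2
unknown[:, 0:10] += 0.8
prediction = clf.predict(unknown)[0]
```

With well-separated community signatures like these, the classifier recovers the dominant source; the paper's validation on real mixtures reports 84-100% specificity.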
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
Dorman, Michael F; Natale, Sarah; Loiselle, Louise
2018-03-01
Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. 
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better, than scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology
Methane - quick fix or tough target? New methods to reduce emissions.
NASA Astrophysics Data System (ADS)
Nisbet, E. G.; Lowry, D.; Fisher, R. E.; Brownlow, R.
2016-12-01
Methane is a cost-effective target for greenhouse gas reduction efforts. The UK's MOYA project is designed to improve understanding of the global methane budget and to point to new methods to reduce future emissions. Since 2007, methane has been increasing rapidly: in 2014 and 2015 growth was at rates last seen in the 1980s. Unlike 20th century growth, primarily driven by fossil fuel emissions in northern industrial nations, isotopic evidence implies present growth is driven by tropical biogenic sources such as wetlands and agriculture. Discovering why methane is rising is important. Schaefer et al. (Science, 2016) pointed out the potential clash between methane reduction efforts and food needs of a rising, better-fed (physically larger) human population. Our own work suggests tropical wetlands are major drivers of growth, responding to weather changes since 2007, but there is no acceptable way to reduce wetland emission. Just as sea ice decline indicates Arctic warming, methane may be the most obvious tracker of climate change in the wet tropics. Technical advances in instrumentation can do much in helping cut urban and industrial methane emissions. Mobile systems can be mounted on vehicles, while drone sampling can provide a 3D view to locate sources. Urban land planning often means large but different point sources are typically clustered (e.g. landfill or sewage plant near incinerator; gas wells next to cattle). High-precision grab-sample isotopic characterisation, using Keeling plots, can separate source signals, to identify specific emitters, even where they are closely juxtaposed. Our mobile campaigns in the UK, Kuwait, Hong Kong and E. Australia show the importance of major single sources, such as abandoned old wells, pipe leaks, or unregulated landfills. 
If such point sources can be individually identified, even when clustered, they will allow effective reduction efforts to occur: these can be profitable and/or improve industrial safety, for example in the case of gas leaks. Fossil fuels, landfills, waste, and biomass burning emit about 200 Tg/yr, or 35-40% of global methane emissions. Using inexpensive 3D mobile surveys coupled with high-precision isotopic measurement, it should be possible to cut emissions sharply, substantially reducing the methane burden even if tropical biogenic sources increase.
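The Keeling-plot separation mentioned above regresses the measured isotopic ratio against inverse concentration; the intercept is the isotopic signature of the added source. A sketch with synthetic grab samples (background and source values are illustrative):

```python
import numpy as np

def keeling_intercept(ch4_ppb, delta13c):
    """Keeling plot: regress delta-13C against 1/[CH4]; the intercept at
    1/[CH4] -> 0 is the isotopic signature of the added source."""
    slope, intercept = np.polyfit(1.0 / np.asarray(ch4_ppb, float),
                                  np.asarray(delta13c, float), 1)
    return intercept
```

For air sampled downwind of a single source, mass balance gives δ_obs = δ_source + c_bg(δ_bg − δ_source)/c, which is linear in 1/c, so the fitted intercept recovers δ_source even when sources are closely juxtaposed and sampled at different dilutions.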
NASA Astrophysics Data System (ADS)
Stoll, R., II; Christen, A.; Mahaffee, W.; Salesky, S.; Therias, A.; Caitlin, S.
2016-12-01
Pollution in the form of small particles has a strong impact on a wide variety of urban processes that play an important role in the function of urban ecosystems and ultimately human health and well-being. As a result, a substantial body of research exists on the sources, sinks, and transport characteristics of urban particulate matter. Most of the existing experimental work examining point sources employed gases (e.g., SF6) as the working medium. Furthermore, the focus of most studies has been on the dispersion of pollutants far from the source location. Here, our focus is on the turbulent dispersion of heavy particles in the near source region of a suburban neighborhood. To this end, we conducted a series of heavy particle releases in the Sunset neighborhood of Vancouver, Canada during June, 2017. The particles were dispersed from a near ground point source at two different locations. The Sunset neighborhood is composed mostly of single dwelling detached houses and has been used in numerous previous urban studies. One of the release points was just upwind of a 4-way intersection and the other in the middle of a contiguous block of houses. Each location had a significant density of trees. A minimum of four different successful release events were conducted at each site. During each release, fluorescing micro particles (mean diameter approx. 30 microns) were released from ultrasonic atomizer nozzles for a duration of approximately 20 minutes. The particles were sampled at 50 locations (1.5 m height) in the area downwind of the release over distances from 1-15 times the mean canopy height (~6 m) using rotating impaction traps. In addition to the 50 sampler locations, instantaneous wind velocities were measured with eight sonic anemometers distributed horizontally and vertically throughout the release area. 
The resulting particle plume distributions indicate a strong impact of local urban form in the near source region and a high degree of sensitivity to the local wind direction measured from the sonic anemometers. In addition to presenting the experimental data, initial comparisons to a Lagrangian particle dispersion model driven by a mass consistent diagnostic wind field will be presented.
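A zeroth-order Lagrangian stochastic model of the kind used for such comparisons can be sketched in a few lines (parameters are illustrative; a real model would be driven by the mass-consistent diagnostic wind field and include canopy and settling effects):

```python
import numpy as np

def lagrangian_plume(n_particles=2000, n_steps=200, dt=0.1,
                     u_mean=2.0, sigma_u=0.5, seed=0):
    """Zeroth-order Lagrangian stochastic sketch: particles advect with the
    mean wind (x direction) and take independent Gaussian turbulent kicks
    each step.  All particles are released from a point at the origin."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 2))          # (x, y) positions
    for _ in range(n_steps):
        turb = rng.normal(0.0, sigma_u, size=pos.shape)
        pos += (np.array([u_mean, 0.0]) + turb) * dt
    return pos
```

Histogramming the final positions at the sampler locations yields a modelled plume that can be compared with the impaction-trap counts; the crosswind spread grows with travel time, while the plume centroid tracks the mean wind direction.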
Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion
NASA Astrophysics Data System (ADS)
Hesser, T.; Farthing, M. W.; Brodie, K.
2016-02-01
The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, airborne bathymetric lidar, as well as inversion techniques from standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best-estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and in turn increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can range in temporal and spatial resolution that do not match the desired model parameters and therefore could lead to a reduction in the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements using sonic altimeters.
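The analysis step of a stochastic Ensemble Kalman Filter, the core of the inversion technique named above, can be sketched as follows (names and the observation operator are placeholders; the real operator maps depths to wave parameters through linear wave theory):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var, seed=0):
    """Stochastic EnKF analysis step.

    ensemble     : (n_members, n_state) prior depth estimates
    obs          : (n_obs,) gridded standoff observations
    obs_operator : maps one state vector to predicted observations
    obs_err_var  : observation error variance (scalar, for simplicity)
    """
    rng = np.random.default_rng(seed)
    hx = np.array([obs_operator(m) for m in ensemble])   # predicted obs
    x_mean, hx_mean = ensemble.mean(axis=0), hx.mean(axis=0)
    Xp, Yp = ensemble - x_mean, hx - hx_mean
    n = len(ensemble) - 1
    pxy = Xp.T @ Yp / n                                  # state-obs covariance
    pyy = Yp.T @ Yp / n + obs_err_var * np.eye(len(obs))
    K = pxy @ np.linalg.inv(pyy)                         # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var),
                                 (len(ensemble), len(obs)))
    return ensemble + (perturbed - hx) @ K.T             # analysis ensemble
```

Because the gain is built from ensemble covariances, coarsening the spatial or temporal resolution of the gridded observations directly degrades the update, which is the sensitivity the study investigates.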
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
This paper proposes a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy that uses redundant voltage vectors to achieve precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based Shunt Active Power Filter (SAPF). The SAPF eliminates source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates reactive power in three-phase four-wire electrical networks. The strategy simultaneously generates the gate switching pulses, balances the dc bus capacitor voltages (keeping the two dc bus capacitors at equal voltage), and reduces and fixes the switching frequency of the inverter switches. Nonlinear Back Stepping Controllers (NBSC) regulate the dc bus capacitor voltages and the SAPF injected currents to add robustness, stabilize the system, improve the response, and eliminate the overshoot and undershoot of a traditional PI (Proportional-Integral) controller. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are computed and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using Sim Power Systems and S-Functions in MATLAB/SIMULINK. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
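The THD figures of merit used in the comparison above have a simple spectral definition: the RMS of the harmonic amplitudes divided by the fundamental amplitude. A minimal sketch (not taken from the paper; the function and test waveform are illustrative only):

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=20):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental.
    Assumes the record contains a whole number of fundamental periods."""
    N = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / N      # single-sided amplitudes
    bin0 = int(round(f0 * N / fs))                    # fundamental frequency bin
    harmonics = [spec[k * bin0] for k in range(2, n_harmonics + 1)
                 if k * bin0 < len(spec)]
    return np.sqrt(np.sum(np.square(harmonics))) / spec[bin0]
```

For a 50 Hz current with a 10% third harmonic, this returns THDi of about 0.1 (10%).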
NASA Astrophysics Data System (ADS)
Ries, Paul A.
2012-05-01
The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which halve the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations to determine whether the telescope was blown off-source at some point due to wind. For observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> ⅔) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention of using the data to determine a thermal light curve. Instead, the data showed a striking wavelength-dependent deviation from a blackbody curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to confirm that the low emissivities were not due to being blown off-source. This emissivity is among the lowest ever detected in the solar system, but can be reproduced using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth.
These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.
DoD Life Cycle Management (LCM) and Product Support Manager (PSM) Rapid Deployment Training
2010-10-01
fielding, sustainment, and disposal of a DOD system across its life cycle.” (JCIDS Operation Manual) • “The PM shall be the single point of...devote more funds to development and procurement in order to modernize weapon systems . But, in fact, growth in operating and support costs has limited the...Requirements Differently could Reduce Weapon Systems ’ Total Ownership Costs The DoD “Death Spiral” (Source: Dr. Jacques S. Gansler, USD(A&T
High-performance semiconductor quantum-dot single-photon sources
NASA Astrophysics Data System (ADS)
Senellart, Pascale; Solomon, Glenn; White, Andrew
2017-11-01
Single photons are a fundamental element of most quantum optical technologies. The ideal single-photon source is an on-demand, deterministic, single-photon source delivering light pulses in a well-defined polarization and spatiotemporal mode, and containing exactly one photon. In addition, for many applications, there is a quantum advantage if the single photons are indistinguishable in all their degrees of freedom. Single-photon sources based on parametric down-conversion are currently used, and while excellent in many ways, scaling to large quantum optical systems remains challenging. In 2000, semiconductor quantum dots were shown to emit single photons, opening a path towards integrated single-photon sources. Here, we review the progress achieved in the past few years, and discuss remaining challenges. The latest quantum dot-based single-photon sources are edging closer to the ideal single-photon source, and have opened new possibilities for quantum technologies.
Moth-inspired navigation algorithm in a turbulent odor plume from a pulsating source.
Liberzon, Alexander; Harrington, Kyra; Daniel, Nimrod; Gurka, Roi; Harari, Ally; Zilman, Gregory
2018-01-01
Some female moths attract males by emitting series of pulses of pheromone filaments that propagate downwind. The turbulent nature of the wind creates a complex flow environment and causes the filaments to propagate as patches with varying concentration distributions. Inspired by moth navigation capabilities, we propose a navigation strategy that enables a flier to locate an upwind pulsating odor source in a windy environment using a single threshold-based detection sensor. This optomotor anemotaxis strategy is constructed from the physical properties of the turbulent flow carrying discrete puffs of odor and does not involve learning, memory, complex decision making or statistical methods. We suggest that in turbulent plumes from a pulsating point source, an instantaneously measurable quantity referred to as the "puff crossing time" improves the success rate compared with navigation strategies that do not use this information, such as temporally regular zigzags triggered by intermittent contact or by an "internal counter". Using computer simulations of fliers navigating in turbulent plumes of the pulsating point source, for varying flow parameters such as turbulence intensity, plume meandering and wind gusts, we obtained statistics of navigation paths towards the pheromone sources. We quantified the probability of successful navigation, as well as flight parameters such as the time spent searching and the total flight time, with respect to different turbulence intensities, meandering or gusts. The concepts learned using this model may help in designing odor-based navigation of miniature airborne autonomous vehicles.
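The single threshold-sensor navigation idea above can be illustrated with the classic cast-and-surge scheme it builds on: surge upwind while odor is detected, cast crosswind with widening sweeps after losing it. This sketch is a deliberately simplified stand-in (it omits the paper's puff-crossing-time refinement); the conical plume model, step sizes, and function names are all hypothetical.

```python
import numpy as np

def in_plume(pos):
    """Hypothetical time-averaged plume: source at the origin, wind toward +x,
    odor above threshold inside a cone that widens downwind."""
    x, y = pos
    return x > 0 and abs(y) < 0.1 * x + 0.5

def navigate(start, steps=5000):
    """Cast-and-surge with a single threshold sensor."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    cast_dir, cast_len, travelled = 1.0, 2, 0
    for _ in range(steps):
        if in_plume(pos):
            pos += (-1.0, 0.0)              # surge one step upwind
            cast_len, travelled = 2, 0      # reset the cast
        else:
            pos += (0.0, cast_dir)          # cast crosswind
            travelled += 1
            if travelled >= cast_len:       # reverse and widen the sweep
                cast_dir, cast_len, travelled = -cast_dir, cast_len * 2, 0
        path.append(pos.copy())
        if np.hypot(*pos) < 2.0:            # close enough to the source
            break
    return np.array(path)
```

Started downwind at (50, 3), the walker zigzags along the plume edge and converges on the source.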
Urodynamic catheter moisture sensor: A novel device to improve leak point pressure detection.
Marshall, Blake R; Arlen, Angela M; Kirsch, Andrew J
2016-06-01
High-quality urodynamic studies (UDS) in patients with neurogenic lower urinary tract dysfunction are important, as UDS may be the only reliable gauge of potential risk for upper tract deterioration and the optimal tool to guide lower urinary tract management. Reliance on direct visualization of leakage during typical UDS remains a potential source of error. Given the necessity of accurate leak point pressures, we developed a wireless leak detection sensor to eliminate the need for visual inspection during UDS. A mean decrease in detrusor leak point pressure of 3 cmH2O and a mean 11% decrease in capacity at leakage were observed when employing the sensor compared with visual inspection in children undergoing two fillings during a single UDS session. Removing the visual inspection component of UDS may improve the accuracy of pressure readings. Neurourol. Urodynam. 35:647-648, 2016. © 2015 Wiley Periodicals, Inc.
Coherent backscattering enhancement in cavities. Highlights of the role of symmetry.
Gallot, Thomas; Catheline, Stefan; Roux, Philippe
2011-04-01
Through experiments and simulations, the consequences of symmetry on coherent backscattering enhancement (CBE) are studied in cavities. Three main results are highlighted. First, CBE away from the source is observed: (a) at a single symmetric point in a one-dimensional (1-D) cavity, in a disk, and in a symmetric chaotic plate; (b) at three symmetric points in a two-dimensional (2-D) rectangle; and (c) at seven symmetric points in a three-dimensional (3-D) parallelepiped cavity. Second, the existence of enhanced intensity lines and planes in 2-D and 3-D simple-shape cavities is demonstrated. Third, it is shown how the anti-symmetry caused by special boundary conditions is responsible for the existence of a coherent backscattering decrement with a dimensional dependence of R = (1/2)^d, where d = 1, 2, 3 is the dimensionality of the cavity.
NGST/XRCF Design and Build Wavescope System Pallet
NASA Technical Reports Server (NTRS)
Geary, Joe
1999-01-01
Based on the successful Wavescope demonstration at MSFC at the end of March, the optical testing team decided to purchase an upgraded Wavescope from AOA. The MSFC version would include a higher-resolution camera (1000 x 1000 pixels), a higher-density lenslet array (150 x 150), updated software, and longer cables (to accommodate remote operation of the Wavescope optical head, which was resident in the Beam Guide Tube). The AOA proposal for the new instrument was received in mid-April, and the instrument was delivered to MSFC in mid-July. A considerable amount of effort was expended to provide the infrastructure needed for Wavescope operation and to incorporate it into the overall test system. This was provided by the Wavescope System Pallet (WSP) built by UAH. The WSP is illustrated. Several instruments are incorporated on this pallet: the Wavescope optical head; a PDI wavefront sensor; a point spread function sensor; and a Leica light-based distance-measuring sensor. In addition, there is a single-mode fiber point source (fed from a separate source pallet) which serves both as a reference for the Wavescope and as a source point for the test mirror. There is a dual-function lens which both collimates the beam from the test image point and images the test mirror onto the lenslet array. There is a high-quality collimator which can provide a flat input wavefront directly into the Wavescope. There are also various alignment aids such as an alignment laser, an alignment telescope, alignment sticks and apertures. The WSP was delivered to MSFC on 7/28/99. A picture shows the WSP installed in the Guide Tube at the X-Ray Calibration Facility (XRCF).
Techniques for grid manipulation and adaptation. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation, which provides explicit control of the physical grid shape and grid spacing through the movement of control points. It works well in an interactive computer graphics environment and hence is a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
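Algebraic grid generation of the kind referenced above builds interior grid points by interpolating between boundary curves; the control point form adds movable interior controls on top of this. The sketch below shows only the underlying transfinite interpolation step, not CPF itself; the function name and boundary parameterization are illustrative assumptions.

```python
import numpy as np

def tfi_grid(bottom, top, left, right, ni, nj):
    """Transfinite interpolation: build a structured 2-D grid from four
    boundary curves, each a function of a parameter in [0, 1] returning (x, y)."""
    xi, eta = np.linspace(0, 1, ni), np.linspace(0, 1, nj)
    X = np.zeros((ni, nj, 2))
    # Corner points, needed for the bilinear correction term.
    c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
    c01, c11 = np.array(top(0.0)), np.array(top(1.0))
    for i, s in enumerate(xi):
        for j, t in enumerate(eta):
            X[i, j] = ((1 - t) * np.array(bottom(s)) + t * np.array(top(s))
                       + (1 - s) * np.array(left(t)) + s * np.array(right(t))
                       - ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
                          + (1 - s) * t * c01 + s * t * c11))
    return X
```

With straight unit-square boundaries this reproduces a uniform Cartesian grid; curved boundaries yield a smoothly deformed grid.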
Ion photon emission microscope
Doyle, Barney L.
2003-04-22
An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high-energy ions from an unfocused source upon a sample. The exact entry point of an ion into the sample is determined by projection imaging of the ion-induced photons emitted at that point, and is correlated with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope and projected onto the image plane of a high-resolution, single-photon, position-sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects, including the malfunction of digital circuits, detected within the sample and caused by the individual ion that created these photons initially.
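The time correlation described above amounts to coincidence matching: each electrical-effect timestamp is paired with the photon burst recorded within a short window, recovering the responsible ion's entry point. A minimal sketch under assumed data shapes (the event tuples, window, and function name are hypothetical, not the patent's signal chain):

```python
import bisect

def correlate_events(photon_events, effect_times, window):
    """Assign each single-event effect in the sample to the photon burst
    recorded within +/- window of it.

    photon_events : time-sorted list of (t, x, y) from the position-sensitive
                    photon detector
    effect_times  : timestamps of detected single-event effects
    """
    times = [t for t, _, _ in photon_events]
    hits = []
    for te in effect_times:
        i = bisect.bisect_left(times, te - window)
        best = None
        while i < len(times) and times[i] <= te + window:
            t, x, y = photon_events[i]
            if best is None or abs(t - te) < abs(best[0] - te):
                best = (t, x, y)            # closest burst in time wins
            i += 1
        if best is not None:
            hits.append((te, best[1], best[2]))
    return hits
```

Effects with no photon burst inside the window are simply dropped rather than mis-assigned.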
Device for modular input high-speed multi-channel digitizing of electrical data
VanDeusen, Alan L.; Crist, Charles E.
1995-09-26
A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.
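The automated calibration against a single voltage reference described above reduces, per channel, to a two-point fit: digitize a known zero and the reference, then map raw codes to volts. The sketch below is an illustrative model only; the codes and function name are hypothetical, not taken from the patent.

```python
def calibrate(code_zero, code_ref, v_ref):
    """Per-channel two-point calibration from one precision reference:
    digitize 0 V (giving code_zero) and v_ref (giving code_ref), then
    return a function mapping raw ADC codes to volts."""
    gain = v_ref / (code_ref - code_zero)    # volts per code
    offset = -gain * code_zero               # removes the channel's offset
    return lambda code: gain * code + offset
```

Applying the returned function to a mid-scale code yields the expected linear voltage.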
Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.
de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard
2018-03-01
Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions from single or multiple point sources and from diffuse sources, which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by the volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
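The event mean concentration (EMC) concept invoked above is the flow-weighted average concentration over a runoff event: total pollutant mass divided by total runoff volume. A minimal sketch (function name and sample values are illustrative, not from the paper):

```python
def event_mean_concentration(concs, flows, dt):
    """Flow-weighted event mean concentration over a runoff event:
    EMC = sum(C_i * Q_i * dt) / sum(Q_i * dt),
    for concentration samples concs (e.g. mg/L) taken at flow rates flows
    (e.g. L/s), each representing an interval of dt seconds."""
    mass = sum(c * q * dt for c, q in zip(concs, flows))
    volume = sum(q * dt for q in flows)
    return mass / volume
```

Because the average is flow-weighted, samples taken during high-flow periods dominate the EMC, as intended for load-based prioritization.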
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used to model various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose, the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies their contributions, are compared. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
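The superposition step at the heart of conventional DPSM can be sketched as a sum of free-space point-source Green's functions. This illustrates only the basic superposition, not the CSR sources or shadow-region handling proposed in the paper; the function name and argument layout are assumptions.

```python
import numpy as np

def dpsm_field(obs_points, src_points, src_amps, k):
    """Complex field at observation points as a superposition of point sources:
    p(x) = sum_m A_m * exp(i*k*r_m) / r_m   (3-D free-space Green's function,
    harmonic time dependence suppressed)."""
    obs_points = np.asarray(obs_points, dtype=float)
    p = np.zeros(len(obs_points), dtype=complex)
    for src, amp in zip(src_points, src_amps):
        r = np.linalg.norm(obs_points - np.asarray(src, dtype=float), axis=1)
        p += amp * np.exp(1j * k * r) / r    # each source radiates in all directions
    return p
```

In a full DPSM model the amplitudes A_m are first solved from boundary conditions on the transducer face and interfaces; here they are given directly.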
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
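The FWHM measurement along a principal direction, as described above, reduces to finding where a 1-D profile through the reconstructed point source crosses half its maximum. A minimal sketch with linear interpolation of the crossings (function name and test profile are illustrative):

```python
import numpy as np

def fwhm(x, profile):
    """Full-width-at-half-maximum of a 1-D point-source profile,
    interpolating the half-maximum crossings linearly."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i_lo, i_hi):
        # Linear interpolation of the half-maximum crossing between samples.
        x0, x1 = x[i_lo], x[i_hi]
        y0, y1 = profile[i_lo], profile[i_hi]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else cross(i1, i1 + 1)
    return right - left
```

For a unit-sigma Gaussian profile this recovers the analytic value 2*sqrt(2*ln 2) ≈ 2.355.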
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog and its supplement, this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons
Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew CN; Swindale, Nicholas V; Murphy, Timothy H
2017-01-01
Understanding the basis of brain function requires knowledge of cortical operations over wide-spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps. DOI: http://dx.doi.org/10.7554/eLife.19976.001 PMID:28160463
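The mesoscale spike-triggered averaging procedure above can be sketched as averaging imaging frames around each spike time. This is a generic STA sketch, not the authors' pipeline; the array shapes, timestamp alignment, and function name are assumptions.

```python
import numpy as np

def spike_triggered_average(frames, spike_times, frame_times, lags):
    """Average wide-field imaging frames around each spike.

    frames      : (n_frames, h, w) imaging stack
    spike_times : spike timestamps from the cell being mapped
    frame_times : monotonic timestamp of each frame
    lags        : frame offsets relative to the frame at/after each spike
    """
    maps = np.zeros((len(lags),) + frames.shape[1:])
    count = 0
    for ts in spike_times:
        i = int(np.searchsorted(frame_times, ts))    # first frame at/after spike
        if i + min(lags) < 0 or i + max(lags) >= len(frames):
            continue                                 # skip spikes near the edges
        for j, lag in enumerate(lags):
            maps[j] += frames[i + lag]
        count += 1
    return maps / count
```

Negative lags give the cortical activity preceding the spike, positive lags the activity that follows, so a single neuron's relationship to mesoscale motifs can be read off the lag-resolved maps.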
Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S
2016-04-15
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
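The broken power-law source count distribution fitted above directly determines what fraction of the total flux is resolved into point sources: integrate S*dN/dS above the detection threshold and compare with the integral down to a much fainter floor. The sketch below uses the quoted indices (α1 = 2.49 and an α2 of 1.65 chosen inside the stated [1.60, 1.75] interval) with an arbitrary normalization; the flux limits and function names are illustrative.

```python
import numpy as np

def dnds(S, Sb=1.2e-11, a1=2.49, a2=1.65):
    """Broken power-law source counts, continuous at the break flux Sb
    (normalization arbitrary; units ph cm^-2 s^-1)."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= Sb, (S / Sb) ** (-a1), (S / Sb) ** (-a2))

def _trapz(y, x):
    # Simple trapezoid rule, avoiding version-dependent numpy helpers.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def resolved_fraction(Smin, Smax=1e-8, Sfloor=1e-14):
    """Fraction of the total flux integral of S*dN/dS contributed by
    sources brighter than Smin, relative to a much deeper floor Sfloor."""
    S_hi = np.logspace(np.log10(Smin), np.log10(Smax), 4000)
    S_all = np.logspace(np.log10(Sfloor), np.log10(Smax), 8000)
    return _trapz(S_hi * dnds(S_hi), S_hi) / _trapz(S_all * dnds(S_all), S_all)
```

Because α2 < 2, the flux integral converges at low S, which is why a broken power law can account for most of the extragalactic background with a finite source population.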
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; ...
2016-11-17
In this paper, we present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z s = 2.74 and image separation of 2.9 arcsec lensed by a foreground z l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with i AB = 18.61 and i AB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find the Einstein radius θ E ~ 1.47 arcsec, enclosed mass M enc ~ 4 × 10 11 M ⊙ and a time delay of ~52 d. Finally, the relatively wide separation, month scale time delay duration and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.
2002-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity (v), density (ρ), and ρv² fluctuations measured at various points in the jet plumes. The experiments follow the cause-and-effect method of sound source identification, where
Can satellite-based monitoring techniques be used to quantify volcanic CO2 emissions?
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Carn, Simon A.; Kuze, Akihiko; Kataoka, Fumie; Shiomi, Kei; Goto, Naoki; Popp, Christoph; Ajiro, Masataka; Suto, Hiroshi; Takeda, Toru; Kanekon, Sayaka; Sealing, Christine; Flower, Verity
2014-05-01
Since 2010, we have been investigating and improving methods to regularly target volcanic centers from space in order to detect volcanic carbon dioxide (CO2) point source anomalies, using the Japanese Greenhouse gases Observing SATellite (GOSAT). Our long-term goals are (a) better spatial and temporal coverage of volcano monitoring techniques; (b) improvement of the currently highly uncertain global CO2 emission inventory for volcanoes; and (c) use of volcanic CO2 emissions for high-altitude, strong point source emission and dispersion studies in atmospheric science. The difficulties posed by strong relief, orographic clouds, and aerosols are minimized by a small field of view and enhanced spectral resolving power, by employing repeat target-mode observation strategies, and by comparison to continuous ground-based sensor network validation data. GOSAT is a single-instrument Earth-observing greenhouse gas mission aboard JAXA's IBUKI satellite in sun-synchronous polar orbit. GOSAT's Fourier-Transform Spectrometer (TANSO-FTS) has been producing total-column XCO2 data since January 2009 at a 3-day repeat cycle, offering great opportunities for temporal monitoring of point sources. GOSAT's 10 km field of view can spatially integrate entire volcanic edifices within one 'shot' in precise target mode. While the instrument has no spatial scanning or mapping capability, it has strong spectral resolving power and agile pointing capability, allowing it to focus on several targets of interest per orbit. Sufficient uncertainty reduction is achieved through comprehensive in-flight vicarious calibration, in close collaboration between NASA and JAXA. Challenges with the on-board pointing mirror system have been compensated for by employing custom observation planning strategies, including repeat sacrificial upstream reference points to control pointing mirror motion, empirical individualized target offset compensation, and observation pattern simulations to minimize view angle azimuth.
Since summer 2010 we have conducted repeated target mode observations of now almost 40 persistently active global volcanoes and other point sources, including Etna (Italy), Mayon (Philippines), Hawaii (USA), Popocatepetl (Mexico), and Ambrym (Vanuatu), using GOSAT FTS SWIR data. In this presentation we will summarize results from over three years of measurements and progress toward understanding detectability with this method. In emerging collaboration with the Deep Carbon Observatory's DECADE program, the World Organization of Volcano Observatories (WOVO) global database of volcanic unrest (WOVOdat), and country specific observatories and agencies we see a growing potential for ground based validation synergies. Complementing the ongoing GOSAT mission, NASA is on schedule to launch its OCO-2 satellite in July 2014, which will provide higher spatial but lower temporal resolution. Further orbiting and geostationary satellite sensors are in planning at JAXA, NASA, and ESA.
Perspectives on individual to ensembles of ambient fine and ultrafine particles and their sources
NASA Astrophysics Data System (ADS)
Bein, Keith James
By combining Rapid Single-ultrafine-particle Mass Spectrometry (RSMS) measurements during the Pittsburgh Supersite experiment with a large array of concurrent PM, gas and meteorological data, a synthesis of data and analyses is employed to characterize sources, emission trends and dynamics of ambient fine and ultrafine particles. Combinatorial analyses elicit individual-to-ensemble descriptions of particles, their sources, their changes in state from atmospheric processing and the scales of motion driving their transport and dynamics. Major results include: (1) Particle size and composition are strong indicators of sources/source categories, and real-time measurements allow source attribution at the single particle and point source level. (2) Single particle source attribution compares well to factor analysis of chemically-speciated bulk phase data; both resulted in similar conclusions but independently revealed new sources. (3) RSMS data can quantitatively estimate composition-resolved, number-based particle size distributions. Comparison to mass-based data yielded new information about physical and chemical properties of particles and instrument sensitivity. (4) Source-specific signatures and real-time monitoring allow passing plumes to be tracked and characterized. (5) The largest of three identified coal combustion sources emits ~2.4 x 10^17 primary submicron particles per second. (6) Long-range transport has a significant impact on the eastern U.S., including specific influences of eight separate wildfire events. (7) Pollutant dynamics in the Pittsburgh summertime air shed, and the Northeastern U.S., are characterized by alternating periods of stagnation and cleansing. The eight wildfire events were detected in between seven successive stagnation events.
(8) Connections exist between boreal fire activity, southeast subsiding transport of the emissions, alternating periods of stagnation and cleansing at the receptor, and the structure and propagation of extratropical waves. (9) Wildfire emissions can severely impact preexisting pollutant concentrations and physical and chemical processes at the receptor. (10) High-severity crown fires in boreal Canada emit ~1.2 x 10^15 particles/kg biomass burned. (11) In 1998, wildfire activity in the circumpolar boreal forest emitted ~8 x 10^26 particles, representing ~14% of global wildland fire emissions. Results and conclusions address future scientific objectives in understanding effects of particles on human health and global climate change.
Strong RFI observed in protected 21 cm band at Zurich observatory, Switzerland
NASA Astrophysics Data System (ADS)
Monstein, C.
2014-03-01
While testing a new antenna control software tool, the telescope was moved to the most western azimuth position, pointing at our own building. While the telescope was decelerating, the spectrometer showed strong broadband radio frequency interference (RFI) and two single-frequency carriers around 1412 and 1425 MHz, both of which are in the internationally protected band. After lengthy analysis it was found that the AXIS2000 webcam was the source of both the broadband and the single-frequency interference. Switching off the webcam solved the problem immediately. For future observations of 21 cm radiation, therefore, all nearby electronics must be switched off: not only the webcam but also all unused PCs, printers, network equipment, monitors, etc.
Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke
2017-06-29
X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.
Three-parameter optical studies in Scottish coastal waters
NASA Astrophysics Data System (ADS)
McKee, David; Cunningham, Alex; Jones, Ken
1997-02-01
A new submersible optical instrument has been constructed which allows chlorophyll fluorescence, attenuation and wide-angle scattering measurements to be made simultaneously at the same point in a body of water. The instrument uses a single xenon flashlamp as the light source, and incorporates its own power supply and microprocessor-based data logging system. It has been cross-calibrated against commercial single-parameter instruments using a range of non-algal particles and phytoplankton cultures. The equipment has been deployed at sea in the Firth of Clyde and Loch Linnhe, where it has been used to study seasonal variability in optical water column structure. Results will be presented to illustrate how ambiguity in the interpretation of measurements of a single optical parameter can be alleviated by measuring several parameters simultaneously. Comparative studies of differences in winter and spring relationships between optical variables have also been carried out.
Murakami, Tatsuya C; Mano, Tomoyuki; Saikawa, Shu; Horiguchi, Shuhei A; Shigeta, Daichi; Baba, Kousuke; Sekiya, Hiroshi; Shimizu, Yoshihiro; Tanaka, Kenji F; Kiyonari, Hiroshi; Iino, Masamitsu; Mochizuki, Hideki; Tainaka, Kazuki; Ueda, Hiroki R
2018-04-01
A three-dimensional single-cell-resolution mammalian brain atlas will accelerate systems-level identification and analysis of cellular circuits underlying various brain functions. However, its construction requires efficient subcellular-resolution imaging throughout the entire brain. To address this challenge, we developed a fluorescent-protein-compatible, whole-organ clearing and homogeneous expansion protocol based on an aqueous chemical solution (CUBIC-X). The expanded, well-cleared brain enabled us to construct a point-based mouse brain atlas with single-cell annotation (CUBIC-Atlas). CUBIC-Atlas reflects inhomogeneous whole-brain development, revealing a significant decrease in the cerebral visual and somatosensory cortical areas during postnatal development. Probabilistic activity mapping of pharmacologically stimulated Arc-dVenus reporter mouse brains onto CUBIC-Atlas revealed the existence of distinct functional structures in the hippocampal dentate gyrus. CUBIC-Atlas is shareable by an open-source web-based viewer, providing a new platform for whole-brain cell profiling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Macrander, A. T.
Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread-function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option for FO coupling to a large-format camera. This configuration's resolution was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens-coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the x-ray image projected profiles, we observed a 10.5-micron PSF. This sample thus exhibited spatial resolution superior to standard P43 polycrystalline phosphors of the same thickness, which would have about a 100-micron PSF. Lastly, this single-crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.
Improving the Patron Experience: Sterling Memorial Library's Single Service Point
ERIC Educational Resources Information Center
Sider, Laura Galas
2016-01-01
This article describes the planning process and implementation of a single service point at Yale University's Sterling Memorial Library. While much recent scholarship on single service points (SSPs) has focused on the virtues or hazards of eliminating reference desks in libraries nationwide, this essay explores the ways in which single service…
Processing challenges in the XMM-Newton slew survey
NASA Astrophysics Data System (ADS)
Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego
2005-08-01
The great collecting area of the mirrors, coupled with the high quantum efficiency of the EPIC detectors, has made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full, 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images which are smeared, by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source searching criteria such that searching a single image using the full energy band is seen to be more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources and single-frame detector flashes are considered, and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this on the positional accuracy of the catalogue.
Analysis and modification of a single-mesh gear fatigue rig for use in diagnostic studies
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.; Townsend, Dennis P.; Oswald, Fred B.; Decker, Harry J.
1992-01-01
A single-mesh gear fatigue rig was analyzed and modified for use in gear mesh diagnostic research. The fatigue rig allowed unwanted vibration to mask the test-gear vibration signal, making it difficult to perform diagnostic studies. Several possible sources and factors contributing to the unwanted components of the vibration signal were investigated. Sensor mounting location was found to have a major effect on the content of the vibration signal: in the presence of unwanted vibration sources, modal amplification made the unwanted components strong. A sensor location was found that provided a flatter frequency response, resulting in a more useful vibration signal. A major rework was performed on the fatigue rig to reduce the influence of the most probable sources of the noise in the vibration signal. The slave gears were machined to reduce weight and increase tooth loading. The housing and the shafts were modified to reduce imbalance, looseness, and misalignment in the rotating components. These changes resulted in an improved vibration signal, with the test-gear mesh frequency now the dominant component in the signal. Also, with the unwanted sources eliminated, the sensor mounting location giving the most robust representation of the test-gear meshing energy was found to be at a point close to the test gears in the load zone of the bearings.
Toxic metals in Venice lagoon sediments: Model, observation, and possible removal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, A.; Molinaroli, E.
1994-11-01
We have modeled the distribution of nine toxic metals in the surface sediments from 163 stations in the Venice lagoon using published data. Three entrances from the Adriatic Sea control the circulation in the lagoon and divide it into three basins. We assume, for purposes of modeling, that Porto Marghera at the head of the Industrial Zone area is the single source of toxic metals in the Venice lagoon. In a standing body of lagoon water, the concentration C of a pollutant at distance x from the source may be given by C = C_0 e^(-kx), where C_0 is the concentration at the source and k is the rate constant of dispersal. We calculated k empirically using concentrations at the source and those farthest from it, that is, the end points of the lagoon. Average k values (ppm/km) in the lagoon are: Zn 0.165, Cd 0.116, Hg 0.110, Cu 0.105, Co 0.072, Pb 0.058, Ni 0.008, Cr (0.011) and Fe (0.018 percent/km), and they have complex distributions. Given the k values, the concentration at the source (C_0), and the distance x of any point in the lagoon from the source, we have calculated the model concentrations of the nine metals at each sampling station. Tides, currents, floor morphology, additional sources, and continued dumping perturb model distributions, causing anomalies (observed minus model concentrations). Positive anomalies are found near the source, where continued dumping perturbs initial boundary conditions, and in areas of sluggish circulation. Negative anomalies are found in areas with strong currents that may flush sediments out of the lagoon. We have thus identified areas in the lagoon where a higher rate of sediment removal and exchange may lessen pollution. 41 refs., 4 figs., 3 tabs.
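The distance-decay model above is simple enough to sketch. The following is a minimal Python version; all concentrations and distances are invented stand-ins for illustration, not the paper's data:

```python
import math

# C(x) = C0 * exp(-k * x): concentration at distance x (km) from the
# single source, where C0 is the concentration at the source.
def model_concentration(C0, k, x):
    return C0 * math.exp(-k * x)

# k is fixed empirically from the two end points of the profile: the
# source concentration C0 and the concentration C_far measured at the
# station farthest from the source, at distance x_far.
def fit_k(C0, C_far, x_far):
    return math.log(C0 / C_far) / x_far

C0, C_far, x_far = 200.0, 38.0, 10.0   # made-up Zn-like values
k = fit_k(C0, C_far, x_far)
# anomaly = observed minus model concentration at a station 5 km out
anomaly = 95.0 - model_concentration(C0, k, 5.0)
```

By construction the fitted curve passes through both end points; positive `anomaly` values flag stations where continued dumping or sluggish circulation keeps concentrations above the dispersal model, matching the paper's definition of anomalies.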
X-Ray Diffraction Wafer Mapping Method for Rhombohedral Super-Hetero-Epitaxy
NASA Technical Reports Server (NTRS)
Park, Yoonjoon; Choi, Sang Hyouk; King, Glen C.; Elliott, James R.; Dimarcantonio, Albert L.
2010-01-01
A new X-ray diffraction (XRD) method is provided to acquire XY mapping of the distribution of single crystals, poly-crystals, and twin defects across an entire wafer of rhombohedral super-hetero-epitaxial semiconductor material. In one embodiment, the method is performed with a point or line X-ray source with an X-ray incidence angle approximating a normal angle close to 90 deg, and in which the beam mask is preferably replaced with a crossed slit. While the wafer moves in the X and Y direction, a narrowly defined X-ray source illuminates the sample and the diffracted X-ray beam is monitored by the detector at a predefined angle. Preferably, the untilted, asymmetric scans are of {440} peaks, for twin defect characterization.
Acoustic Source Modeling for High Speed Air Jets
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.; Khavaran, Abbas
2005-01-01
The far field acoustic spectra at 90 deg to the downstream axis of some typical high speed jets are calculated from two different forms of Lilley's equation combined with some recent measurements of the relevant turbulent source function. These measurements, which were limited to a single point in a low Mach number flow, were extended to other conditions with the aid of a highly developed RANS calculation. The results are compared with experimental data over a range of Mach numbers. Both forms of the analogy lead to predictions that are in excellent agreement with the experimental data at subsonic Mach numbers. The agreement is also fairly good at supersonic speeds, but the data appear to be slightly contaminated by shock-associated noise in this case.
Near-field interferometry of a free-falling nanoparticle from a point-like source
NASA Astrophysics Data System (ADS)
Bateman, James; Nimmrichter, Stefan; Hornberger, Klaus; Ulbricht, Hendrik
2014-09-01
Matter-wave interferometry performed with massive objects elucidates their wave nature and thus tests the quantum superposition principle at large scales. Whereas standard quantum theory places no limit on particle size, alternative, yet untested theories—conceived to explain the apparent quantum to classical transition—forbid macroscopic superpositions. Here we propose an interferometer with a levitated, optically cooled and then free-falling silicon nanoparticle in the mass range of one million atomic mass units, delocalized over >150 nm. The scheme employs the near-field Talbot effect with a single standing-wave laser pulse as a phase grating. Our analysis, which accounts for all relevant sources of decoherence, indicates that this is a viable route towards macroscopic high-mass superpositions using available technology.
FTMP (Fault Tolerant Multiprocessor) programmer's manual
NASA Technical Reports Server (NTRS)
Feather, F. E.; Liceaga, C. A.; Padilla, P. A.
1986-01-01
The Fault Tolerant Multiprocessor (FTMP) computer system was constructed using the Rockwell/Collins CAPS-6 processor. It is installed in the Avionics Integration Research Laboratory (AIRLAB) of NASA Langley Research Center. It is hosted by AIRLAB's System 10, a VAX 11/750, for the loading of programs and experimentation. The FTMP support software includes a cross compiler for a high level language called the Automated Engineering Design (AED) System, an assembler for the CAPS-6 processor assembly language, and a linker. Access to this support software is through an automated remote access facility on the VAX which relieves the user of the burden of learning how to use the IBM 4381. This manual is a compilation of information about the FTMP support environment. It explains the FTMP software and support environment along with many of the finer points of running programs on FTMP. This will be helpful to the researcher trying to run an experiment on FTMP and even to the person probing FTMP with fault injections. Much of the information in this manual can be found in other sources; we are only attempting to bring together the basic points in a single source. If the reader should need points clarified, there is a list of support documentation in the back of this manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Masakazu A. R.; Taniguchi, Yoshiaki; Kajisawa, Masaru
2016-03-01
We investigate morphological properties of 61 Lyα emitters (LAEs) at z = 4.86 identified in the COSMOS field, based on Hubble Space Telescope Advanced Camera for Surveys (ACS) imaging data in the F814W band. Out of the 61 LAEs, we find the ACS counterparts for 54 LAEs. Eight LAEs show double-component structures with a mean projected separation of 0.″63 (∼4.0 kpc at z = 4.86). Considering the faintness of these ACS sources, we carefully evaluate their morphological properties, that is, size and ellipticity. While some of them are compact and indistinguishable from the point-spread function (PSF) half-light radius of 0.″07 (∼0.45 kpc), the others are clearly larger than the PSF size and spatially extended up to 0.″3 (∼1.9 kpc). We find that the ACS sources show a positive correlation between ellipticity and size and that ACS sources with large size and round shape are absent. Our Monte Carlo simulation suggests that the correlation can be explained by (1) the deformation effects via PSF broadening and shot noise or (2) source blending, in which two or more sources with small separation are blended in our ACS image and detected as a single elongated source. Therefore, the 46 single-component LAEs could contain sources that consist of double (or multiple) components with small spatial separation (i.e., ≲0.″3 or 1.9 kpc). Further observation with high angular resolution at longer wavelengths (e.g., rest-frame wavelengths of ≳4000 Å) is needed to decipher which interpretation is adequate for our LAE sample.
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.
NASA Astrophysics Data System (ADS)
Ohminato, T.; Kobayashi, T.; Ida, Y.; Fujita, E.
2006-12-01
During the 2000 Miyake-jima volcanic activity, which started on 26 June 2000, an intense earthquake swarm occurred initially beneath the southwest flank near the summit and gradually migrated west of the island. Volcanic earthquake activity on the island was then reactivated beneath the summit, leading to a summit eruption with significant summit subsidence on 8 July. We detected numerous small long-period (LP) seismic signals during these activities. Most of them include both 0.2 and 0.4 Hz components, suggesting the existence of a harmonic oscillator. Some of them have a dominant frequency peak at 0.2 Hz (LP1), while others have one at 0.4 Hz (LP2). At the beginning of each waveform of both LP1 and LP2, an impulsive signal with a pulse width of about 2 s is clearly identified. The major axis of the particle motion for the initial impulsive signal is almost horizontal, suggesting a shallow source beneath the summit, while the inclined particle motion for the latter phase suggests a deeper source beneath the island. For both LP1 and LP2, we can identify a clear positive correlation between the amplitude of the initial pulse and that of the latter phase. We conducted waveform inversions for the LP events assuming a point source and determined the locations and mechanisms simultaneously. We assumed three types of source mechanisms: three single forces; six moment tensor components; and a combination of moment tensor and single forces. We used the Akaike information criterion (AIC) to decide the optimal solutions. First, we applied the method to the entire waveform, including both the initial pulse and the latter phase. The source type with a combination of moment tensor and single force components yields the minimum values of the AIC for both LP events. However, the spatial distribution of the residual errors tends to have two local minima. Considering the error distribution and the characteristic particle motions, it is likely that the source of the LP event consists of two different parts.
We thus divided the LP events into two parts, the initial and the latter phases, and applied the same waveform inversion procedure separately to each part of the waveform. The inversion results show that the initial impulsive phase and the latter oscillatory phase are well explained by a nearly horizontal single force and a moment solution, respectively. The single force solutions of the initial pulse are positioned at a depth of about 2 km beneath the summit. The single force is initially oriented to the north, and then to the south. On the other hand, the sources of the moment solutions are significantly deeper than the single force solutions. The hypocenter of the latter phase of LP1 is located at a depth of 5.5 km in the southern region of the island, while that for the LP2 event is at 5.1 km beneath the summit. The horizontal oscillations are relatively dominant for both the LP1 and LP2 events. Although the two sources are separated from each other by several kilometers, the positive correlation between the amplitudes of the initial pulse and the latter phase strongly suggests that the shallow sources trigger the deeper sources. The source time histories of the six moment tensor components of the latter portion of LP1 and LP2 are not in phase. This makes it difficult to extract information on source geometry using the amplitude ratio among moment tensor components in the traditional manner. It may suggest that the source is composed of two independent sources whose oscillations are out of phase.
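The AIC-based choice between source mechanisms (as in this abstract, and in the W-phase single- versus double-source selection above) can be illustrated with the common least-squares form of the AIC. This is only a sketch of the selection rule, not the authors' code; the sample size, residuals, and parameter counts below are invented:

```python
import math

def aic_least_squares(rss, n, k):
    """AIC for a Gaussian least-squares fit: rss is the residual sum
    of squares, n the number of data samples, k the number of free
    model parameters. Lower AIC = better trade-off of fit vs size."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical misfits from two waveform inversions of one LP event:
n = 3000                  # waveform samples (made up)
rss_single = 4.2          # moment tensor only: 6 free parameters
rss_combo = 3.9           # moment tensor + single forces: 9 parameters
aic1 = aic_least_squares(rss_single, n, 6)
aic2 = aic_least_squares(rss_combo, n, 9)
best = "combo" if aic2 < aic1 else "single"
```

The extra parameters are only accepted when the misfit reduction outweighs the 2k penalty; with these numbers the combined moment-tensor-plus-single-force model wins, mirroring the abstract's result.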
Low cost sensing of vegetation volume and structure with a Microsoft Kinect sensor
NASA Astrophysics Data System (ADS)
Azzari, G.; Goulden, M.
2011-12-01
The market for videogames and digital entertainment has decreased the cost of advanced technology to affordable levels. The Microsoft Kinect sensor for Xbox 360 is an infrared time of flight camera designed to track body position and movement at a single-articulation level. Using open source drivers and libraries, we acquired point clouds of vegetation directly from the Kinect sensor. The data were filtered for outliers, co-registered, and cropped to isolate the plant of interest from the surroundings and soil. The volume of single plants was then estimated with several techniques, including fitting with solid shapes (cylinders, spheres, boxes), voxel counts, and 3D convex/concave hulls. Preliminary results are presented here. The volume of a series of wild artichoke plants was measured from nadir using a Kinect on a 3m-tall tower. The calculated volumes were compared with harvested biomass; comparisons and derived allometric relations will be presented, along with examples of the acquired point clouds. This Kinect sensor shows promise for ground-based, automated, biomass measurement systems, and possibly for comparison/validation of remotely sensed LIDAR.
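The voxel-count volume estimate mentioned above is easy to sketch with NumPy. The point cloud here is synthetic (a uniformly filled 0.5 m radius sphere standing in for a filtered, cropped Kinect cloud), and the 5 cm grid size is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for a Kinect point cloud of a single plant:
# 20,000 points uniformly filling a sphere of radius 0.5 m.
n = 20000
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= 0.5 * rng.uniform(0.0, 1.0, (n, 1)) ** (1.0 / 3.0)

# Voxel-count estimate: volume = occupied cells x cell volume.
voxel = 0.05                                    # 5 cm grid
occupied = np.unique(np.floor(pts / voxel).astype(int), axis=0)
volume = len(occupied) * voxel ** 3
# True sphere volume is 4/3*pi*0.5^3 ~ 0.524 m^3; the voxel estimate
# lands close, biased slightly high by partially filled boundary cells.
```

Voxel counting handles concave canopies, unlike a convex hull (`scipy.spatial.ConvexHull(pts).volume`), but the estimate depends on grid size: too fine and sparse sampling leaves holes, too coarse and boundary cells inflate the volume.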
A double-observer approach for estimating detection probability and abundance from point counts
Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.
2000-01-01
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
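The detection-probability logic can be sketched with a toy simulation. One caveat: the paper's protocol is a dependent double-observer design (primary/secondary roles with alternation); the sketch below uses the simpler independent-observer, capture-recapture estimator to illustrate the idea, and all parameters are invented:

```python
import random

def simulate_double_observer(N=500, p1=0.7, p2=0.6, seed=1):
    """Toy simulation: N birds, two independent observers with
    detection probabilities p1 and p2 (hypothetical values)."""
    rng = random.Random(seed)
    seen1 = [rng.random() < p1 for _ in range(N)]
    seen2 = [rng.random() < p2 for _ in range(N)]
    x11 = sum(a and b for a, b in zip(seen1, seen2))      # both saw it
    x10 = sum(a and not b for a, b in zip(seen1, seen2))  # obs 1 only
    x01 = sum(b and not a for a, b in zip(seen1, seen2))  # obs 2 only
    # Capture-recapture estimates of the per-observer probabilities
    p1_hat = x11 / (x11 + x01)
    p2_hat = x11 / (x11 + x10)
    p_any = 1 - (1 - p1_hat) * (1 - p2_hat)   # seen by at least one
    n_hat = (x11 + x10 + x01) / p_any          # abundance estimate
    return p1_hat, p2_hat, n_hat

p1_hat, p2_hat, n_hat = simulate_double_observer()
```

With p1 = 0.7 and p2 = 0.6, the overall probability of detection by at least one observer is 1 - 0.3 x 0.4 = 0.88, so a single observer would miss a substantial fraction of birds; this is the mechanism behind the abstract's >0.95 overall detection when individual probabilities are higher.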
Priority arbitration mechanism
Garmire, Derrick L [Kingston, NY; Herring, Jay R [Poughkeepsie, NY; Stunkel, Craig B [Bethel, CT
2007-03-06
A method is provided for selecting a data source for transmission on one of several logical (virtual) lanes embodied in a single physical connection. Lanes are assigned to either a high priority class or to a low priority class. One of six conditions is employed to determine when re-arbitration of lane priorities is desired. When this occurs, a next source for transmission is selected based on the specification of the maximum number of high priority packets that can be sent after a lower priority transmission has been interrupted. Alternatively, a next source for transmission is selected based on the specification of the maximum number of high priority packets that can be sent while a lower priority packet is waiting. If initialized correctly, the arbiter keeps all of the packets of a high priority transmission contiguous, while allowing lower priority packets to be interrupted by the higher priority packets, but not to the point of starvation of the lower priority packets.
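The anti-starvation rule (at most a fixed number of high-priority packets while a low-priority packet waits) can be sketched as follows. This is an illustrative reconstruction from the abstract, not the patented implementation; the class, method, and `max_hp` parameter names are hypothetical:

```python
from collections import deque

class LaneArbiter:
    """Sketch: high-priority packets normally win, but at most max_hp
    of them may be sent while a low-priority packet is waiting, so the
    low-priority class is never starved."""
    def __init__(self, max_hp=4):
        self.max_hp = max_hp
        self.hp = deque()              # queued high-priority packets
        self.lp = deque()              # queued low-priority packets
        self.hp_while_lp_waits = 0

    def submit(self, packet, high_priority):
        (self.hp if high_priority else self.lp).append(packet)

    def next_packet(self):
        must_yield = self.lp and self.hp_while_lp_waits >= self.max_hp
        if self.hp and not must_yield:
            if self.lp:                # count only while low waits
                self.hp_while_lp_waits += 1
            return self.hp.popleft()
        if self.lp:
            self.hp_while_lp_waits = 0
            return self.lp.popleft()
        return None

arb = LaneArbiter(max_hp=2)
arb.submit("LP0", high_priority=False)
for p in ["HP0", "HP1", "HP2", "HP3"]:
    arb.submit(p, high_priority=True)
order = [arb.next_packet() for _ in range(5)]
# order == ["HP0", "HP1", "LP0", "HP2", "HP3"]
```

After two high-priority packets are sent past the waiting low-priority packet, the arbiter yields once to the low-priority class and then resumes, bounding low-priority latency.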
The suite of small-angle neutron scattering instruments at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heller, William T.; Cuneo, Matthew J.; Debeer-Schmitt, Lisa M.; ...
2018-02-21
Oak Ridge National Laboratory is home to the High Flux Isotope Reactor (HFIR), a high-flux research reactor, and the Spallation Neutron Source (SNS), the world's most intense source of pulsed neutron beams. The unique co-localization of these two sources provided an opportunity to develop a suite of complementary small-angle neutron scattering instruments for studies of large-scale structures: the GP-SANS and Bio-SANS instruments at the HFIR and the EQ-SANS and TOF-USANS instruments at the SNS. This article provides an overview of the capabilities of the suite of instruments, with specific emphasis on how they complement each other. A description of the plans for future developments, including greater integration of the suite into a single point of entry for neutron scattering studies of large-scale structures, is also provided.
Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach
NASA Astrophysics Data System (ADS)
Yan, Wen; Shelley, Michael
2018-02-01
An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying the periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O(N) complexity as the FMM and is kept small by reusing the data computed by KIFMM for the near field. The far-field calculation requires no additional control parameters and obeys the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point-source and non-point-source pollution loads of the watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point-source pollution loads discharge stably, while the monthly non-point-source pollution loads change greatly; the non-point-source share of the total COD pollution load decreases in the normal, rainy and wet periods in turn.
Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so the watershed above it is taken as the research objective in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point-source and non-point-source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point-source pollution loads discharge stably, while the monthly non-point-source pollution loads change greatly. The non-point-source share of the total NH3-N pollution load decreases in the normal, rainy and wet periods in turn.
An ultrabright and monochromatic electron point source made of a LaB6 nanowire
NASA Astrophysics Data System (ADS)
Zhang, Han; Tang, Jie; Yuan, Jinshi; Yamauchi, Yasushi; Suzuki, Taku T.; Shinya, Norio; Nakajima, Kiyomi; Qin, Lu-Chang
2016-03-01
Electron sources in the form of one-dimensional nanotubes and nanowires are an essential tool for investigations in a variety of fields, such as X-ray computed tomography, flexible displays, chemical sensors and electron optics applications. However, field emission instability and the need to work under high-vacuum or high-temperature conditions have imposed stringent requirements that are currently limiting the range of application of electron sources. Here we report the fabrication of a LaB6 nanowire with only a few La atoms bonded on the tip that emits collimated electrons from a single point with high monochromaticity. The nanostructured tip has a low work function of 2.07 eV (lower than that of Cs) while remaining chemically inert, two properties usually regarded as mutually exclusive. Installed in a scanning electron microscope (SEM) field emission gun, our tip shows a current density gain that is about 1,000 times greater than that achievable with W(310) tips, and no emission decay for tens of hours of operation. Using this new SEM, we acquired very low-noise, high-resolution images together with rapid chemical compositional mapping using a tip operated at room temperature and at 10-times higher residual gas pressure than that required for W tips.
Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations
NASA Astrophysics Data System (ADS)
Hopper, Seth
2018-03-01
Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.
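The master wave equation referred to above takes, for each spherical-harmonic mode, a standard form on the Schwarzschild background. This is the generic statement of the Regge-Wheeler/Zerilli equation, not Hopper's higher-order variables:

```latex
\left[-\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial r_*^2}
      - V_\ell(r)\right]\Psi_{\ell m}(t,r) = S_{\ell m}(t,r),
```

where $r_*$ is the tortoise coordinate, $V_\ell(r)$ is the Regge-Wheeler (odd-parity) or Zerilli (even-parity) potential, and the source $S_{\ell m}$ is built from the point particle's stress-energy tensor. The higher-order master functions discussed in the abstract satisfy the same equation with source terms that fall off faster at large $r_*$.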
Triangulating the source of tunneling resonances in a point contact with nanometer scale sensitivity
NASA Astrophysics Data System (ADS)
Bishop, N. C.; Boras Pinilla, C.; Stalford, H. L.; Young, R. W.; Ten Eyck, G. A.; Wendt, J. R.; Eng, K.; Lilly, M. P.; Carroll, M. S.
2011-03-01
We observe resonant tunneling in split gate point contacts defined in a double gate enhancement mode Si-MOS device structure. We determine the capacitances from the resonant feature to each of the conducting gates and the source/drain two dimensional electron gas regions. In our device, these capacitances provide information about the resonance location in three dimensions. Semi-classical electrostatic simulations of capacitance, already used to map quantum dot size and position [Stalford et al., IEEE Nanotechnology], identify a combination of location and confinement potential size that satisfy our experimental observations. The sensitivity of simulation to position and size allow us to triangulate possible locations of the resonant level with nanometer resolution. We discuss our results and how they may apply to resonant tunneling through a single donor. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Quantum key distribution in a multi-user network at gigahertz clock rates
NASA Astrophysics Data System (ADS)
Fernandez, Veronica; Gordon, Karen J.; Collins, Robert J.; Townsend, Paul D.; Cova, Sergio D.; Rech, Ivan; Buller, Gerald S.
2005-07-01
In recent years quantum information research has led to the discovery of a number of remarkable new paradigms for information processing and communication. These developments include quantum cryptography schemes that offer unconditionally secure information transport guaranteed by quantum-mechanical laws. Such potentially disruptive security technologies could be of high strategic and economic value in the future. Two major issues confronting researchers in this field are the transmission range (typically <100 km) and the key exchange rate, which can be as low as a few bits per second at long optical fiber distances. This paper describes further research into an approach to significantly enhance the key exchange rate in an optical fiber system at distances in the range of 1-20 km. We will present results on a number of application scenarios, including point-to-point links and multi-user networks. Quantum key distribution systems have been developed, which use standard telecommunications optical fiber, and which are capable of operating at clock rates of up to 2 GHz. They implement a polarization-encoded version of the B92 protocol and employ vertical-cavity surface-emitting lasers with emission wavelengths of 850 nm as weak coherent light sources, as well as silicon single-photon avalanche diodes as the single photon detectors. The point-to-point quantum key distribution system exhibited a quantum bit error rate of 1.4% and an estimated net bit rate greater than 100,000 bit s-1 for a 4.2 km transmission range.
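The B92 sifting step mentioned above is easy to simulate. The sketch below is an idealized Monte-Carlo (lossless channel, perfect detectors, no eavesdropper), not the polarization-encoded 850 nm implementation described in the paper:

```python
import random

def b92_sift(n, seed=0):
    """Toy Monte-Carlo of B92 sifting on an ideal channel.

    Alice encodes bit 0 as |0> and bit 1 as |+>. Bob measures each
    photon in a randomly chosen basis (Z or X); only outcomes that are
    impossible for one of the two states are kept (conclusive events).
    """
    rng = random.Random(seed)
    alice, bob = [], []
    for _ in range(n):
        bit = rng.randint(0, 1)
        basis = rng.choice("ZX")
        if basis == "X" and bit == 0 and rng.random() < 0.5:
            # |0> gave outcome |->, impossible for |+>: Alice sent 0
            alice.append(bit); bob.append(0)
        elif basis == "Z" and bit == 1 and rng.random() < 0.5:
            # |+> gave outcome |1>, impossible for |0>: Alice sent 1
            alice.append(bit); bob.append(1)
    return alice, bob
```

For an ideal channel only about one quarter of the sent pulses yields a conclusive, error-free sifted bit, which is one reason raw clock rates in the GHz range matter for the final key rate.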
Single photon source with individualized single photon certifications
NASA Astrophysics Data System (ADS)
Migdall, Alan L.; Branning, David A.; Castelletto, Stefania; Ware, M.
2002-12-01
As currently implemented, single-photon sources cannot be made to produce single photons with high probability, while simultaneously suppressing the probability of yielding two or more photons. Because of this, single photon sources cannot really produce single photons on demand. We describe a multiplexed system that allows the probabilities of producing one and more photons to be adjusted independently, enabling a much better approximation of a source of single photons on demand. The scheme uses a heralded photon source based on parametric downconversion, but by effectively breaking the trigger detector area into multiple regions, we are able to extract more information about a heralded photon than is possible with a conventional arrangement. This scheme allows photons to be produced along with a quantitative 'certification' that they are single photons. Some of the single-photon certifications can be significantly better than what is possible with conventional downconversion sources, as well as being better than faint laser sources. With such a source of more tightly certified single photons, it should be possible to improve the maximum secure bit rate possible over a quantum cryptographic link. We present an analysis of the relative merits of this method over the conventional arrangement.
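The benefit of multiplexing heralded downconversion sources can be seen with simple counting statistics. The sketch below assumes Poissonian pair statistics and ideal heralding and switching, an idealization of the scheme in the abstract rather than the authors' certified-detector-region arrangement:

```python
import math

def poisson(k, mu):
    """Poisson probability of k pairs for mean pair number mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def single_stage(mu):
    """Heralding probability and multi-pair fraction for one
    downconversion stage (Poisson approximation; real sources are
    thermal per mode)."""
    p_herald = 1.0 - poisson(0, mu)
    p_multi = 1.0 - poisson(0, mu) - poisson(1, mu)
    return p_herald, p_multi / p_herald

def multiplexed(mu, m):
    """m parallel stages; the first stage that heralds is switched to
    the output. The heralding probability improves while the
    multi-pair fraction of the delivered pulse is unchanged."""
    p1, frac = single_stage(mu)
    return 1.0 - (1.0 - p1) ** m, frac
```

For mu = 0.1 and m = 8 stages the heralding probability rises from about 9.5% to about 55% while the multi-pair fraction of the delivered pulse stays at the single low-gain level, which is the independent adjustment of the one- and many-photon probabilities described above.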
NASA Technical Reports Server (NTRS)
Banger, Kulbinder K. (Inventor); Hepp, Aloysius F. (Inventor); Harris, Jerry D. (Inventor); Jin, Michael Hyun-Chul (Inventor); Castro, Stephanie L. (Inventor)
2006-01-01
A single-source precursor for depositing ternary I-III-VI₂ chalcopyrite materials useful as semiconductors. The single-source precursor has the I-III-VI₂ stoichiometry built into a single precursor molecular structure, which degrades on heating or pyrolysis to yield the desired I-III-VI₂ ternary chalcopyrite. The single-source precursors effectively degrade to yield the ternary chalcopyrite at low temperature, e.g. below 500 °C, and are useful for depositing thin-film ternary chalcopyrite layers via a spray CVD technique. The ternary single-source precursors according to the invention can be used to provide nanocrystallite structures useful as quantum dots. A method of making the ternary single-source precursors is also provided.
Toward a New Paradigm for the Unification of Radio Loud AGN and its Connection to Accretion
NASA Technical Reports Server (NTRS)
Georganopoulos, Markos; Meyer, Eileen T.; Fossati, Giovanni; Lister, Matthew L.
2012-01-01
We recently argued [21] that the collective properties of radio-loud active galactic nuclei point to the existence of two families of sources: one of powerful sources with single-velocity jets, and one of weaker jets with significant velocity gradients in the radiating plasma. These families also correspond to different accretion modes and therefore different thermal and emission-line intrinsic properties: powerful sources have radiatively efficient accretion disks, while in weak sources accretion must be radiatively inefficient. Here, after briefly reviewing our recent work, we present the following findings that support our unification scheme: (i) along the broken sequence of aligned objects, the jet kinetic power increases; (ii) in the powerful branch of the sequence of aligned objects, the fraction of BLLs decreases with increasing jet power; (iii) for powerful sources, the fraction of BLLs increases for more unaligned objects, as measured by the core-to-extended radio emission. Our results are also compatible with the possibility that a given accretion power produces jets of comparable kinetic power.
Energy storage requirements of dc microgrids with high penetration renewables under droop control
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...
2015-01-09
Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found droop settings of the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
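Because the simulated concentration is linear in the emission rate, the statistical inversion reduces, in its simplest form, to linear least squares. The sketch below uses a textbook Gaussian plume with assumed dispersion coefficients, a single source, and no error covariances; it is a much reduced version of the tracer-calibrated inversion system described above:

```python
import math

def plume(q, x, y, z, u=3.0, h=2.0):
    """Gaussian plume concentration for emission rate q at downwind /
    crosswind / height coordinates (x, y, z) in metres; u is wind
    speed, h the release height. The sigma power laws are assumed for
    illustration, not the coefficients used in the study."""
    sy, sz = 0.08 * x**0.9, 0.06 * x**0.9
    return (q / (2 * math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * (math.exp(-(z - h)**2 / (2 * sz**2))
               + math.exp(-(z + h)**2 / (2 * sz**2))))

def invert_rate(samples):
    """Least-squares emission rate from (x, y, z, measured C) samples.

    C is linear in q, so the best fit is q* = sum(a_i c_i) / sum(a_i^2)
    with a_i the unit-rate plume at each sample point."""
    a = [plume(1.0, x, y, z) for x, y, z, _ in samples]
    c = [ci for _, _, _, ci in samples]
    return sum(ai * ci for ai, ci in zip(a, c)) / sum(ai * ai for ai in a)
```

With noiseless synthetic data the rate is recovered exactly; in practice the residuals would be weighted by the transport-model uncertainty statistics calibrated with the tracer release.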
Device for modular input high-speed multi-channel digitizing of electrical data
VanDeusen, A.L.; Crist, C.E.
1995-09-26
A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.
Transient Point Infiltration In The Unsaturated Zone
NASA Astrophysics Data System (ADS)
Buecker-Gittel, M.; Mohrlok, U.
The risk assessment of leaking sewer pipes is becoming more and more important for urban groundwater management and for environmental and health safety. It requires the quantification and balancing of transport and transformation processes based on the water flow in the unsaturated zone. The water flow from a single sewer leakage can be described as a point infiltration with hydraulic conditions that vary in time both externally and internally. External variations are caused by the discharge in the sewer pipe as well as the state of the leakage itself; internal variations result from microbiological clogging effects associated with the transformation processes. Technical-scale as well as small-scale laboratory experiments were conducted in order to investigate the water transport from a transient point infiltration. The technical-scale experiment gave evidence that the water flow takes place under transient conditions when sewage infiltrates into an unsaturated soil, whereas the small-scale experiments investigated in detail the hydraulics of the water transport and the associated solute and particle transport in unsaturated soils. The small-scale experiment was a two-dimensional representation of such a point infiltration source, in which the distributed water transport could be measured by several tensiometers in the soil as well as by a selective measurement of the discharge at the bottom of the experimental setup. Several series of experiments were conducted, varying the boundary and initial conditions, in order to derive the important parameters controlling the infiltration of pure water from the point source. The results showed a significant difference between the infiltration rate at the point source and the discharge rate at the bottom, which could be explained by storage processes due to an outflow resistance at the bottom.
This effect is overlain by a water content that decreases over time, correlated with a decreasing infiltration rate. As expected, the initial conditions mainly affect the time scale of the water transport. Additionally, an influence of preferential flow paths on the discharge distribution could be identified, due to heterogeneities caused by the filling and compaction of the sandy soil.
NASA Astrophysics Data System (ADS)
Fioletov, Vitali; McLinden, Chris A.; Kharol, Shailesh K.; Krotkov, Nickolay A.; Li, Can; Joiner, Joanna; Moran, Michael D.; Vet, Robert; Visschedijk, Antoon J. H.; Denier van der Gon, Hugo A. C.
2017-10-01
Reported sulfur dioxide (SO2) emissions from US and Canadian sources have declined dramatically since the 1990s as a result of emission control measures. Observations from the Ozone Monitoring Instrument (OMI) on NASA's Aura satellite and ground-based in situ measurements are examined to verify whether the observed changes from SO2 abundance measurements are quantitatively consistent with the reported changes in emissions. To make this connection, a new method to link SO2 emissions and satellite SO2 measurements was developed. The method is based on fitting satellite SO2 vertical column densities (VCDs) to a set of functions of OMI pixel coordinates and wind speeds, where each function represents a statistical model of a plume from a single point source. The concept is first demonstrated using sources in North America and then applied to Europe. The correlation coefficient between OMI-measured VCDs (with a local bias removed) and SO2 VCDs derived here using reported emissions for 1° by 1° gridded data is 0.91 and the best-fit line has a slope near unity, confirming a very good agreement between observed SO2 VCDs and reported emissions. Having demonstrated their consistency, seasonal and annual mean SO2 VCD distributions are calculated, based on reported point-source emissions for the period 1980-2015, as would have been seen by OMI. This consistency is further substantiated as the emission-derived VCDs also show a high correlation with annual mean SO2 surface concentrations at 50 regional monitoring stations.
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem: the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
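The ICP step of the fusion pipeline is standard and can be sketched compactly: alternate nearest-neighbour pairing with the closed-form (Kabsch/SVD) rigid transform. This is a generic minimal ICP, not the authors' implementation:

```python
import numpy as np

def best_rigid(P, Q):
    """Kabsch: rotation R and translation t minimizing ||R P_i + t - Q_i||."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=20):
    """Minimal iterated-closest-point sketch: pair every source point
    with its nearest destination point, apply the best rigid
    transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small point sets)
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1),
                        axis=1)
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

Like any ICP, this converges to the correct alignment only when the initial misalignment is small enough that most nearest-neighbour pairings are correct, which is what the NLSR surface reconstruction step provides in the paper's pipeline.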
Options and limitations for bromate control during ozonation of wastewater.
Soltermann, Fabian; Abegglen, Christian; Tschui, Manfred; Stahel, Sandro; von Gunten, Urs
2017-06-01
Wastewater treatment plants (WWTPs) are important point sources for micropollutants, which are harmful to freshwater organisms. Ozonation of wastewater is a powerful option to abate micropollutants, but may result in the formation of the potentially toxic oxidation by-product bromate in bromide-containing wastewaters. This study investigates options to reduce bromate formation during wastewater ozonation by (i) reducing the bromide concentration of the wastewater, (ii) lowering the ozone dose during wastewater treatment and (iii) adding hydrogen peroxide to limit the lifetime of ozone and quench the intermediates of the bromate formation pathway. Two examples demonstrate that a high share of the bromide in wastewater can originate from single point sources (e.g., municipal waste incinerators or landfills). Identifying major point sources requires laborious sampling campaigns, but may enable a significant reduction of the bromide load. Reducing bromate formation by lowering the ozone dose interferes with the aim of abating micropollutants, so an additional treatment is necessary to ensure their elimination. Experiments at a pilot plant illustrate that a combined treatment (ozone/powdered activated carbon) allows micropollutants to be eliminated with low bromate yields. Furthermore, the addition of hydrogen peroxide was investigated at bench scale: the bromate yields could be reduced by ~50% and 65% for hydrogen peroxide doses of 5 and 10 mg L-1, respectively. In conclusion, there are options to reduce bromate formation during wastewater ozonation; however, they are not simple and their efficiency is sometimes limited.
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for each test piece before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
1980-09-01
…where Φ_BD represents the instantaneous effect of the body, while Φ_FS represents the free-surface disturbance generated by the body over all previous… acceleration boundary condition. This determines the time-derivative of the body-induced component of the flow, Φ_BD (as well as Φ_BD through integration)… panel with uniform density σ_i acting over a surface of area A_i is replaced by a single point source with strength s_i(t) = A_i(σ_i(t_n) + (t − t_n)…
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.
2012-12-01
The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment (SPE-N) at the Nevada National Security Site near Las Vegas, Nevada is to improve upon and develop new physics-based models for underground nuclear explosions using scaled, underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral [Whitaker, 2007 and Arrowsmith et al., 2011] to model infrasound generation in the far field. Infrasound generated by single-point explosive sources above ground can typically be treated as monopole point sources. While the source is relatively simple, the research needed to model above-ground point sources is complicated by path effects related to the propagation of the acoustic signal and is outside the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (< 5 km), thus greatly reducing the complication of path effects. In this case, elastic energy from the explosions radiates upward and spreads out, depending on depth, to a more distributed region at the surface. Due to this broad surface perturbation of the atmosphere we cannot model the source as a simple monopole point source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within a 100 m radius about ground zero.
With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh integral method [Whitaker, 2007; Arrowsmith et al., 2011] to generate a synthetic infrasound pulse for comparison with the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse depending on whether it is observed directly above the source (on-axis) or perpendicular to it (off-axis). Comparing the observed and synthetic data, we note that the overall structure of the pulse agrees well and that the differences can be attributed to several possible factors, including the sensors used, topography, and meteorological conditions. One further potential source of error is that we use a flat, symmetric source region for the "piston," whereas the real source region is neither flat nor perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
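The baffled-piston picture lends itself to a direct numerical sketch of the Rayleigh integral: each surface element contributes a delayed, 1/r-weighted copy of its acceleration history, summed at the observer. The geometry and pulse below are illustrative assumptions, not the SPE-N shot parameters.

```python
import numpy as np

# Rayleigh-integral sketch for a baffled circular "piston".
# All parameters (piston radius, pulse shape, observer position) are
# illustrative assumptions, not the SPE-N configuration.
rho0, c = 1.2, 340.0                 # air density (kg/m^3), sound speed (m/s)
R = 100.0                            # piston radius, m
obs = np.array([0.0, 0.0, 1000.0])   # on-axis observer, m

def accel(t):
    """Vertical surface acceleration: a Gaussian pulse peaking at 0.5 s."""
    return np.exp(-((t - 0.5) / 0.1) ** 2)

# Discretize the piston into small surface elements
x = np.linspace(-R, R, 101)
xx, yy = np.meshgrid(x, x)
mask = xx**2 + yy**2 <= R**2
dS = (x[1] - x[0]) ** 2

# Sum delayed, 1/r-weighted contributions: p = rho0/(2*pi) * sum a(t - r/c)/r dS
t = np.linspace(0.0, 6.0, 1200)
p = np.zeros_like(t)
for xs, ys in zip(xx[mask], yy[mask]):
    r = np.hypot(np.hypot(obs[0] - xs, obs[1] - ys), obs[2])
    p += rho0 / (2 * np.pi) * accel(t - r / c) / r * dS

peak_time = t[np.argmax(p)]
print(f"peak pressure {p.max():.2f} Pa at t = {peak_time:.2f} s")
```

On axis at 1 km the element delays differ by only milliseconds, so the pulse arrives nearly coherent; an off-axis observer would see interference reshape the pulse, as described above.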
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
The TongGuan section of the Weihe River watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. This paper takes the Weihe River watershed above the TongGuan section as its study area and COD as the water quality parameter. Based on the differing discharge characteristics of point and non-point source pollution, a characteristic section load (CSLD) method is proposed, and the point and non-point source pollution loads of the watershed above the TongGuan section are calculated for the rainy, normal, and dry seasons of 2013. The results show that the monthly point source pollution loads are discharged stably, whereas the monthly non-point source pollution loads vary greatly, and that the non-point source share of the total COD pollution load decreases from the rainy to the wet to the normal period.
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical harmonics beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it has intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations by deconvolution. First, a new formulation is proposed that expresses SHB's output as a convolution of the true source strength distribution with the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods originally developed for planar arrays, namely the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least squares (NNLS), Richardson-Lucy (RL), and CLEAN, are successfully adapted to SHB and shown to yield highly resolved, deblurred maps. Finally, the merits of the deconvolution methods are validated, and the dependence of the reconstructed source strength and pressure contribution on focus distance is explored both in computer simulations and experimentally. Several interesting results emerge from this study: (1) compared with SHB, DAMAS, NNLS, RL, and CLEAN all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes, allowing clear and unambiguous identification of single or incoherent sources. (2) For coherent sources, RL performs best, followed by DAMAS and NNLS, while CLEAN performs worst owing to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed distance, referred to as the focus distance. 
(4) The true source strength can be recovered by dividing the reconstructed strength by a coefficient given by the square of the focus distance divided by the real source-to-array-center distance. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true value. This study should contribute significantly to the accurate localization and quantification of acoustic sources in cabin environments.
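The b = A s formulation lends itself to a compact illustration: build a PSF matrix, blur a sparse source map, and recover it with nonnegative least squares. The 1-D grid and Gaussian PSF below are stand-ins for the spherical-harmonics PSF, chosen only to keep the sketch short.

```python
import numpy as np
from scipy.optimize import nnls

# Deconvolution sketch: the beamformer output b is modeled as the PSF matrix A
# acting on the true source-strength vector s (b = A s), and NNLS recovers s.
# The 1-D grid and Gaussian PSF are illustrative stand-ins for the spherical-
# harmonics PSF.
grid = np.arange(50)

def psf(center, width=2.0):
    return np.exp(-0.5 * ((grid - center) / width) ** 2)

A = np.column_stack([psf(c) for c in grid])  # column j: response to unit source at j

s_true = np.zeros(50)
s_true[15], s_true[32] = 2.0, 1.0   # two incoherent point sources
b = A @ s_true                       # blurred beamforming map

s_hat, residual = nnls(A, b)
print("strongest recovered sources:", sorted(np.argsort(s_hat)[-2:]))
```

DAMAS and RL iterate toward a similar nonnegative solution; NNLS solves it directly, which is why the sidelobe-free, sharpened maps in the paper emerge from all of these methods.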
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited to the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a GARLIC problem when the positions of the detector point and the end points of the line source are given in an arbitrary rectangular coordinate system. As an example, isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
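The point-kernel scheme GARLIC implements, integrating attenuated, buildup-corrected point elements along the line, can be sketched numerically. The geometry, attenuation coefficient, and linear buildup factor below are illustrative assumptions, not GARLIC's actual kernel.

```python
import numpy as np

# Point-kernel sketch of a GARLIC-type integral: attenuated, buildup-corrected
# contributions of line-source elements, summed at the detector. Geometry,
# attenuation coefficient, and the linear buildup factor are all assumptions.
S_L = 1.0e6   # source strength per unit length, photons/(s*cm)
L = 200.0     # line-source length, cm
h = 100.0     # perpendicular distance from line to detector, cm
t = 20.0      # slab thickness, cm
mu = 0.06     # slab attenuation coefficient, 1/cm

x = np.linspace(-L / 2, L / 2, 2001)   # positions of line elements
d = np.sqrt(x**2 + h**2)               # element-to-detector distance
mfp = mu * t * d / h                   # slant path through the slab, in mean free paths
B = 1.0 + mfp                          # simple linear point-source buildup factor
integrand = S_L * B * np.exp(-mfp) / (4 * np.pi * d**2)
flux = integrand.sum() * (x[1] - x[0])
print(f"flux at detector: {flux:.3e} photons/(cm^2*s)")
```

The buildup factor sits inside the integral, as in GARLIC's main-shield treatment; a constant factor pulled outside the integral corresponds to the cruder treatment of the additional shields.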
NASA Technical Reports Server (NTRS)
1976-01-01
NASA's Jet Propulsion Laboratory has developed a technique to decrease exposure to harmful x-rays in mammography (breast radiography). Usually, physicians make more than one exposure to arrive at an x-ray film of acceptable density. Now the same solar cells used to convert sunlight into electricity on space satellites can make a single exposure sufficient. When a solar cell sensor is positioned directly beneath the x-ray film, it can determine exactly when the film has received sufficient radiation and has been exposed to optimum density. At that point, associated electronic equipment sends a signal to cut off the x-ray source. Reducing mammography to single exposures not only lowered the x-ray hazard significantly but also doubled the number of patient examinations handled by one machine. The NASA laboratory used this control system at Huntington Memorial Hospital with overwhelming success.
Satellite-based quantum communication terminal employing state-of-the-art technology
NASA Astrophysics Data System (ADS)
Pfennigbauer, Martin; Aspelmeyer, Markus; Leeb, Walter R.; Baister, Guy; Dreischer, Thomas; Jennewein, Thomas; Neckamm, Gregor; Perdigues, Josep M.; Weinfurter, Harald; Zeilinger, Anton
2005-09-01
Feature Issue on Optical Wireless Communications (OWC). We investigate the design and the accommodation of a quantum communication transceiver in an existing classical optical communication terminal on board a satellite. Operation from a low earth orbit (LEO) platform (e.g., the International Space Station) would allow transmission of single photons and pairs of entangled photons to ground stations and hence permit quantum communication applications such as quantum cryptography on a global scale. Integration of a source generating entangled photon pairs and of single-photon detection into existing optical terminal designs is feasible. Moreover, major subunits of the classical terminals, such as those for pointing, acquisition, and tracking, as well as those providing the required electronic, thermal, and structural backbone, can be adapted to meet the needs of a quantum communication terminal.
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog (Thompson et al. 1995) and its supplement (Thompson et al. 1996), this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
Investigation on RGB laser source applied to dynamic photoelastic experiment
NASA Astrophysics Data System (ADS)
Li, Songgang; Yang, Guobiao; Zeng, Weiming
2014-06-01
When an elastomer sustains a shock or blast load, the internal stress state at every point changes rapidly over time. Dynamic photoelasticity is an experimental stress analysis method for studying dynamic stress and stress wave propagation. The light source is one of the most important devices in a dynamic photoelastic experiment system, and applying an RGB laser source to such a system is an innovative advance. The RGB laser is synthesized from red, green, and blue lasers and can serve either as a single-wavelength laser source or as a synthesized white laser source. With an RGB laser as the light source, colored isochromatics can be captured in dynamic photoelastic experiments, and even the black zero-order fringe and the isoclinics can be collected, which facilitates the analysis and study of transient stress and stress wave propagation. The RGB laser provides highly stable, continuous output, its power can be adjusted, and the three wavelengths can be combined in different power ratios. Used in dynamic photoelastic experiments, the RGB laser light source overcomes a number of deficiencies and shortcomings of other light sources and simplifies the experiment, achieving good results.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8, 16, and 32 bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating, and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image, without the need to store multiple files in various sizes.
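Tile-based streaming rests on a simple piece of bookkeeping: a pyramid of halved resolution levels, each cut into fixed-size tiles, so that only the tiles covering the viewport need to be served. A sketch of that arithmetic, with a 256 px tile size assumed rather than taken from IIPImage's configuration:

```python
import math

# Sketch of the tile-pyramid bookkeeping behind tiled streaming: each level
# halves the previous one, and only tiles covering the viewport are served.
# The 256 px tile size is a common choice, assumed here rather than taken
# from IIPImage's configuration.
def pyramid(width, height, tile=256):
    levels = []
    w, h = width, height
    while True:
        levels.append((w, h, math.ceil(w / tile) * math.ceil(h / tile)))
        if w <= tile and h <= tile:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels[::-1]  # smallest level first

levels = pyramid(100_000, 100_000)  # a 10-gigapixel image
print(f"{len(levels)} levels, {levels[-1][2]} tiles at full resolution")
```

The storage overhead of the whole pyramid is bounded by a geometric series, roughly a third more than the full-resolution image alone, which is what makes real-time zooming over gigapixel images affordable.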
Perspex machine: V. Compilation of C programs
NASA Astrophysics Data System (ADS)
Spanner, Matthew P.; Anderson, James A. D. W.
2006-01-01
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
NASA Astrophysics Data System (ADS)
Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding
2017-04-01
Nitrogen oxides (NOX) and sulfur dioxide (SO2) emitted by coal combustion, which are oxidized quickly in the atmosphere and lead to secondary aerosol formation and acid deposition, are major causes of China's regional fog-haze pollution. An extensive literature has quantitatively estimated the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources, to study NO2 and SO2 reduction trends in China amid complex polluted sources. First, to find the time range over which actual emissions can be derived from satellite observations, we analyzed and compared the spatial distributions of mean daily, monthly, seasonal, and annual OMI NO2 and SO2 concentrations around a single power plant. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by an area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient near the power plant. Finally, the model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and to verify the effectiveness of the flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants over the 10 years from 2006 to 2015. We observe that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source is affected not only by the source's own emissions but also by pollutant transport and diffusion driven by meteorological factors in different seasons. 
Our proposed model can be used to identify the effective operation time of the FGD and SCR equipment in a coal-fired power plant.
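The area-weighted pixel-averaging step can be illustrated in one dimension: each grid cell takes the mean of the overlapping satellite pixels, weighted by overlap length. The pixel extents and values below are invented for illustration, not OMI geometry.

```python
import numpy as np

# One-dimensional sketch of area-weighted pixel averaging: each grid cell takes
# the mean of the overlapping satellite pixels, weighted by overlap length.
# Pixel extents and values are invented for illustration, not OMI geometry.
def overlap(a0, a1, b0, b1):
    return max(0.0, min(a1, b1) - max(a0, b0))

pixels = [(0.0, 12.5, 5.0), (12.5, 24.0, 9.0)]  # (left edge, right edge, value)
grid_edges = np.arange(0.0, 25.0, 1.0)          # 1-unit grid cells

cells = []
for g0, g1 in zip(grid_edges[:-1], grid_edges[1:]):
    w = [overlap(g0, g1, p0, p1) for p0, p1, _ in pixels]
    v = [val for _, _, val in pixels]
    cells.append(np.average(v, weights=w) if sum(w) > 0 else np.nan)

# The cell straddling the pixel boundary blends both pixel values
print(cells[11], cells[12], cells[13])
```

In two dimensions the weights become overlap areas between pixel footprints and grid cells, but the averaging logic is the same.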
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cieza, Lucas A.; Mathews, Geoffrey S.; Kraus, Adam L.
We present deep Sparse Aperture Masking (SAM) observations obtained with the ESO Very Large Telescope of the pre-transitional disk object FL Cha (SpT = K8, d = 160 pc), the disk of which is known to have a wide optically thin gap separating optically thick inner and outer disk components. We find non-zero closure phases, indicating a significant flux asymmetry in the K_S-band emission (e.g., a departure from a single point source detection). We also present radiative transfer modeling of the spectral energy distribution of the FL Cha system and find that the gap extends from 0.06 (+0.05/-0.01) AU to 8.3 ± 1.3 AU. We demonstrate that the non-zero closure phases can be explained almost equally well by starlight scattered off the inner edge of the outer disk or by a (sub)stellar companion. Single-epoch, single-wavelength SAM observations of transitional disks with large cavities that could become resolved should thus be interpreted with caution, taking the disk and its properties into consideration. In the context of a binary model, the signal is most consistent with a high-contrast (ΔK_S ≈ 4.8 mag) source at a ≈40 mas (6 AU) projected separation. However, the flux ratio and separation parameters remain highly degenerate, and a much brighter source (ΔK_S ≈ 1 mag) at 15 mas (2.4 AU) can also reproduce the signal. Second-epoch, multi-wavelength observations are needed to establish the nature of the SAM detection in FL Cha.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
A singly charged ion source for radioactive ¹¹C ion acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katagiri, K.; Noda, A.; Nagatsu, K.
2016-02-15
A new singly charged ion source using electron impact ionization has been developed to realize an isotope separation on-line system for simultaneous positron emission tomography imaging and heavy-ion cancer therapy using radioactive ¹¹C ion beams. Low-energy electron beams are used in the electron impact ion source to produce singly charged ions. Ionization efficiency was calculated in order to set the geometric parameters of the ion source and to determine the electron emission current required for high ionization efficiency. Based on these considerations, the singly charged ion source was designed and fabricated. In testing, the fabricated ion source was found to perform favorably as a singly charged ion source.
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example nominally single-pole ("N-1P"), devices. An exemplary method includes: creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point; summing the secondary signals reaching the summation point after propagating along the signal paths to provide a summed signal; performing a filtering or delaying operation in at least one of the signal paths so that the secondary signals reaching the summation point have a defined time correlation with respect to one another; applying a set of weighting coefficients to the secondary signals propagating along the signal paths; and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
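As a hedged illustration of the delayed, weighted-sum idea (a sketch, not the patented implementation), the example below corrects a single-pole exponential decay by weighting the difference of two boxcar averages taken before and after a step; the filter lengths and time constant are assumed values.

```python
import numpy as np

# Illustrative weighted-sum capture (not the patented implementation): a
# step-like pulse from a nominally single-pole device decays with time
# constant tau; weighting the difference of two delayed boxcar averages
# corrects the decay and recovers the step amplitude. All parameters assumed.
tau = 50.0                     # decay time constant, in samples
n = np.arange(400)
step, amp = 100, 7.5
sig = np.where(n >= step, amp * np.exp(-(n - step) / tau), 0.0)

L, G = 20, 5                   # boxcar length and gap around the step
after = sig[step + G:step + G + L].mean()    # path 1: delayed post-step average
before = sig[step - G - L:step - G].mean()   # path 2: baseline average
k = np.arange(G, G + L)
w = 1.0 / np.exp(-k / tau).mean()            # weight undoing the mean decay
amp_hat = w * (after - before)
print(f"estimated step amplitude: {amp_hat:.2f}")
```

Each average plays the role of a filtered, delayed secondary signal; the weight w is the coefficient applied before the capture, so the captured value measures the step size despite the non-ideal decay.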
Ion release from, and fluoride recharge of a composite with a fluoride-containing bioactive glass.
Davis, Harry B; Gwinner, Fernanda; Mitchell, John C; Ferracane, Jack L
2014-10-01
Materials capable of releasing ions such as calcium and fluoride, which are necessary for the remineralization of dentin and enamel, have been the topic of intensive research for many years. The source of calcium has most often been some form of calcium phosphate, and that of fluoride one of several metal fluoride or hexafluorophosphate salts. Fluoride-containing bioactive glass (BAG) prepared by the sol-gel method acts as a single source of both calcium and fluoride ions in aqueous solutions. The objective of this investigation was to determine whether BAG, when added to a composite formulation, can serve as a single source of calcium and fluoride ion release over an extended time period, and whether the BAG-containing composite can be recharged by exposure to a 5000 ppm fluoride solution. BAG 61 (61% Si; 31% Ca; 4% P; 3% F; 1% B) and BAG 81 (81% Si; 11% Ca; 4% P; 3% F; 1% B) were synthesized by the sol-gel method. The composite consisted of 50/50 Bis-GMA/TEGDMA, 0.8% EDMAB, 0.4% CQ, and 0.05% BHT, combined with a mixture of BAG (15%) and strontium glass (85%) to a total filler load of 72% by weight. Disks were prepared, allowed to age for 24 h, abraded, then placed into DI water. Calcium and fluoride release was measured by atomic absorption spectroscopy and fluoride ion selective electrode methods, respectively, after 2, 22, and 222 h. The composite samples were then soaked for 5 min in an aqueous 5000 ppm fluoride solution, after which calcium and fluoride release was again measured at the 2, 22, and 222 h time points. Prior to fluoride recharge, release of fluoride ions was similar for the BAG 61 and BAG 81 composites after 2 h, and also similar after 22 h. At the four subsequent time points, one prior to and three following fluoride recharge, the BAG 81 composite released significantly more fluoride ions (p<0.05). 
Both composites were recharged by exposure to 5000 ppm fluoride, although the BAG 81 composite was recharged more than the BAG 61 composite. The BAG 61 composite released substantially more calcium ions prior to fluoride recharge during each of the 2 and 22 h time periods. Thereafter, the release of calcium at the four subsequent time points was not significantly different (p>0.05) between the two composites. These results show that, when added to a composite formulation, fluoride-containing bioactive glass made by the sol-gel route can function as a single source of both calcium and fluoride ions, and that the composite can be readily recharged with fluoride. Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Single Fluorescent Molecules as Nano-Illuminators for Biological Structure and Function
NASA Astrophysics Data System (ADS)
Moerner, W. E.
2011-03-01
Since the first optical detection and spectroscopy of a single molecule in a solid (Phys. Rev. Lett. {62}, 2535 (1989)), much has been learned about the ability of single molecules to probe local nanoenvironments and individual behavior in biological and nonbiological materials in the absence of the ensemble averaging that can obscure heterogeneity. Because each single fluorophore acts as a light source roughly 1 nm in size, microscopic imaging of individual fluorophores leads naturally to superlocalization, or determination of the position of the molecule with precision beyond the optical diffraction limit, simply by digitization of the point-spread function from the single emitter. For example, the shape of single filaments in a living cell can be extracted simply by allowing a single molecule to move through the filament (PNAS {103}, 10929 (2006)). The addition of photoinduced control of single-molecule emission allows imaging beyond the diffraction limit (super-resolution), and a new array of acronyms (PALM, STORM, F-PALM, etc.) and advances have appeared. We have used the native blinking and switching of a common yellow-emitting variant of green fluorescent protein (EYFP), reported more than a decade ago (Nature {388}, 355 (1997)), to achieve sub-40 nm super-resolution imaging of several protein structures in the bacterium Caulobacter crescentus: the quasi-helix of the actin-like protein MreB (Nat. Meth. {5}, 947 (2008)), the cellular distribution of the DNA binding protein HU (submitted), and the recently discovered division spindle composed of ParA filaments (Nat. Cell Biol. {12}, 791 (2010)). Even with these advances, better emitters would provide more photons and improved resolution, and a new photoactivatable small-molecule emitter has recently been synthesized and targeted to specific structures in living cells to provide super-resolution images (JACS {132}, 15099 (2010)). 
Finally, a new optical method for extracting three-dimensional position information based on a double-helix point spread function enables quantitative tracking of single mRNA particles in living yeast cells with 15 ms time resolution and 25-50 nm spatial precision (PNAS {107}, 17864 (2010)). These examples illustrate the power of single-molecule optical imaging in extracting new structural and functional information in living cells.
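Superlocalization by digitizing the point-spread function can be sketched directly: simulate N photon arrivals from a Gaussian PSF, bin them onto camera pixels, and take the centroid, whose precision scales roughly as the PSF width over sqrt(N). All parameters below are illustrative.

```python
import numpy as np

# Superlocalization sketch: simulate N photon arrivals drawn from a Gaussian
# point-spread function, bin them onto ~100 nm pixels, and take the centroid.
# Precision scales roughly as s/sqrt(N); all parameter values are illustrative.
rng = np.random.default_rng(0)
s, N = 150.0, 4000                     # PSF sigma (nm), detected photons
true_xy = np.array([12.0, -7.0])       # emitter position, nm

photons = rng.normal(true_xy, s, size=(N, 2))
counts, xe, ye = np.histogram2d(photons[:, 0], photons[:, 1],
                                bins=16, range=[[-800, 800], [-800, 800]])
xc, yc = (xe[:-1] + xe[1:]) / 2, (ye[:-1] + ye[1:]) / 2
est_x = (counts.sum(axis=1) * xc).sum() / counts.sum()
est_y = (counts.sum(axis=0) * yc).sum() / counts.sum()
err = np.hypot(est_x - true_xy[0], est_y - true_xy[1])
print(f"localization error: {err:.1f} nm, far below the PSF sigma of {s:.0f} nm")
```

With a ~150 nm PSF and a few thousand photons the centroid lands within a few nanometers of the true position, which is the arithmetic behind the sub-40 nm imaging quoted above.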
NASA Astrophysics Data System (ADS)
Vemuri, SH. S.; Bosworth, R.; Morrison, J. F.; Kerrigan, E. C.
2018-05-01
The growth of Tollmien-Schlichting (TS) waves is experimentally attenuated using a single-input and single-output (SISO) feedback system, where the TS wave packet is generated by a surface point source in a flat-plate boundary layer. The SISO system consists of a single wall-mounted hot wire as the sensor and a miniature speaker as the actuator. The actuation is achieved through a dual-slot geometry to minimize the cavity near-field effects on the sensor. The experimental setup to generate TS waves or wave packets is very similar to that used by Li and Gaster [J. Fluid Mech. 550, 185 (2006), 10.1017/S0022112005008219]. The aim is to investigate the performance of the SISO control system in attenuating single-frequency, two-dimensional disturbances generated by these configurations. The necessary plant models are obtained using system identification, and the controllers are then designed based on the models and implemented in real-time to test their performance. Cancellation of the rms streamwise velocity fluctuation of TS waves is evident over a significant domain.
Compressive sensing for single-shot two-dimensional coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, E.; Spencer, A.; Spokoyny, B.
2017-02-01
In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response with signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.
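A minimal compressed-sensing sketch in the same spirit, though not the SPARSE instrument pipeline: a sparse spectrum is measured through a small number of random binary patterns and recovered by iterative soft thresholding (ISTA). The pattern type, problem sizes, and regularization weight are assumptions.

```python
import numpy as np

# Compressed-sensing sketch in the spirit of single-element detection (not the
# SPARSE instrument pipeline): a sparse spectrum x is measured through M << N
# random binary patterns (y = Phi x) and recovered by iterative soft
# thresholding (ISTA). Sizes and the regularization weight are assumptions.
rng = np.random.default_rng(1)
N, M = 128, 40                          # roughly 3x compression
x = np.zeros(N)
x[[20, 63, 90]] = [3.0, 2.0, 1.5]       # three spectral peaks

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # unit-norm columns
y = Phi @ x                                               # multiplexed measurements

lam = 0.1
step = 0.9 / np.linalg.norm(Phi, 2) ** 2
x_hat = np.zeros(N)
for _ in range(500):
    g = x_hat + step * Phi.T @ (y - Phi @ x_hat)                  # gradient step
    x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

print("recovered support:", np.where(x_hat > 0.5)[0])
```

A digital micromirror device supplies exactly this kind of ±1 patterning in hardware; Hadamard sensing replaces the random rows with Hadamard rows and, at full rank, needs no sparsity prior.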
NASA Astrophysics Data System (ADS)
Kuhlmann, Andreas V.; Houel, Julien; Brunner, Daniel; Ludwig, Arne; Reuter, Dirk; Wieck, Andreas D.; Warburton, Richard J.
2013-07-01
Optically active quantum dots, for instance self-assembled InGaAs quantum dots, are potentially excellent single photon sources. The fidelity of the single photons is much improved using resonant rather than non-resonant excitation. With resonant excitation, the challenge is to distinguish between resonance fluorescence and scattered laser light. We have met this challenge by creating a polarization-based dark-field microscope to measure the resonance fluorescence from a single quantum dot at low temperature. We achieve suppression of the scattered laser exceeding a factor of 10^7 and background-free detection of resonance fluorescence. The same optical setup operates over the entire quantum dot emission range (920-980 nm) and also in high magnetic fields. The major development is the outstanding long-term stability: once the dark-field point has been established, the microscope operates for days without realignment. The mechanical and optical designs of the microscope are presented, as well as exemplary resonance fluorescence spectroscopy results on individual quantum dots that underline the microscope's excellent performance.
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point sources contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
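A typical load apportionment model takes the form L(Q) = a + b·Q^c, with the flow-independent term a attributed to point sources; a biogeochemical process that releases SRP at low flow inflates a, which is the overestimation risk described above. A sketch with synthetic data (all coefficients invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Load apportionment sketch: river P load modeled as a flow-independent
# point-source term plus a flow-dependent diffuse term, L(Q) = a + b*Q**c.
# The data are synthetic and all coefficients invented for illustration.
rng = np.random.default_rng(2)
Q = rng.uniform(0.5, 20.0, 200)                       # discharge, m^3/s
load = 2.0 + 0.3 * Q**1.6 + rng.normal(0, 0.3, 200)   # "observed" load

def lam_model(Q, a, b, c):
    return a + b * Q**c

(a, b, c), _ = curve_fit(lam_model, Q, load, p0=[1.0, 0.1, 1.0])
point_share = a / lam_model(5.0, a, b, c)   # apparent point-source share at Q = 5
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}; point share at Q=5: {point_share:.0%}")
```

Any low-flow SRP release of biogeochemical origin, such as the hypothesized Fe oxyhydroxide reductive dissolution, raises the fitted a, so the apparent point share overstates true point inputs precisely during the low-flow period when eutrophication risk peaks.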
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, the water pollution in the reservoir is becoming more serious owing to the non-point pollution as well as point source pollution on the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point pollution and their distribution on the upstream watershed must be understood fully. The SWAT model is used to simulate the production and transportation of the non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and topographic characteristics of the stream network and sub-basins are all derived from the DEM by ArcGIS software. The soil and land use data are reclassified and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data of several hydrologic monitoring stations in the study area. The results of the calibration show that the model performs fairly well. Then the calibrated model was used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year respectively. The time and space distribution of flow, sediment and non-point source pollution were analyzed depending on the simulated results. The comparison of different hydrologic years on calculation results is dramatic. The loading of non-point source pollution in the wet year is relatively larger but smaller in the dry year since the non-point source pollutants are mainly transported through the runoff. The pollution loading within a year is mainly produced in the flood season. 
Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so the critical areas and reaches in the study area can be identified. The simulation results show that different land uses yield different results and that fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented based on an analysis of the model results.
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a genetic algorithm for design optimization. The design tool produces optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds-number-based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep and the local airfoil sections produced blades with favorable tradeoffs between single- and multiple-point optimizations of propeller performance and acoustic noise signatures. Optimizations using a binary-encoded IMPROVE(c) Genetic Algorithm (GA) and a real-encoded GA were obtained after optimization runs, with some premature convergence. The newly developed real-encoded GA was used to obtain the majority of the results and generally produced better convergence characteristics than the binary-encoded GA. The optimization trade-offs show that single-point optimized propellers have favorable performance, but their circulation distributions were less smooth than those of dual-point or multiobjective optimizations. Some of the single-point optimizations generated propellers with proplets, which show a loading shift to the blade tip region. 
When noise is included in the objective functions, some propellers indicate a circulation shift to the inboard sections of the propeller, as well as a reduction in propeller diameter. In addition, the propeller number was increased in some optimizations to reduce the acoustic blade signature.
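The real-encoded GA mentioned above can be sketched minimally as follows. This is an illustrative generic implementation (tournament selection, blend crossover, Gaussian mutation) with a toy objective standing in for the propeller performance/noise evaluation; it is not the authors' IMPROVE code, and all parameter values are assumptions:

```python
import random

def real_coded_ga(objective, bounds, pop_size=40, generations=60,
                  crossover_rate=0.9, mutation_sigma=0.1, seed=1):
    """Minimal real-encoded GA: tournament selection, blend crossover,
    Gaussian mutation, with best-so-far bookkeeping."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(g, i):
        lo, hi = bounds[i]
        return min(max(g, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3)
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            child = []
            for i in range(dim):
                if rng.random() < crossover_rate:
                    a = rng.random()               # blend the two parent genes
                    g = a * p1[i] + (1 - a) * p2[i]
                else:
                    g = p1[i]
                g += rng.gauss(0.0, mutation_sigma)  # Gaussian mutation
                child.append(clip(g, i))
            new_pop.append(child)
        pop = new_pop
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand                            # record the best-so-far
    return best

# Toy objective standing in for the propeller performance evaluation
sphere = lambda x: sum(v * v for v in x)
best = real_coded_ga(sphere, bounds=[(-5, 5)] * 4)
```

Real-coded operators avoid the discretization and Hamming-cliff effects of binary encodings, which is consistent with the better convergence the abstract reports.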
NASA Astrophysics Data System (ADS)
Aguirre, Paula; Lindner, Robert R.; Baker, Andrew J.; Bond, J. Richard; Dünner, Rolando; Galaz, Gaspar; Gallardo, Patricio; Hilton, Matt; Hughes, John P.; Infante, Leopoldo; Lima, Marcos; Menten, Karl M.; Sievers, Jonathan; Weiss, Axel; Wollack, Edward J.
2018-03-01
We present a multiwavelength analysis of 48 submillimeter galaxies (SMGs) detected in the Large APEX Bolometer Camera/Atacama Cosmology Telescope (ACT) Survey of Clusters at All Redshifts, LASCAR, which acquired new 870 μm and Australia Telescope Compact Array 2.1 GHz observations of 10 galaxy clusters detected through their Sunyaev–Zel’dovich effect (SZE) signal by the ACT. Far-infrared observations were also conducted with the Photodetector Array Camera and Spectrometer (100/160 μm) and SPIRE (250/350/500 μm) instruments on Herschel for sample subsets of five and six clusters. LASCAR 870 μm maps were reduced using a multiscale iterative pipeline that removes the SZE increment signal, yielding point-source sensitivities of σ ∼ 2 mJy beam‑1. We detect in total 49 sources at the 4σ level and conduct a detailed multiwavelength analysis considering our new radio and far-IR observations plus existing near-IR and optical data. One source is identified as a foreground galaxy, 28 SMGs are matched to single radio sources, four have double radio counterparts, and 16 are undetected at 2.1 GHz but tentatively associated in some cases with near-IR/optical sources. We estimate photometric redshifts for 34 sources with secure (25) and tentative (9) matches at different wavelengths, obtaining a median z = 2.8^{+2.1}_{-1.7}. Compared to previous results for single-dish surveys, our redshift distribution has a comparatively larger fraction of sources at z > 3, and the high-redshift tail is more extended. This is consistent with millimeter spectroscopic confirmation of a growing number of high-z SMGs and relevant for testing of cosmological models. Analytical lens modeling is applied to estimate magnification factors for 42 SMGs at clustercentric radii >1.′2. With the demagnified flux densities and source-plane areas, we obtain integral number counts that agree with previous submillimeter surveys.
Spatially-resolved probing of biological phantoms by point-radiance spectroscopy
NASA Astrophysics Data System (ADS)
Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.
2011-03-01
Interstitial fiber-optic based strategies for therapy monitoring and assessment rely on detecting treatment-induced changes in the light distribution in biological tissues. We present an optical technique to identify spectrally and spatially specific tissue chromophores in highly scattering turbid media. Typical optical sensors measure non-directional light intensity (i.e. fluence) and require fiber translation (e.g. 3-5 positions), which is difficult to implement clinically. Point radiance spectroscopy is based on directional light collection (i.e. radiance) at a single point with a side-firing fiber that can be rotated up to 360°. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Experimental measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43° cleaved fiber (i.e. radiance detector). The background liquid-based scattering phantom was fabricated using 1% Intralipid (i.e. scattering medium). Light was collected at 1-5° increments through a 360° segment. Gold nanoparticles, placed into a 3.5 mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a non-invasive optical modality for prostate cancer monitoring.
The impact of land use on microbial surface water pollution.
Schreiber, Christiane; Rechenburg, Andrea; Rind, Esther; Kistemann, Thomas
2015-03-01
Our knowledge relating to water contamination from point and diffuse sources has increased in recent years and there have been many studies undertaken focusing on effluent from sewage plants or combined sewer overflows. However, there is still only a limited amount of microbial data on non-point sources leading to diffuse pollution of surface waters. In this study, the concentrations of several indicator micro-organisms and pathogens in the upper reaches of a river system were examined over a period of 16 months. In addition to bacteria, diffuse pollution caused by Giardia lamblia and Cryptosporidium spp. was analysed. A single land use type predestined to cause high concentrations of all microbial parameters could not be identified. The influence of different land use types varies between microbial species. The microbial concentration in river water cannot be explained by stable non-point effluent concentrations from different land use types. There is variation in the ranking of the potential of different land use types resulting in surface water contamination with regard to minimum, median and maximum effects. These differences between median and maximum impact indicate that small-scale events like spreading manure substantially influence the general contamination potential of a land use type and may cause increasing micro-organism concentrations in the river water by mobilisation during the next rainfall event. Copyright © 2014 Elsevier GmbH. All rights reserved.
PLATFORM DEFORMATION PHASE CORRECTION FOR THE AMiBA-13 COPLANAR INTERFEROMETER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Yu-Wei; Lin, Kai-Yang; Huang, Yau-De
2013-05-20
We present a new way to solve the platform deformation problem of coplanar interferometers. The platform of a coplanar interferometer can be deformed by driving forces and gravity. A deformed platform induces extra components into the geometric delay of each baseline and changes the phases of the observed visibilities. The reconstructed images are also diluted by these phase errors. The platform deformations of the Yuan-Tseh Lee Array for Microwave Background Anisotropy (AMiBA) were modeled based on photogrammetry data with about 20 mount pointing positions. We then used the differential optical pointing error between two optical telescopes to fit the model parameters in the entire horizontal coordinate space. With the platform deformation model, we can predict the errors of the geometric phase delays due to platform deformation for a given azimuth and elevation of the targets and calibrators. After correcting the phases of the radio point sources in the AMiBA interferometric data, we recover 50%-70% of the flux loss due to phase errors. This allows us to restore more than 90% of a source's flux. The method outlined in this work is applicable not only to the correction of deformation for other coplanar telescopes but also to single-dish telescopes with deformation problems. This work also forms the basis of the upcoming science results of AMiBA-13.
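Correcting a visibility for a modeled geometric-delay error amounts to multiplying by the conjugate phasor of the predicted phase shift; a minimal sketch with toy numbers (not the AMiBA deformation model or data pipeline):

```python
import cmath
import math

def correct_visibility(vis, delay_error_m, wavelength_m):
    """Remove the phase shift that a geometric delay error induces on
    a baseline: phi = 2*pi * (delay error) / wavelength."""
    phase_err = 2.0 * math.pi * delay_error_m / wavelength_m
    return vis * cmath.exp(-1j * phase_err)

# A unit-amplitude visibility corrupted by a 0.3 mm delay error
# at a 3 mm observing wavelength (illustrative values)
wavelength = 3e-3
true_vis = 1.0 + 0j
observed = true_vis * cmath.exp(1j * 2.0 * math.pi * 0.3e-3 / wavelength)
recovered = correct_visibility(observed, 0.3e-3, wavelength)
```

When many baselines carry uncorrelated phase errors, their coherent sum is diluted; applying the modeled correction per baseline before imaging is what restores the source flux.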
Models of the strongly lensed quasar DES J0408−5354
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnello, A.; et al.
We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A, B, D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of $\approx 0.8$ mag. In order to understand the lens configuration, we fit different models to the relative positions of A, B, D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\rm E} \approx 0.2''$) in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between $1.7''$ and $2.0''$, velocity dispersion $267$-$280$ km/s and enclosed mass $\approx 6\times10^{11} M_{\odot}$, even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time delay (B-A) is estimated as $\approx 85$ (resp. $\approx 125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.
Energy resolution of pulsed neutron beam provided by the ANNRI beamline at the J-PARC/MLF
NASA Astrophysics Data System (ADS)
Kino, K.; Furusaka, M.; Hiraga, F.; Kamiyama, T.; Kiyanagi, Y.; Furutaka, K.; Goko, S.; Hara, K. Y.; Harada, H.; Harada, M.; Hirose, K.; Kai, T.; Kimura, A.; Kin, T.; Kitatani, F.; Koizumi, M.; Maekawa, F.; Meigo, S.; Nakamura, S.; Ooi, M.; Ohta, M.; Oshima, M.; Toh, Y.; Igashira, M.; Katabuchi, T.; Mizumoto, M.; Hori, J.
2014-02-01
We studied the energy resolution of the pulsed neutron beam of the Accurate Neutron-Nucleus Reaction Measurement Instrument (ANNRI) at the Japan Proton Accelerator Research Complex/Materials and Life Science Experimental Facility (J-PARC/MLF). A simulation in the energy region from 0.7 meV to 1 MeV was performed and measurements were made at thermal (0.76-62 meV) and epithermal energies (4.8-410 eV). The neutron energy resolution of ANNRI determined by the time-of-flight technique depends on the time structure of the neutron pulse. We obtained the neutron energy resolution as a function of the neutron energy by the simulation in the two operation modes of the neutron source: double- and single-bunch modes. In double-bunch mode, the resolution deteriorates above about 10 eV because the time structure of the neutron pulse splits into two peaks. The time structures at 13 energy points from measurements in the thermal energy region agree with those of the simulation. In the epithermal energy region, the time structures at 17 energy points were obtained from measurements and agree with those of the simulation. The FWHM values of the time structures by the simulation and measurements were found to be almost consistent. In the single-bunch mode, the energy resolution is better than about 1% between 1 meV and 10 keV at a neutron source operation of 17.5 kW. These results confirm the energy resolution of the pulsed neutron beam produced by the ANNRI beamline.
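The time-of-flight assignment of neutron energies, and the way a pulse time width propagates into the relative energy resolution, can be sketched as follows. The flight path and timing values are illustrative, not ANNRI specifications:

```python
# Non-relativistic neutron time-of-flight: E = (1/2) m (L/t)^2.
# Differentiating gives the relative energy resolution dE/E = 2 * dt/t.
NEUTRON_MASS = 1.674927e-27   # kg
EV = 1.602177e-19             # J per eV

def tof_energy_eV(L_m, t_s):
    """Neutron kinetic energy (eV) from flight path L and flight time t."""
    v = L_m / t_s
    return 0.5 * NEUTRON_MASS * v * v / EV

def energy_resolution(t_s, dt_s):
    """Relative energy resolution dE/E from the pulse time width dt."""
    return 2.0 * dt_s / t_s

# Illustrative: 25 m flight path, 10 ms flight time (thermal range)
E = tof_energy_eV(25.0, 10e-3)           # ~33 meV
res = energy_resolution(10e-3, 5e-6)     # 5 us pulse width -> dE/E = 0.1%
```

This is why the double-bunch mode degrades the resolution above ~10 eV: at short flight times, the fixed inter-bunch spacing becomes a large effective dt relative to t.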
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuations for incoherent light sources.
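The effect can be illustrated numerically for N = 2 in a toy two-source model (not the authors' experimental configuration): if the relative phase between the sources is switched at random between 0 and π, the first-order fringe averages away, while the position-synchronized second-order pattern ⟨I²⟩ shows fringes with the halved period, i.e. an effective wavelength of λ/2 (M = 2):

```python
import math

def averaged_orders(phase_delta, phase_set):
    """Ensemble-average I and I^2 over a discrete set of relative phases.
    phase_delta = k*d*sin(theta) is the position-dependent fringe phase
    for two equal point sources: I = 2 + 2*cos(phase_delta + dphi)."""
    I = [2.0 + 2.0 * math.cos(phase_delta + dphi) for dphi in phase_set]
    return sum(I) / len(I), sum(v * v for v in I) / len(I)

# Relative phase between the two sources switched between 0 and pi
phases = [0.0, math.pi]

# First order:  <I>   = 2 everywhere (no fringe).
# Second order: <I^2> = 6 + 2*cos(2*phase_delta) -> period halved.
pattern = {pd: averaged_orders(pd, phases)
           for pd in (0.0, math.pi / 2.0, math.pi)}
```

The second-order fringe survives only because cos²(a + π) = cos²(a): the phase statistics, not the mean phase, set the high-order pattern, which is the point the abstract makes.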
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method based on the LES approach is proposed using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors of source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
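The Bayesian step can be sketched generically: a forward source-receptor operator predicts sensor concentrations for candidate source parameters, and a Gaussian measurement-error likelihood scores them against observations. The inverse-square forward model below is a stand-in assumption (the paper builds this operator from LES-averaged adjoint fields), and the sensor layout and grid are illustrative:

```python
def forward(source_xy, q, sensor_xy):
    """Stand-in source-receptor relationship: concentration from a point
    source of strength q, falling off with squared distance."""
    dx = sensor_xy[0] - source_xy[0]
    dy = sensor_xy[1] - source_xy[1]
    r2 = dx * dx + dy * dy + 1e-6   # regularized to avoid division by zero
    return q / r2

def log_likelihood(obs, source_xy, q, sensors, sigma=0.05):
    """Gaussian measurement-error log-likelihood over the sensor network."""
    ll = 0.0
    for c_obs, s in zip(obs, sensors):
        resid = c_obs - forward(source_xy, q, s)
        ll += -0.5 * (resid / sigma) ** 2
    return ll

sensors = [(2.0, 0.0), (0.0, 2.0), (3.0, 3.0), (-1.0, 1.0)]
true_src, true_q = (1.0, 1.0), 2.0
obs = [forward(true_src, true_q, s) for s in sensors]  # noise-free demo data

# Grid posterior (flat prior): scan candidate locations and strengths
candidates = [((x / 4, y / 4), q / 2)
              for x in range(-8, 13) for y in range(-8, 13)
              for q in range(1, 9)]
best = max(candidates, key=lambda c: log_likelihood(obs, c[0], c[1], sensors))
```

With a flat prior, the maximum a posteriori estimate coincides with the maximum-likelihood candidate; in the paper the gain comes from the forward operator itself being more accurate under LES than under RANS.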
Habboush, Nawar; Hamid, Laith; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The discretization of the brain and the definition of the Laplacian matrix influence the results of methods based on spatial and spatio-temporal smoothness, since the Laplacian operator defines the smoothness based on the neighborhood of each grid point. In this paper, the results of low resolution electromagnetic tomography (LORETA) and the spatio-temporal Kalman filter (STKF) are computed using, first, a grey-matter source space with the standard definition of the Laplacian matrix and, second, a whole-brain source space with a modified definition of the Laplacian matrix. Electroencephalographic (EEG) source imaging results of five inter-ictal spikes from a pre-surgical patient with epilepsy are used to validate the two approaches. Compared with the results using a grey-matter grid and the classical definition of the Laplacian matrix, the results using the whole-brain source space and the modified Laplacian matrix were concentrated in a single source activation, stable, and concordant with the location of the focal cortical dysplasia (FCD) in the patient's brain. This proof-of-concept study demonstrates a substantial improvement of source localization with both LORETA and STKF and constitutes a basis for further research in a large population of patients with epilepsy.
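The neighborhood-based Laplacian at the heart of this comparison can be built directly from the grid: a minimal sketch for a regular 3D source grid with 6-connectivity (illustrative, not the LORETA/STKF implementation, which additionally handles the irregular grey-matter geometry):

```python
def grid_laplacian(shape):
    """Graph Laplacian L = D - A for a regular 3D grid, 6-connectivity.
    Boundary points simply have fewer neighbors; how such points are
    handled is exactly what a modified Laplacian definition changes."""
    nx, ny, nz = shape
    n = nx * ny * nz

    def idx(x, y, z):
        return (x * ny + y) * nz + z

    L = [[0.0] * n for _ in range(n)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                i = idx(x, y, z)
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    X, Y, Z = x + dx, y + dy, z + dz
                    if 0 <= X < nx and 0 <= Y < ny and 0 <= Z < nz:
                        L[i][j] = -1.0 if (j := idx(X, Y, Z)) is not None else 0.0
                        L[i][i] += 1.0
    return L

L = grid_laplacian((3, 3, 3))   # interior point has degree 6, corner 3
```

Each row sums to zero, so L annihilates constant fields and penalizes only spatial variation, which is what makes it a smoothness prior.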
Spatial release from masking based on binaural processing for up to six maskers
Yost, William A.
2017-01-01
Spatial Release from Masking (SRM) was measured for identification of a female target word spoken in the presence of male masker words. Target words from a single loudspeaker located at midline were presented when two, four, or six masker words were presented either from the same source as the target or from spatially separated masker sources. All masker words were presented from loudspeakers located symmetrically around the centered target source in the front azimuth hemifield. Three masking conditions were employed: speech-in-speech masking (involving both informational and energetic masking), speech-in-noise masking (involving energetic masking), and filtered speech-in-filtered speech masking (involving informational masking). Psychophysical results were summarized as three-point psychometric functions relating proportion of correct word identification to target-to-masker ratio (in decibels) for both the co-located and the spatially separated target and masker source cases. SRM was then calculated by comparing the slopes and intercepts of these functions. SRM decreased as the number of symmetrically placed masker sources increased from two to six. This decrease was independent of the type of masking, with almost no SRM measured for six masker sources. These results suggest that when SRM is dependent primarily on binaural processing, SRM is effectively limited to fewer than six sound sources. PMID:28372135
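Quantifying SRM from psychometric functions can be sketched as fitting a function to proportion correct versus target-to-masker ratio in each configuration and taking the threshold shift. The logistic form, the grid-search fit, and the synthetic data below are all illustrative assumptions, not the study's analysis:

```python
import math

def logistic(tmr_db, threshold, slope):
    """Proportion correct as a function of target-to-masker ratio (dB);
    `threshold` is the 50%-correct point."""
    return 1.0 / (1.0 + math.exp(-slope * (tmr_db - threshold)))

def fit_threshold(tmrs, p_correct, slope=0.5):
    """Grid-search the threshold (in 0.1 dB steps) minimizing squared error."""
    best_t, best_err = None, float("inf")
    for t10 in range(-200, 201):          # thresholds from -20 to +20 dB
        t = t10 / 10.0
        err = sum((p - logistic(x, t, slope)) ** 2
                  for x, p in zip(tmrs, p_correct))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

tmrs = [-12.0, -6.0, 0.0]                 # a three-point psychometric function
colocated = [logistic(x, 2.0, 0.5) for x in tmrs]    # synthetic data
separated = [logistic(x, -6.0, 0.5) for x in tmrs]   # synthetic data

# SRM as the threshold shift (dB) when maskers are spatially separated
srm_db = fit_threshold(tmrs, colocated) - fit_threshold(tmrs, separated)
```

A shrinking threshold shift as masker count grows is exactly the pattern the abstract reports: with six symmetric maskers, the two fitted functions nearly coincide.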
Yoon, Se Jin; Noh, Si Cheol; Choi, Heung Ho
2007-01-01
The infrared diagnosis device provides two-dimensional images and patient-oriented results that can be easily understood by the person being examined; however, infrared-camera-based devices have disadvantages such as large size, high price, and inconvenient maintenance. This study therefore proposes a compact body-heat diagnosis device using a single infrared sensor, and implements an infrared detection system together with an algorithm that reconstructs thermography from the acquired point-source temperature data. The developed system had a temperature resolution of 0.1 degree and a reproducibility of +/-0.1 degree. The accuracy was 90.39% at an error bound of +/-0 degree and 99.98% at an error bound of +/-0.1 degree. To evaluate the proposed algorithm and system, the results were compared with infrared images from a camera-based device. Clinically meaningful thermal images were obtained from a patient with a lesion to verify the system's clinical applicability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhlmann, Andreas V.; Houel, Julien; Warburton, Richard J.
Optically active quantum dots, for instance self-assembled InGaAs quantum dots, are potentially excellent single photon sources. The fidelity of the single photons is much improved using resonant rather than non-resonant excitation. With resonant excitation, the challenge is to distinguish between resonance fluorescence and scattered laser light. We have met this challenge by creating a polarization-based dark-field microscope to measure the resonance fluorescence from a single quantum dot at low temperature. We achieve a suppression of the scattered laser exceeding a factor of 10^7 and background-free detection of resonance fluorescence. The same optical setup operates over the entire quantum dot emission range (920–980 nm) and also in high magnetic fields. The major development is the outstanding long-term stability: once the dark-field point has been established, the microscope operates for days without alignment. The mechanical and optical designs of the microscope are presented, as well as exemplary resonance fluorescence spectroscopy results on individual quantum dots to underline the microscope's excellent performance.
Ultra-wideband horn antenna with abrupt radiator
McEwan, Thomas E.
1998-01-01
An ultra-wideband horn antenna transmits and receives impulse waveforms for short-range radars and impulse time-of-flight systems. The antenna reduces or eliminates various sources of close-in radar clutter, including pulse dispersion and ringing, sidelobe clutter, and feedline coupling into the antenna. Dispersion is minimized with an abrupt launch point radiator element; sidelobe and feedline coupling are minimized by recessing the radiator into a metallic horn. The low-frequency cut-off associated with a horn is extended by configuring the radiator drive impedance to approach a short circuit at low frequencies. A tapered feed plate connects at one end to a feedline, and at the other end to a launcher plate which is mounted to an inside wall of the horn. The launcher plate and feed plate join at an abrupt edge which forms the single launch point of the antenna.
Magnetic Topology of Coronal Hole Linkages
NASA Technical Reports Server (NTRS)
Titov, V. S.; Mikic, Z.; Linker, J. A.; Lionello, R.; Antiochos, S. K.
2010-01-01
In recent work, Antiochos and coworkers argued that the boundary between the open and closed field regions on the Sun can be extremely complex, with narrow corridors of open flux connecting seemingly disconnected coronal holes to the main polar holes, and that these corridors may be the sources of the slow solar wind. We examine, in detail, the topology of such magnetic configurations using an analytical source surface model that allows for analysis of the field with arbitrary resolution. Our analysis reveals three important new results: First, a coronal hole boundary can join stably to the separatrix boundary of a parasitic polarity region. Second, a single parasitic polarity region can produce multiple null points in the corona and, more important, separator lines connecting these points. Such topologies are extremely favorable for magnetic reconnection, because it can now occur over the entire length of the separators rather than being confined to a small region around the nulls. Finally, the coronal holes are not connected by an open-field corridor of finite width, but instead are linked by a singular line that coincides with the separatrix footprint of the parasitic polarity. We investigate how the topological features described above evolve in response to motion of the parasitic polarity region. The implications of our results for the sources of the slow solar wind and for coronal and heliospheric observations are discussed.
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates of the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were calculated using the MCNPX code; the results for the FT8 CAP multiplicity tally option, with the values of ε, fd, and ft measured with a strong source, match the measurement results most closely, to within a 1% error. A preliminary calibration curve for the ASNC was generated using the point model equation relationship between 244Cm and 252Cf; the calibration coefficient for a non-multiplying sample is 2.78×10^5 (Doubles counts/s/g 244Cm). Preliminary calibration curves for the ACP samples were also obtained from an MCNPX simulation. A neutron multiplication influence on the increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of the calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
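For a non-multiplying sample, the calibration above is a linear relation between the Doubles rate and the effective 244Cm mass. A minimal sketch of applying it (the coefficient is taken from the abstract; the example rate is illustrative):

```python
# Preliminary ASNC calibration for non-multiplying samples:
#   Doubles rate [counts/s] = k * m(244Cm) [g],  with k = 2.78e5
K_DOUBLES_PER_GRAM = 2.78e5

def cm244_mass_from_doubles(doubles_rate_cps):
    """Invert the linear calibration to assay the effective 244Cm mass (g)."""
    return doubles_rate_cps / K_DOUBLES_PER_GRAM

def doubles_from_cm244_mass(mass_g):
    """Forward direction: expected Doubles rate for a given 244Cm mass."""
    return K_DOUBLES_PER_GRAM * mass_g

# Example: a measured Doubles rate of 556 counts/s corresponds to 2 mg 244Cm
mass = cm244_mass_from_doubles(556.0)
```

For multiplying samples (metal ingot, UO2 powder) this simple inversion no longer holds, which is why the abstract notes the Doubles rate increases with neutron multiplication and the curves must be corrected.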
Quantitative PET and SPECT performance characteristics of the Albira Trimodal pre-clinical tomograph
NASA Astrophysics Data System (ADS)
Spinks, T. J.; Karia, D.; Leach, M. O.; Flux, G.
2014-02-01
The Albira Trimodal pre-clinical scanner comprises PET, SPECT and CT sub-systems and thus provides a range of pre-clinical imaging options. The PET component consists of three rings of single-crystal LYSO detectors with axial/transverse fields-of-view (FOVs) of 148/80 mm. The SPECT component has two opposing CsI detectors (100 × 100 mm2) with single-pinhole (SPH) or multi(9)-pinhole (MPH) collimators; the detectors rotate in 6° increments and their spacing can be adjusted to provide different FOVs (25 to 120 mm). The CT sub-system provides ‘low’ (200 µA, 35 kVp) or ‘high’ (400 µA, 45 kVp) power x-rays onto a flat-panel CsI detector. This study examines the performance characteristics and quantitative accuracy of the PET and SPECT components. Using the NEMA NU 4-2008 specifications (22Na point source), the PET spatial resolution is 1.5 ± 0.1 mm on axis and sensitivity 6.3% (axial centre) and 4.6% (central 70 mm). The usable activity range is ≤ 10 MBq (18F) over which good linearity (within 5%) is obtained for a uniform cylinder spanning the axial FOV; increasing deviation from linearity with activity is, however, observed for the NEMA (mouse) line source phantom. Image uniformity axially is within 5%. Spatial resolution (SPH/MPH) for the minimum SPECT FOV used for mouse imaging (50 mm) is 1.5/1.7 mm and point source sensitivity 69/750 cps MBq-1. Axial uniformity of SPECT images (%CV of regions-of-interest counts along the axis) is mostly within 8% although there is a range of 30-40% for the largest FOV. The variation is significantly smaller within the central 40 mm. Instances of count rate nonlinearity (PET) and axial non-uniformity (SPECT) were found to be reproducible and thus amenable to empirical correction.
Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation
Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem
2017-01-01
Purpose Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results The EMCCD camera model was validated against experimentally acquired images and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. 
Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
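The imaging chain described in the Methods (PSF blur followed by a noisy EMCCD readout) can be sketched as follows. The Gaussian PSF, the gamma model for the electron-multiplying gain, and all parameter values are illustrative assumptions, not the paper's calibrated models:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(shape, sigma):
    """Normalized 2-D Gaussian, standing in for the depth-dependent
    microscope point-spread function."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def emccd_readout(photon_image, qe=0.9, em_gain=300.0, read_noise=30.0, bias=100.0):
    """Toy EMCCD model: Poisson shot noise, stochastic electron-multiplying
    gain (gamma-distributed), Gaussian read noise, and a fixed bias offset."""
    # Clip tiny negative residue left by the FFT convolution
    electrons = rng.poisson(np.maximum(qe * photon_image, 0.0))
    # Gamma approximation to the EM-register multiplication noise
    amplified = np.where(electrons > 0,
                         rng.gamma(np.maximum(electrons, 1), em_gain), 0.0)
    return amplified + rng.normal(bias, read_noise, photon_image.shape)

# A point source blurred by the PSF, then "measured" by the camera
scene = np.zeros((64, 64))
scene[32, 32] = 5000.0              # photons emitted by the point source
psf = gaussian_psf(scene.shape, sigma=3.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
frame = emccd_readout(blurred)
```

Comparing `frame` statistics against blank-frame statistics mimics the noise validation described in the Results.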
SU-E-T-149: Brachytherapy Patient Specific Quality Assurance for a HDR Vaginal Cylinder Case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbiere, J; Napoli, J; Ndlovu, A
2015-06-15
Purpose: Commonly, Ir-192 HDR treatment planning system commissioning is based only on a single absolute measurement of source activity, supplemented by tabulated parameters for multiple factors, without independent verification that the planned distribution corresponds to the actual delivered dose. The purpose of this work is to present a methodology using Gafchromic film with a statistically valid calibration curve that can be used to validate clinical HDR vaginal cylinder cases by comparing the calculated plan dose distribution in a plane with the corresponding measured planar dose. Methods: A vaginal cylinder plan was created with the Oncentra treatment planning system. The 3D dose matrix was exported to a Varian Eclipse workstation for convenient extraction of a 2D coronal dose plane corresponding to the film position. The plan was delivered with a sheet of Gafchromic EBT3 film positioned 1 mm from the catheter using an Ir-192 Nucletron HDR source. The film was then digitized with an Epson 10000 XL color scanner. Film analysis was performed with the MatLab imaging toolbox. A density-to-dose calibration curve was created using the TG-43 formalism for a single dwell position exposure at over 100 points for statistical accuracy. The plan and measured film dose planes were registered using a known dwell position relative to four film marks. The plan delivered 500 cGy to points 2 cm from the sources. Results: The distance to agreement of the 500 cGy isodose between the plan and film measurement was 0.5 mm laterally but as much as 1.5 mm superiorly and inferiorly. The difference between the computed plan dose and the film measurement was calculated per pixel. The greatest errors, up to 50 cGy, are near the apex. Conclusion: The methodology presented will be useful for implementing more comprehensive quality assurance to verify patient-specific dose distributions.
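The per-pixel comparison and the distance-to-agreement (DTA) of the 500 cGy isodose reported above can be sketched along these lines. The synthetic dose grids, pixel size, and the ±25 cGy isodose tolerance are made-up stand-ins for the registered plan and film planes:

```python
import numpy as np

def isodose_points(dose, level, tol):
    """Pixel coordinates (row, col) lying within tol of the isodose level."""
    return np.argwhere(np.abs(dose - level) < tol)

def distance_to_agreement(plan, film, level, tol=25.0, pixel_mm=0.5):
    """Worst-case distance (mm) from each plan-isodose pixel to the
    nearest film-isodose pixel."""
    p = isodose_points(plan, level, tol)
    f = isodose_points(film, level, tol)
    d2 = ((p[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)
    return np.sqrt(d2.min(axis=1)).max() * pixel_mm

# Synthetic inverse-square falloff around a dwell position; the "film"
# plane is shifted by one pixel to mimic a small registration error
yy, xx = np.mgrid[0:101, 0:101]
plan_dose = 2.0e4 / (np.hypot(yy - 50, xx - 50) + 1.0) ** 2
film_dose = 2.0e4 / (np.hypot(yy - 50, xx - 51) + 1.0) ** 2
dta = distance_to_agreement(plan_dose, film_dose, level=500.0)
```

A per-pixel difference map is simply `plan_dose - film_dose`; the DTA complements it where dose gradients are steep.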
NASA Astrophysics Data System (ADS)
Saito, R. K.; Minniti, D.; Dias, B.; Hempel, M.; Rejkuba, M.; Alonso-García, J.; Barbuy, B.; Catelan, M.; Emerson, J. P.; Gonzalez, O. A.; Lucas, P. W.; Zoccali, M.
2012-08-01
Context. The Milky Way (MW) bulge is a fundamental Galactic component for understanding the formation and evolution of galaxies, in particular our own. The ESO Public Survey VISTA Variables in the Vía Láctea (VVV) is a deep near-IR survey mapping the Galactic bulge and southern plane. For the bulge area alone, VVV covers ~315 deg2. Data taken during 2010 and 2011 covered the entire bulge area in the JHKs bands. Aims: We used VVV data for the whole bulge area as a single and homogeneous data set to build, for the first time, a single colour-magnitude diagram (CMD) for the entire Galactic bulge. Methods: Photometric data in the JHKs bands were combined to produce a single large data set containing 173 150 467 sources in the three bands, for the ~315 deg2 covered by VVV in the bulge. Selecting only the data points flagged as stellar, the total number of sources is 84 095 284. Results: We built the largest colour-magnitude diagrams published to date, containing 173.1+ million sources for all data points, and more than 84.0 million sources when accounting for the stellar sources only. The CMD has a complex shape, mostly owing to the complexity of the stellar population and the effects of extinction and reddening towards the Galactic centre. The red clump (RC) giants are seen double in magnitude at b ~ -8° to -10°, while in the inner part (b ~ -3°) they appear to spread in colour, or even split into a secondary peak. Stellar population models show the predominance of main-sequence and giant stars. The analysis of the outermost bulge area reveals a well-defined sequence of late K and M dwarfs, seen at (J - Ks) ~ 0.7-0.9 mag and Ks ≳ 14 mag. Conclusions: The interpretation of the CMD yields important information about the MW bulge, showing the fingerprint of its structure and content. We report a well-defined red dwarf sequence in the outermost bulge, which is important for the planetary transit searches of VVV. 
The double RC in magnitude seen in the outer bulge is the signature of the X-shaped MW bulge, while the spreading of the RC in colour, and even its splitting into a secondary peak, are caused by reddening effects. The region around the Galactic centre is harder to interpret because it is strongly affected by reddening and extinction. Based on observations taken within the ESO VISTA Public Survey VVV, Programme ID 179.B-2002. The VVV survey data are available through the ESO archive http://www.eso.org/sci/archive.html
Halfon, Philippe; Ouzan, Denis; Khiri, Hacène; Pénaranda, Guillaume; Castellani, Paul; Oulès, Valerie; Kahloun, Asma; Amrani, Nolwenn; Fanteria, Lise; Martineau, Agnès; Naldi, Lou; Bourlière, Marc
2012-01-01
Background & Aims: Point mutations in the coding region of the interleukin 28 gene (rs12979860) have recently been identified as predictors of the outcome of treatment of hepatitis C virus infection. This polymorphism detection was based on whole-blood DNA extraction. Alternatively, DNA for genetic diagnosis has been derived from buccal epithelial cells (BEC), dried blood spots (DBS), and genomic DNA from serum. The aim of the study was to investigate the reliability and accuracy of alternative routes of testing for the single nucleotide polymorphism allele rs12979860CC. Methods: Blood, plasma, and sera samples from 200 patients were extracted (400 µL). Buccal smears were tested using an FTA card. To simulate postal delay, we tested the influence of storage at ambient temperature on the different sources of DNA at five time points (baseline, 48 h, 6 days, 9 days, and 12 days). Results: There was 100% concordance between blood, plasma, sera, and BEC, validating the use of DNA extracted from BEC collected on cytology brushes for genetic testing. Genetic variations in the HPRT1 gene were detected using the smear technique in blood smears (3620 copies) as well as in buccal smears (5870 copies). These results are similar to those for whole blood diluted at 1/10. A minimum of 0.04 µL, 4 µL, and 40 µL was necessary to obtain exploitable results for whole blood, sera, and plasma, respectively. No significant variation between time points was observed for the different sources of DNA. IL28B SNP analysis at these different time points showed the same results using the four sources of DNA. Conclusion: We demonstrated that genomic DNA extraction from buccal cells, small amounts of serum, and dried blood spots is an alternative to DNA extracted from peripheral blood cells and is helpful in retrospective and prospective studies of multiple genetic markers, specifically in hard-to-reach individuals. PMID:22412970
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as net radiation balance. This multi-institution effort, funded by both governmental and non-governmental agencies, has provided a case study for identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements, from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
Quantitative assessment of dynamic PET imaging data in cancer imaging.
Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E
2012-11-01
Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.
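As a concrete example of the kind of kinetic quantity dynamic imaging yields, here is a minimal Patlak graphical analysis, one standard way to extract a net influx constant Ki from tissue and plasma time-activity curves. The curves and parameter values below are synthetic, and Patlak analysis is only one of several approaches, not necessarily the one used in the review's fluorothymidine case example:

```python
import numpy as np

def patlak_ki(t, ct, cp, t_star=20.0):
    """Net influx constant Ki from tissue (ct) and plasma (cp) time-activity
    curves, via linear regression on the Patlak coordinates after t_star min."""
    # Cumulative integral of the plasma input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
    x = int_cp / cp                 # "stretched time"
    y = ct / cp
    mask = t >= t_star              # keep only the linear late phase
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope, intercept

# Synthetic check with a known Ki: build an ideally Patlak-linear tissue curve
t = np.linspace(0.1, 60.0, 240)                           # minutes
cp = 100.0 * np.exp(-0.1 * t) + 5.0 * np.exp(-0.01 * t)   # hypothetical input
ki_true, v0 = 0.05, 0.3
int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
ct = ki_true * int_cp + v0 * cp
ki_est, v0_est = patlak_ki(t, ct, cp)
```

In practice the input function comes from arterial sampling or an image-derived region, which is one reason dynamic protocols are harder to standardize across centers.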
Control of Disturbing Loads in Residential and Commercial Buildings via Geometric Algebra
Castilla, Manuel-V
2013-01-01
Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept can be important for harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. We propose a multivectorial relative quality index δ~ associated with the power multivector. It can be taken as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among relative quality index multivectors, which may be measured for the different loads at the same metering point. The comparison gives information with magnitude, direction, and sense on the presence of disturbing loads. A numerical example illustrates the capabilities of the suggested approach. PMID:24260017
Ghost imaging with bucket detection and point detection
NASA Astrophysics Data System (ADS)
Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao
2018-04-01
We experimentally investigate ghost imaging with bucket detection and point detection using three types of illuminating source: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the choice of bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.
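A minimal computational sketch of the correlation used to reconstruct a ghost image from bucket-detector measurements, assuming uniform random speckle as a stand-in for the pseudo-thermal patterns (the experiment itself uses physical sources and detectors):

```python
import numpy as np

rng = np.random.default_rng(1)

def ghost_image(obj, n_patterns):
    """Reconstruct an image from bucket measurements via the covariance
    G(x, y) = <B I(x, y)> - <B><I(x, y)> over random speckle patterns."""
    g = np.zeros_like(obj, dtype=float)
    mean_i = np.zeros_like(g)
    mean_b = 0.0
    for _ in range(n_patterns):
        speckle = rng.random(obj.shape)     # pseudo-thermal-like pattern
        bucket = (speckle * obj).sum()      # single-pixel "bucket" signal
        g += bucket * speckle
        mean_i += speckle
        mean_b += bucket
    return g / n_patterns - (mean_b / n_patterns) * (mean_i / n_patterns)

obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                       # simple transmissive aperture
img = ghost_image(obj, n_patterns=5000)
```

The reconstruction improves as the number of patterns grows, since the per-pixel covariance estimate is noisy at finite sample size.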
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
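The intuition behind the flux-PDF discriminator can be illustrated with a toy Monte Carlo: smooth dark-matter-like emission gives Poisson pixel counts, while an unresolved point-source population with the same mean flux gives a wider, compound-Poisson distribution. The source brightness `s_mean` and all numbers are illustrative; the paper itself works analytically with generating functions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 200_000
mean_flux = 2.0                 # mean photon counts per pixel (arbitrary)

# Smooth, dark-matter-like emission: pixel counts are purely Poisson
smooth = rng.poisson(mean_flux, n_pix)

# Unresolved point-source population with the same mean flux: a Poisson
# number of sources per pixel, each contributing ~s_mean photons
s_mean = 10.0                   # mean photons per source (hypothetical)
n_src = rng.poisson(mean_flux / s_mean, n_pix)
clumpy = rng.poisson(n_src * s_mean)

# Same mean flux, but the compound distribution has a heavier tail:
# var(smooth) = mu, while var(clumpy) = mu * (1 + s_mean) in this toy model
```

The excess variance (and the heavy tail of rare bright pixels) is what a likelihood built on the full photon-count PDF can exploit.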
40 CFR 29.9 - How does the Administrator receive and respond to comments?
Code of Federal Regulations, 2010 CFR
2010-07-01
... State office or official is designated to act as a single point of contact between a State process and... program selected under § 29.6. (b) The single point of contact is not obligated to transmit comments from.... However, if a State process recommendation is transmitted by a single point of contact, all comments from...
Quantifying the errors due to the superposition of analytical deformation sources
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Pascal, K.
2012-04-01
The displacement field due to magma movement in the subsurface is often modelled using a Mogi point source or an Okada dislocation source embedded in a homogeneous elastic half-space. When the magmatic system cannot be modelled by a single source, it is often represented by several sources whose respective deformation fields are superimposed. However, in such a case the assumption of homogeneity in the half-space is violated and the interaction between sources in an elastic medium is neglected. In this investigation we have quantified the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or dislocation of the sources and their relative position. We also investigated three numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. The errors induced when neglecting the source interaction are expected to vary greatly with the physical and geometrical parameters of the model. We demonstrated that for certain scenarios these discrepancies can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources.
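The superposition approach whose error the authors quantify can be sketched with the textbook Mogi point-source surface displacements. The formula below is the standard half-space expression (with dV the source volume change and nu Poisson's ratio), and the geometry and source strengths are invented for illustration:

```python
import numpy as np

def mogi_displacement(x, y, xs, ys, depth, dvol, nu=0.25):
    """Surface displacement of a Mogi point pressure source in an elastic
    half-space: u = (1 - nu) * dV / pi * (offset or depth) / R^3."""
    dx, dy = x - xs, y - ys
    R3 = (dx**2 + dy**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dvol / np.pi
    return c * dx / R3, c * dy / R3, c * depth / R3   # ux, uy, uz

# Two adjacent sources combined by simple superposition (the approximation
# whose error is quantified against finite-element models in the paper)
x = np.linspace(-10e3, 10e3, 201)       # surface profile (m)
y = np.zeros_like(x)
u1 = mogi_displacement(x, y, -2e3, 0.0, 3e3, 1e6)   # dV = 1e6 m^3 at 3 km depth
u2 = mogi_displacement(x, y, 2e3, 0.0, 4e3, 5e5)
uz_total = u1[2] + u2[2]                # source interaction neglected
```

The paper's point is precisely that `uz_total` can deviate from a fully coupled numerical solution when the sources sit within a few radii of each other.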
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low-frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. 
This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
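A minimal sketch of the omega-square source spectrum underlying the pseudo point-source model, with several hypothetical subevents summed (the real model also combines path, site-amplification, and phase terms, which are omitted here):

```python
import numpy as np

def omega_square_spectrum(f, m0, fc):
    """Omega-square source displacement amplitude spectrum:
    flat at M0 below the corner frequency fc, falling as f^-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 1.5, 300)           # 0.1 to ~30 Hz
# Hypothetical subevents, each with its own seismic moment and corner
# frequency; the total here is a plain amplitude sum, ignoring phase
subevents = [(1e18, 0.3), (5e17, 0.5), (2e17, 0.8)]   # (M0 [N m], fc [Hz])
total = sum(omega_square_spectrum(f, m0, fc) for m0, fc in subevents)
```

An azimuth-dependent corner frequency, as suggested above, would make `fc` a function of station azimuth to emulate directivity.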
Ohminato, T.; Chouet, B.A.; Dawson, P.; Kedar, S.
1998-01-01
We use data from broadband seismometers deployed around the summit of Kilauea Volcano to quantify the mechanism associated with a transient in the flow of magma feeding the east rift eruption of the volcano. The transient is marked by rapid inflation of the Kilauea summit peaking at 22 μrad 4.5 hours after the event onset, followed by slow deflation over a period of 3 days. Superimposed on the summit inflation is a series of sawtooth displacement pulses, each characterized by a sudden drop in amplitude lasting 5-10 s followed by an exponential recovery lasting 1-3 min. The sawtooth waveforms display almost identical shapes, suggesting a process involving the repeated activation of a fixed source. The particle motion associated with each sawtooth is almost linear, and its major swing shows compressional motion at all stations. Analyses of semblance and particle motion are consistent with a point source located 1 km beneath the northeast edge of the Halemaumau pit crater. To estimate the source mechanism, we apply a moment tensor inversion to the waveform data, assuming a point source embedded in a homogeneous half-space with compressional and shear wave velocities representative of the average medium properties at shallow depth under Kilauea. Synthetic waveforms are constructed by a superposition of impulse responses for six moment tensor components and three single force components. The origin times of individual impulses are distributed along the time axis at appropriately small, equal intervals, and their amplitudes are determined by least squares. In this inversion, the source time functions of the six tensor and three force components are determined simultaneously. We confirm the accuracy of the inversion method through a series of numerical tests. The results from the inversion show that the waveform data are well explained by a pulsating transport mechanism operating on a subhorizontal crack linking the summit reservoir to the east rift of Kilauea. 
The crack acts like a buffer in which a batch of fluid (magma and/or gas) accumulates over a period of 1-3 min before being rapidly injected into a larger reservoir (possibly the east rift) over a timescale of 5-10 s. The seismic moment and volume change associated with a typical batch of fluid are approximately 10^14 N m and 3000 m3, respectively. Our results also point to the existence of a single force component with amplitude of 10^9 N, which may be explained as the drag force generated by the flow of viscous magma through a narrow constriction in the flow path. The total volume of magma associated with the 4.5-hour-long activation of the pulsating source is roughly 500,000 m3, in good agreement with the integrated volume flow rate of magma estimated near the eruptive site.
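The least-squares step of such a moment-tensor inversion, solving for component amplitudes given impulse responses, can be sketched as a linear problem. The Green's-function matrix here is random noise standing in for real synthetics, with 6 moment-tensor and 3 single-force components:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the inversion: observed waveforms are a linear combination
# of per-component impulse responses (Green's functions), and the component
# amplitudes are recovered by least squares.
n_t, n_comp = 400, 9            # time samples; 6 moment-tensor + 3 force terms
G = rng.normal(size=(n_t, n_comp))          # stand-in impulse-response matrix
m_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2, -0.1, 0.05, 0.0, -0.02])
d = G @ m_true + rng.normal(scale=0.01, size=n_t)   # "observed" data + noise

# Least-squares solution m = argmin ||G m - d||^2
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The full inversion repeats this with one unknown amplitude per impulse origin time, recovering the six tensor and three force source-time functions simultaneously.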
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
Elementary Theoretical Forms for the Spatial Power Spectrum of Earth's Crustal Magnetic Field
NASA Technical Reports Server (NTRS)
Voorhies, C.
1998-01-01
The magnetic field produced by magnetization in Earth's crust and lithosphere can be distinguished from the field produced by electric currents in Earth's core because the spatial magnetic power spectrum of the crustal field differs from that of the core field. Theoretical forms for the spectrum of the crustal field are derived by treating each magnetic domain in the crust as the point source of a dipole field. The geologic null-hypothesis that such moments are uncorrelated is used to obtain the magnetic spectrum expected from a randomly magnetized, or unstructured, spherical crust of negligible thickness. This simplest spectral form is modified to allow for uniform crustal thickness, ellipsoidality, and the polarization of domains by a periodically reversing, geocentric axial dipole field from Earth's core. Such spectra are intended to describe the background crustal field. Magnetic anomalies due to correlated magnetization within coherent geologic structures may well be superimposed upon this background; yet representing each such anomaly with a single point dipole may lead to similar spectral forms. Results from attempts to fit these forms to observational spectra, determined via spherical harmonic analysis of MAGSAT data, are summarized in terms of amplitude, source depth, and misfit. Each theoretical spectrum reduces to a source factor multiplied by the usual exponential function of spherical harmonic degree n due to geometric attenuation with altitude above the source layer. The source factors always vary with n and are approximately proportional to n^3 for degrees 12 through 120. The theoretical spectra are therefore not directly proportional to an exponential function of spherical harmonic degree n. There is no radius at which these spectra are flat, level, or otherwise independent of n.
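The qualitative spectral form described above, a source factor multiplied by a geometric-attenuation factor, can be written schematically as follows. The exact attenuation exponent is patterned on the usual Lowes-Mauersberger convention and is an assumption, not a quotation from the paper:

```latex
% R_n: mean-square field contribution at spherical-harmonic degree n,
% observed at radius r above a source layer of mean radius a
R_n(r) = F(n)\,\left(\frac{a}{r}\right)^{2n+4},
\qquad F(n) \propto n^{3} \quad (12 \lesssim n \lesssim 120)
```

Because F(n) grows with n while the attenuation factor decays exponentially in n, the product is never a pure exponential, which is the paper's point about the absence of a radius where the spectrum is flat.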
NASA Astrophysics Data System (ADS)
Cowperthwaite, P. S.; Berger, E.; Rest, A.; Chornock, R.; Scolnic, D. M.; Williams, P. K. G.; Fong, W.; Drout, M. R.; Foley, R. J.; Margutti, R.; Lunnan, R.; Metzger, B. D.; Quataert, E.
2018-05-01
We present an empirical study of contamination in wide-field optical follow-up searches of gravitational wave sources from Advanced LIGO/Virgo using dedicated observations with the Dark Energy Camera. Our search covered ∼56 deg2, with two visits per night, in the i and z bands, followed by an additional set of griz images three weeks later to serve as reference images for subtraction. We achieve 5σ point-source limiting magnitudes of i ≈ 23.5 and z ≈ 22.4 mag in the coadded single-epoch images. We conduct a search for transient objects that mimic the i - z color behavior of both red (i - z > 0.5 mag) and blue (i - z < 0 mag) kilonova emission, finding 11 and 10 contaminants, respectively. Independent of color, we identify 48 transients of interest. Additionally, we leverage the rapid cadence of our observations to search for sources with characteristic timescales of ≈1 day and ≈3 hr, finding no potential contaminants. We assess the efficiency of our search with injected point sources, finding that we are 90% (60%) efficient when searching for red (blue) kilonova-like sources to a limiting magnitude of i ≲ 22.5 mag. Using our efficiencies, we derive sky rates for kilonova contaminants of R_red ≈ 0.16 deg^-2 and R_blue ≈ 0.80 deg^-2. The total contamination rate is R_all ≈ 1.79 deg^-2. We compare our results to previous optical follow-up efforts and comment on the outlook for gravitational wave follow-up searches as additional detectors (e.g., KAGRA, LIGO India) come online in the next decade.
Detecting small scale CO2 emission structures using OCO-2
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Eldering, Annmarie; Verhulst, Kristal R.; Miller, Charles E.; Nguyen, Hai M.; Oda, Tomohiro; O'Dell, Christopher; Rao, Preeti; Kahn, Brian; Crisp, David; Gunson, Michael R.; Sanchez, Robert M.; Ashok, Manasa; Pieri, David; Linick, Justin P.; Yuen, Karen
2016-04-01
Localized carbon dioxide (CO2) emission structures cover spatial domains of less than 50 km diameter and include cities and transportation networks, as well as fossil fuel production, upgrading and distribution infrastructure. Anthropogenic sources increasingly upset the natural balance between natural carbon sources and sinks. Mitigation of resulting climate change impacts requires management of emissions, and emissions management requires monitoring, reporting and verification. Space-borne measurements provide a unique opportunity to detect, quantify, and analyze small scale and point source emissions on a global scale. NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory (OCO-2), launched in July 2014, now leads the afternoon constellation of satellites (A-Train). Its continuous swath, 2 to 10 km in width and eight footprints across, can slice through coincident emission plumes and may provide momentary cross sections. First OCO-2 results demonstrate that we can detect localized source signals in the form of urban total-column-averaged CO2 enhancements of ~2 ppm against suburban and rural backgrounds. OCO-2's multi-sounding swath observing geometry reveals intra-urban spatial structures reflected in XCO2 data, previously unobserved from space. The transition from single-shot GOSAT soundings detecting urban/rural differences (Kort et al., 2012) to hundreds of soundings per OCO-2 swath opens up the path to future capabilities enabling urban tomography of greenhouse gases. For singular point sources like coal-fired power plants, we have developed proxy detections of plumes using bands of imaging spectrometers with sensitivity to SO2 in the thermal infrared (ASTER). This approach provides a means to automate plume detection with subsequent matching and mining of OCO-2 data for enhanced detection efficiency and validation. © California Institute of Technology
MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID
Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradual...
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point-source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for the origin of the X-ray point sources.
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain energy-dependent efficiencies. Monte Carlo simulations have been performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point-source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.
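A common way to carry a measured reference efficiency over to another source matrix is to scale it by the ratio of self-absorption factors. The exponential-attenuation slab form below is a standard approximation, not necessarily the exact equations used by the authors, and all numbers are hypothetical:

```python
import numpy as np

def self_absorption_factor(mu, thickness):
    """Average transmission through a uniformly emitting slab with linear
    attenuation coefficient mu (1/cm) and given thickness (cm):
    f = (1 - exp(-mu * t)) / (mu * t)."""
    x = mu * thickness
    return (1.0 - np.exp(-x)) / x

def volume_source_efficiency(eff_reference, mu_sample, mu_reference, t):
    """Transfer a reference (e.g. liquid) efficiency to another matrix
    (e.g. gas) via the ratio of self-absorption factors."""
    return (eff_reference * self_absorption_factor(mu_sample, t)
            / self_absorption_factor(mu_reference, t))

# Hypothetical numbers: the gas attenuates far less than the aqueous
# standard, so the transferred efficiency comes out slightly higher
eff_gas = volume_source_efficiency(0.02, mu_sample=0.001, mu_reference=0.2, t=5.0)
```

In practice `mu` would be evaluated at each gamma energy of interest, which is why the efficiency transfer is energy-dependent.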
Aryal, P; Molloy, J
2012-06-01
To show the effect of gold backing on dose rates for the USC #9 radioactive eye plaque, an I-125 source (IsoAid model IAI-125A) and its gold backing were modeled using the MCNP5 Monte Carlo code. A single iodine seed was simulated with and without gold backing, and dose rates were calculated in two orthogonal planes that bisect the center of the source. A 2×2 cm matrix of spherical tally points of radius 0.2 mm was created in a water phantom of 10 cm radius, and 2×10^8 particle histories were tracked. Dose differences with and without the gold backing were analyzed using Matlab. The gold backing produced a 3% increase in the dose rate near the source surface (<1 mm) relative to that without the backing, presumably caused by fluorescent photons from the gold. At distances between 1 and 2 cm, the gold backing reduced the dose rate by up to 12%, which we attribute to a lack of scatter resulting from attenuation by the gold. Dose differences were most pronounced in the radial direction near the source center but off axis: the dose decreased by 25%, 65% and 81% at 1, 2, and 3 mm off axis at a distance of 1 mm from the source surface. These effects were less pronounced in the perpendicular dimension near the source tip, where maximum dose decreases of 2% were noted. I-125 sources embedded directly into gold troughs display dose differences of 2-90% relative to doses without the gold backing. This is relevant for certain types of plaques used in the treatment of ocular melanoma: large dose reductions can be observed and may have implications for scleral dose reduction. © 2012 American Association of Physicists in Medicine.
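The Matlab comparison described above amounts to a point-by-point relative difference between two dose-rate maps. A minimal sketch, with made-up 2×2 values in place of the MCNP5 tallies:

```python
def percent_dose_difference(dose_with, dose_without):
    """Point-by-point relative dose change (%) caused by the backing,
    computed as 100*(with - without)/without over a 2D tally grid."""
    return [[100.0 * (w - wo) / wo for w, wo in zip(row_w, row_wo)]
            for row_w, row_wo in zip(dose_with, dose_without)]

# Illustrative (not simulated) dose-rate grids: near-surface enhancement,
# reduction farther out, mirroring the trends reported in the abstract.
with_backing = [[1.03, 0.90], [0.50, 0.40]]
without_backing = [[1.00, 1.00], [0.50, 0.50]]
diff = percent_dose_difference(with_backing, without_backing)
```

A positive entry indicates dose enhancement from fluorescent photons; a negative entry indicates attenuation/scatter loss.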
Alfonse, Lauren E; Garrett, Amanda D; Lun, Desmond S; Duffy, Ken R; Grgicak, Catherine M
2018-01-01
DNA-based human identity testing is conducted by comparing PCR-amplified polymorphic Short Tandem Repeat (STR) motifs from a known source with the STR profiles obtained from uncertain sources. Samples such as those found at crime scenes often yield signal that is a composite of incomplete STR profiles from an unknown number of unknown contributors, making interpretation an arduous task. To facilitate progress on STR interpretation challenges, we provide over 25,000 multiplex STR profiles produced from one to five known individuals at target levels ranging from one to 160 copies of DNA. The data, generated under 144 laboratory conditions, are classified by total copy number and contributor proportions. For the 70% of samples that were synthetically compromised, we report the level of DNA damage using quantitative and end-point PCR. In addition, we characterize the complexity of the signal by exploring the number of detected alleles in each profile. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaleil, A.; Le Flanchec, V.; Binet, A.; Nègre, J. P.; Devaux, J. F.; Jacob, V.; Millerioux, M.; Bayle, A.; Balleyguier, P.; Prazeres, R.
2016-12-01
An inverse Compton scattering source is under development at the ELSA linac of CEA, Bruyères-le-Châtel. Ultra-short X-ray pulses are produced by inverse Compton scattering of 30 ps laser pulses off relativistic electron bunches. The source will be able to operate in single-shot mode as well as in recurrent mode with 72.2 MHz pulse trains. Within this framework, an optical multipass system that multiplies the number of emitted X-ray photons in both regimes was designed in 2014, then implemented and tested on the ELSA facility in the course of 2015. The device is described from both geometrical and timing viewpoints. It is based on the idea of folding the laser optical path to pile up laser pulses at the interaction point, thus increasing the interaction probability. The X-ray output gain measurements obtained using this system are presented and compared with calculated expectations.
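As a rough guide to the photon energies such a source produces, the standard Thomson-limit estimate for a head-on, back-scattered geometry is E_x ≈ 4γ²E_laser. The electron energy and laser photon energy below are illustrative assumptions, not the actual ELSA parameters:

```python
M_E_C2_MEV = 0.511  # electron rest energy in MeV

def compton_xray_energy_kev(electron_energy_mev, laser_photon_ev):
    """Maximum photon energy from head-on inverse Compton scattering in the
    Thomson limit: E_x ~ 4 * gamma^2 * E_laser. Returns keV."""
    gamma = electron_energy_mev / M_E_C2_MEV
    return 4.0 * gamma**2 * laser_photon_ev / 1000.0  # eV -> keV

# Hypothetical example: a ~20 MeV beam and a ~1.2 eV (near-IR) laser photon.
e_x = compton_xray_energy_kev(20.0, 1.2)
```

This scaling is why even modest linac energies reach the keV X-ray range with an optical drive laser.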
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied, and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single-vane vortex generator, supersonic flow in a rectangular duct with a counter-rotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives results comparable to solutions computed using a gridded vane.
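The vane lift force at the heart of such a source-term model can be estimated with, e.g., the thin-airfoil approximation C_L ≈ 2πα. This is a hedged stand-in for the actual Wind-US force expression, which the abstract does not give; all numbers below are illustrative:

```python
import math

def vg_lift_force(rho, u, planform_area, incidence_deg):
    """Lift force (N) on a vane-type vortex generator, using the thin-airfoil
    approximation C_L ~ 2*pi*alpha as a stand-in for the model's force law."""
    alpha = math.radians(incidence_deg)
    cl = 2.0 * math.pi * alpha
    return 0.5 * rho * u**2 * planform_area * cl

def per_cell_force(total_force, n_grid_points):
    """Distribute the vane force uniformly over the user-specified grid range,
    as a simple way of turning a lumped force into a volumetric source term."""
    return total_force / n_grid_points

# Illustrative: air at sea level, 50 m/s, a 1 cm^2 vane at 16 degrees.
f_total = vg_lift_force(1.2, 50.0, 1e-4, 16.0)
```

Uniform distribution over the grid range is only one possible choice; production models typically weight the force by local cell volume.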
Working Memory Capacity as a Dynamic Process
Simmering, Vanessa R.; Perone, Sammy
2013-01-01
A well-known characteristic of working memory (WM) is its limited capacity. The source of such limitations, however, is a continued point of debate. Developmental research is positioned to address this debate by jointly identifying the source(s) of limitations and the mechanism(s) underlying capacity increases. Here we provide a cross-domain survey of studies and theories of WM capacity development, which reveals a complex picture: dozens of studies from 50 papers show nearly universal increases in capacity estimates with age, but marked variation across studies, tasks, and domains. We argue that the full pattern of performance cannot be captured through traditional approaches emphasizing single causes, or even multiple separable causes, underlying capacity development. Rather, we consider WM capacity as a dynamic process that emerges from a unified cognitive system flexibly adapting to the context and demands of each task. We conclude by enumerating specific challenges for researchers and theorists that will need to be met in order to move our understanding forward. PMID:23335902
Detectors Requirements for the ODIN Beamline at ESS
NASA Astrophysics Data System (ADS)
Morgano, Manuel; Lehmann, Eberhard; Strobl, Markus
The upcoming high-intensity pulsed spallation neutron source ESS, now under construction in Sweden, will provide unprecedented opportunities for neutron science worldwide. In particular, neutron imaging will benefit from the time structure of the source and its high brilliance. These features will unlock new opportunities at the imaging beamline ODIN, but only if suitable detectors are employed and, in some cases, upgraded. In this paper, we highlight the current state of the art for neutron imaging detectors, pointing out that, while no single presently existing detector can fulfill all the requirements needed to exploit the source to its limits, the wide range of applications of ODIN can be successfully covered by a suite of current state-of-the-art detectors. Furthermore, we speculate on improvements to current detector technologies that would expand the capabilities and application range of existing detectors, and we outline a strategy for achieving the best possible combined system for the foreseen day-1 operations of ODIN in 2019.
Microseismicity of Blawan hydrothermal complex, Bondowoso, East Java, Indonesia
NASA Astrophysics Data System (ADS)
Maryanto, S.
2018-03-01
Peak Ground Acceleration (PGA), hypocentre, and epicentre of the Blawan hydrothermal complex have been analysed in order to investigate its seismicity. PGA was determined using the Fukushima-Tanaka method, and the source locations of the microseismic events were estimated using the particle motion method. PGA ranged between 0.095 and 0.323 g and tends to be higher in formations containing uncompacted rocks. The seismic vulnerability index of the region indicated that zones with high PGA also have a high seismic vulnerability index, because the rocks making up these zones tend to be soft and of low density. For seismic sources around the area, the epicentre and hypocentre were estimated using the single-station seismic particle motion method. The stations used in this study were mobile stations identified as BL01, BL02, BL03, BL05, BL06, BL07 and BL08. The particle motion analysis yielded 44 epicentre points, with source depths of about 15-110 meters below the ground surface.
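The single-station particle motion method relies on the rectilinearity of body-wave motion: the principal axis of the horizontal-component covariance matrix points along the source back-azimuth (with an inherent 180° ambiguity). A minimal sketch, with no claim to match the paper's processing chain:

```python
import math

def backazimuth_deg(north, east):
    """Estimate the back-azimuth (degrees from north, modulo 180) from
    horizontal particle motion, as the orientation of the principal axis
    of the 2x2 covariance matrix of the N and E components."""
    n = len(north)
    mn = sum(north) / n
    me = sum(east) / n
    snn = sum((x - mn) ** 2 for x in north)
    see = sum((y - me) ** 2 for y in east)
    sne = sum((x - mn) * (y - me) for x, y in zip(north, east))
    # Principal-axis angle of [[snn, sne], [sne, see]], measured from north.
    ang = 0.5 * math.atan2(2.0 * sne, snn - see)
    return math.degrees(ang) % 180.0
```

In practice the 180° ambiguity is resolved with the vertical component, and the source depth follows from incidence angle and travel-time information.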
Evidence for a Population of High-Redshift Submillimeter Galaxies from Interferometric Imaging
NASA Astrophysics Data System (ADS)
Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Lai, Kamson; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Iono, Daisuke; Kohno, Kotaro; Kawabe, Ryohei; Hughes, David H.; Aretxaga, Itziar; Webb, Tracy; Martínez-Sansigre, Alejo; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.; Schinnerer, Eva; Smolčić, Vernesa
2007-12-01
We have used the Submillimeter Array to image a flux-limited sample of seven submillimeter galaxies, selected by the AzTEC camera on the JCMT at 1.1 mm, in the COSMOS field at 890 μm with ~2" resolution. All of the sources (two radio-bright and five radio-dim) are detected as single point sources at high significance (>6σ), with positions accurate to ~0.2" that enable counterpart identification at other wavelengths observed with similarly high angular resolution. All seven have IRAC counterparts, but only two have secure counterparts in deep HST ACS imaging. As compared to the two radio-bright sources in the sample, and those in previous studies, the five radio-dim sources in the sample (1) have systematically higher submillimeter-to-radio flux ratios, (2) have lower IRAC 3.6-8.0 μm fluxes, and (3) are not detected at 24 μm. These properties, combined with size constraints at 890 μm (θ <~ 1.2"), suggest that the radio-dim submillimeter galaxies represent a population of very dusty starbursts, with physical scales similar to local ultraluminous infrared galaxies, and with an average redshift higher than that of radio-bright sources.
Information Foraging for Perceptual Decisions
2016-01-01
We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) Information about the noisy motion stimulus grows to an asymptotic level that depends on the quality of the information source; (b) The limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) Little information is lost once a new source of information is being sampled; and (d) The point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. PMID:27819455
Generation and Radiation of Acoustic Waves from a 2D Shear Layer
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2000-01-01
A thin free shear layer containing an inflection point in the mean velocity profile is inherently unstable. Disturbances in the flow field can excite the unstable behavior of a shear layer, if the appropriate combination of frequencies and shear layer thicknesses exists, causing instability waves to grow. For other combinations of frequencies and thicknesses, these instability waves remain neutral in amplitude or decay in the downstream direction. A growing instability wave radiates noise when its phase velocity becomes supersonic relative to the ambient speed of sound. This occurs primarily when the mean jet flow velocity is supersonic. Thus, the small disturbances in the flow, which themselves may generate noise, have generated an additional noise source. It is the purpose of this problem to test the ability of CAA to compute this additional source of noise. The problem is idealized such that the exciting disturbance is a fixed known acoustic source pulsating at a single frequency. The source is placed inside of a 2D jet with parallel flow; hence, the shear layer thickness is constant. With the source amplitude small enough, the problem is governed by the following set of linear equations given in dimensional form.
Laser Sources for Generation of Ultrasound
NASA Technical Reports Server (NTRS)
Wagner, James W.
1996-01-01
Two laser systems have been built and used to demonstrate enhancements beyond current technology used for laser-based generation and detection of ultrasound. The first system consisted of ten Nd:YAG laser cavities coupled electronically and optically to permit sequential bursts of up to ten laser pulses directed either at a single point or configured into a phased array of sources. Significant enhancements in overall signal-to-noise ratio for laser ultrasound incorporating this new source system were demonstrated, using it first as a source of narrowband ultrasound and second as a phased-array source producing large enhanced signal displacements. A second laser system was implemented using ultrafast optical pulses from a Ti:Sapphire laser to study a new method for making laser-generated ultrasonic measurements of thin films with thicknesses on the order of hundreds of angstroms. Work by prior investigators showed that such measurements could be made based upon fluctuations in the reflectivity of thin films when they are stressed by an arriving elastic pulse. Research performed using equipment purchased under this program showed that a pulsed interferometric system could be used, as well as a piezoreflective detection system, to measure pulse arrivals even in thin films with very low piezoreflective coefficients.
Nitrate concentrations under irrigated agriculture
Zaporozec, A.
1983-01-01
In recent years, considerable interest has been expressed in the nitrate content of water supplies. The most notable toxic effect of nitrate is infant methemoglobinemia. The risk of this disease increases significantly at nitrate-nitrogen levels exceeding 10 mg/l. For this reason, this concentration has been established as a limit for drinking water in many countries. In natural waters, nitrate is a minor ionic constituent and seldom accounts for more than a few percent of the total anions. However, nitrate in a significant concentration may occur in the vicinity of some point sources such as septic tanks, manure pits, and waste-disposal sites. Non-point sources contributing to groundwater pollution are numerous and a majority of them are related to agricultural activities. The largest single anthropogenic input of nitrate into the groundwater is fertilizer. Even though it has not been proven that nitrogen fertilizers are responsible for much of nitrate pollution, they are generally recognized as the main threat to groundwater quality, especially when inefficiently applied to irrigated fields on sandy soils. The biggest challenge facing today's agriculture is to maintain the balance between the enhancement of crop productivity and the risk of groundwater pollution. © 1982 Springer-Verlag New York Inc.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell of the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, naturally suited to binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
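The binary coding described above maps each unit cell to a 0 or π reflection phase, and the far-field scattering pattern follows from a Fourier-type sum over the cells (which is why an IFFT can accelerate the evaluation inside the genetic-algorithm loop). A one-row sketch with an assumed half-wavelength cell spacing; the direct sum below stands in for the IFFT-based evaluation:

```python
import cmath
import math

def array_factor(coding_row, d_over_lambda=0.5, n_angles=181):
    """Far-field array factor magnitude of a 1-bit coded row of unit cells:
    cell m radiates with phase 0 or pi according to its code bit. Angles
    sweep -90..+90 degrees from broadside."""
    pattern = []
    for k in range(n_angles):
        theta = math.radians(k - 90)
        s = sum(
            cmath.exp(1j * (math.pi * bit
                            + 2.0 * math.pi * d_over_lambda * m * math.sin(theta)))
            for m, bit in enumerate(coding_row)
        )
        pattern.append(abs(s))
    return pattern

uniform = array_factor([0] * 8)        # all cells in phase: strong broadside beam
checker = array_factor([0, 1] * 4)     # alternating 0/pi: broadside null (diffusion-like)
```

Optimizing the bit pattern against a target pattern is exactly the coding-matrix optimization the abstract assigns to the genetic algorithm.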
Technologies for autonomous integrated lab-on-chip systems for space missions
NASA Astrophysics Data System (ADS)
Nascetti, A.; Caputo, D.; Scipinotti, R.; de Cesare, G.
2016-11-01
Lab-on-chip devices are ideal candidates for use in space missions where experiment automation, system compactness, limited weight and low sample and reagent consumption are required. Currently, however, most microfluidic systems require external desktop instrumentation to operate and interrogate the chip, thus strongly limiting their use as stand-alone systems. In order to overcome these limitations, our research group is currently working on the design and fabrication of "true" lab-on-chip systems that integrate in a single device all the analytical steps from sample preparation to detection, without the need for bulky external components such as pumps, syringes, radiation sources or optical detection systems. Three critical points can be identified in achieving "true" lab-on-chip devices: sample handling, analytical detection and signal transduction. For each critical point, feasible solutions are presented and evaluated. The proposed microfluidic actuation and control are based on electrowetting on dielectrics, autonomous capillary networks and active valves. Analytical detection based on highly specific chemiluminescent reactions is used to avoid external radiation sources. Finally, the integration on the same chip of thin-film sensors based on hydrogenated amorphous silicon is discussed, showing practical results achieved in different sensing tasks.
Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.
Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe
2010-01-01
There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities, and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by Ca-SO4-HCO3 type ground water and relatively enriched values of δD, δ18O, and δ34S(SO4). Diffuse sources are characterized by Ca-Na-HCO3 type ground water and more depleted values of δD, δ18O, and δ34S(SO4). Values of δD and δ18O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. The two sources also differ in δ34S(SO4), presumably due to different types of mineralization or isotopic zonality in the deposits. In Principal Component Analysis (PCA), the first principal component (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse sources and point sources (mean values 0.21 mg/L and 0.31 mg/L, respectively, over the years 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is because ground water is extracted from wells tapping the regional flow system, whereas wells located in the proximity of mine tailings are generally not used for water supply.
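The PC1 scores mentioned above come from a standard PCA on standardized hydrochemical variables. A minimal stand-in using power iteration on the covariance matrix; the actual variables and loadings are the paper's and are not reproduced here:

```python
def pc1_scores(data):
    """First-principal-component scores for rows of `data` (samples x variables),
    via power iteration on the covariance matrix of standardized variables."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5 for j in range(p)]
    z = [[(row[j] - means[j]) / sds[j] for j in range(p)] for row in data]
    cov = [[sum(zi[a] * zi[b] for zi in z) / n for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(100):  # power iteration converges to the leading eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(zi[j] * v[j] for j in range(p)) for zi in z]
```

Samples dominated by sulfide oxidation (high SO4, high Ca from carbonate neutralization) would score high on such a component, as the point-source samples do in the paper.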
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
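Euclidean behaviour means the differential counts scale as dN/dS ∝ S^-2.5, so counts normalized by S^2.5 are flat. A sketch of how such normalized counts can be binned from a flux list; the fluxes, bin edges, and survey area below are illustrative, not the survey's:

```python
def euclidean_normalized_counts(fluxes_jy, bin_edges_jy, survey_area_sr):
    """S^2.5-normalized differential source counts per flux bin.
    A flat run of values indicates Euclidean behaviour (dN/dS ~ S^-2.5)."""
    out = []
    for lo, hi in zip(bin_edges_jy, bin_edges_jy[1:]):
        n = sum(1 for s in fluxes_jy if lo <= s < hi)
        s_mid = (lo * hi) ** 0.5                    # geometric bin centre
        dn_ds = n / ((hi - lo) * survey_area_sr)    # Jy^-1 sr^-1
        out.append(s_mid ** 2.5 * dn_ds)
    return out

# Toy catalogue: 10 faint sources near 0.5 Jy, 2 bright ones near 5 Jy.
counts = euclidean_normalized_counts([0.5] * 10 + [5.0] * 2, [0.1, 1.0, 10.0], 1.0)
```

Real analyses would additionally correct each bin for the position-dependent sensitivity that the abstract describes.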
Tracking Helicopters with a Seismic Array
NASA Astrophysics Data System (ADS)
Eibl, Eva P. S.; Lokmer, Ivan; Bean, Christopher J.; Akerlie, Eggert
2015-04-01
We observed that the pressure or acoustic wave created by the rotor blades of a helicopter can couple to the ground even at 30 km distance, where it creates a signal strong enough to be detected by a seismometer. The signal is harmonic tremor with a fundamental frequency downgliding with an inflection point at, e.g., 14 Hz, and two equally spaced overtones up to the Nyquist frequency of 50 Hz. No difference in amplitude between the fundamental frequency and the higher harmonics was observed. Such a signature is a consequence of the regularly repeating pressure pulses generated by the helicopter's rotor blades. The signal was recorded by a seven-station broadband array with an aperture of 1.6 km. Our spacing is close enough to record the signal at all stations and far enough to observe traveltime differences. The separation of the spectral lines corresponds to the time interval between the repeating sources. The highlighted harmonics contain information about the spectral content of the single source, as our signal corresponds to the convolution of an infinite comb function and a single pulse. As we see all harmonics with the same amplitude up to the Nyquist frequency, we can deduce that the frequency content of the single pulse is flat, i.e. it is effectively a delta function up to the Nyquist frequency. We perform a detailed spectral and location analysis of the signal, and compare our results with the known information on the helicopter's speed, location, blade rotation frequency and number of blades. This analysis is based on the characteristic shape of the curve, i.e. the speed of the gliding, the minimum and maximum fundamental frequency, the amplitudes at the inflection points at different stations, and the traveltimes deduced from the inflection points at different stations. This observation has an educational value, because the same principle could be used for the analysis of volcanic harmonic tremor.
Harmonic volcanic tremor usually has fundamental frequencies below 10 Hz but frequency downgliding and upgliding up to 30 Hz was observed e.g. on Redoubt volcano. Due to the characteristic shape of the helicopter signal it is nevertheless rather unlikely that this signal is mistaken for volcanic tremor. The helicopter gives us a robust way of testing the method and possible application of the method to volcanic harmonic tremor.
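The observed downglide follows from the blade-passing fundamental f0 = N_blades × f_rotor being Doppler-shifted as the helicopter's radial velocity changes sign on passing the station. The blade count and rotor rate below are illustrative values chosen to reproduce a 14 Hz inflection point, not the actual helicopter's specifications:

```python
def doppler_fundamental(n_blades, rotor_hz, v_radial, c=340.0):
    """Observed blade-passing fundamental: f0 = N * f_rotor shifted by the
    Doppler effect. Positive v_radial = source approaching, so the observed
    frequency glides down through f0 as the helicopter passes."""
    f0 = n_blades * rotor_hz
    return f0 * c / (c - v_radial)

# Illustrative: 4 blades at 3.5 rev/s gives f0 = 14 Hz at closest approach.
f_approach = doppler_fundamental(4, 3.5, 50.0)   # approaching at 50 m/s
f_recede = doppler_fundamental(4, 3.5, -50.0)    # receding at 50 m/s
```

The inflection point of the glide marks the moment of closest approach, where v_radial = 0 and the true f0 is observed, which is what makes the helicopter a useful calibration source for tremor-location methods.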
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a "smoking gun" signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern-sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ events interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (~100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consist only of atmospheric background.
X-Pinch And Its Applications In X-ray Radiograph
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Xiaobing; Wang Xinxin; Liu Rui
2009-07-07
An X-pinch device and the related diagnostics of x-ray emission from the X-pinch are briefly described. Time-resolved x-ray measurements with photoconducting diodes show that the x-ray pulse usually consists of two subnanosecond peaks separated by about 0.5 ns. Consistent with these two peaks of the x-ray pulse, two point x-ray sources, with sizes ranging from 100 μm down to 5 μm depending on the cut-off x-ray photon energy, were usually observed in the pinhole pictures. The X-pinch was used as an x-ray source for backlighting the electrical explosion of a single wire and the evolution of the X-pinch itself, and for phase-contrast imaging of soft biological objects such as a small shrimp and a mosquito.
Characterization of the acoustic field generated by a horn shaped ultrasonic transducer
NASA Astrophysics Data System (ADS)
Hu, B.; Lerch, J. E.; Chavan, A. H.; Weber, J. K. R.; Tamalonis, A.; Suthar, K. J.; DiChiara, A. D.
2017-09-01
A horn shaped Langevin ultrasonic transducer used in a single axis levitator was characterized to better understand the role of the acoustic profile in establishing stable traps. The method of characterization included acoustic beam profiling performed by raster scanning an ultrasonic microphone as well as finite element analysis of the horn and its interface with the surrounding air volume. The results of the model are in good agreement with measurements and demonstrate the validity of the approach for both near and far field analyses. Our results show that this style of transducer produces a strong acoustic beam with a total divergence angle of 10°, a near-field point close to the transducer surface and a virtual sound source. These are desirable characteristics for a sound source used for acoustic trapping experiments.
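For context, the far-field divergence of a baffled circular piston follows the textbook first-null relation sin θ = 1.22 λ/D, and the near-field length scales as D²/(4λ). These are generic estimates, not the paper's measured 10° figure, and the frequency and aperture below are assumptions:

```python
import math

def piston_divergence_deg(freq_hz, aperture_m, c=343.0):
    """Full first-null divergence angle (degrees) of a baffled circular piston:
    sin(theta) = 1.22 * lambda / D. A textbook far-field estimate only."""
    lam = c / freq_hz
    return 2.0 * math.degrees(math.asin(min(1.0, 1.22 * lam / aperture_m)))

def near_field_length_m(freq_hz, aperture_m, c=343.0):
    """Rayleigh (near-field) distance N = D^2 / (4*lambda) for the same piston."""
    lam = c / freq_hz
    return aperture_m ** 2 / (4.0 * lam)

# Illustrative: a 40 kHz source with a 20 cm effective aperture.
div = piston_divergence_deg(40000.0, 0.2)
```

Larger apertures and higher frequencies both narrow the beam, which is why horn geometry matters for building a tight acoustic trap.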
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
NASA Astrophysics Data System (ADS)
Heine, Frank; Schwander, Thomas; Lange, Robert; Smutny, Berry
2006-04-01
Tesat-Spacecom has developed a series of fiber-coupled single-frequency lasers for space applications, ranging from onboard metrology for space-borne FTIR spectrometers to step-tunable seed lasers for LIDAR applications. The cw seed laser developed for the ESA AEOLUS mission shows an Allan variance of 3×10⁻¹¹ over time intervals from 1 s up to 1000 s. Q-switched lasers with stable beam pointing under space environments are another field of development. One important aspect of a space-borne laser system is a reliable fiber-coupled laser diode pump source around 808 nm. A dedicated development concerning chip design and packaging yielded an MTTF (mean time to failure) of 5×10⁶ h for the broad-area emitters. Qualification and performance test results for the different laser assemblies will be presented, together with their application in the different space programs.
47 CFR 68.105 - Minimum point of entry (MPOE) and demarcation point.
Code of Federal Regulations, 2010 CFR
2010-10-01
... be either the closest practicable point to where the wiring crosses a property line or the closest practicable point to where the wiring enters a multiunit building or buildings. The reasonable and... situations. (c) Single unit installations. For single unit installations existing as of August 13, 1990, and...
Astatine-211 imaging by a Compton camera for targeted radiotherapy.
Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi
2018-05-24
Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required for optimization of treatment strategies. We proposed to image ²¹¹At with high-energy photons to overcome some problems in conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment of a point-like ²¹¹At source using a Compton camera, and demonstrated the capability of imaging ²¹¹At with high-energy photons for the first time. Copyright © 2018 Elsevier Ltd. All rights reserved.
High linearity current commutating passive mixer employing a simple resistor bias
NASA Astrophysics Data System (ADS)
Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan
2013-03-01
A high linearity current commutating passive mixer comprising the mixing cell and a transimpedance amplifier (TIA) is introduced. It employs the resistor in the TIA to reduce the source voltage and the gate voltage of the mixing cell. The optimum linearity and the maximum symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. Testing shows that it achieves an input third-order intercept point of 13.32 dBm, a conversion gain of 5.52 dB, and a single-sideband noise figure of 20 dB.
The size effects upon shock plastic compression of nanocrystals
NASA Astrophysics Data System (ADS)
Malygin, G. A.; Klyavin, O. V.
2017-10-01
For the first time, a theoretical analysis of size effects upon the shock plastic compression of nanocrystals is implemented in the context of a dislocation kinetic approach based on the equations and relationships of dislocation kinetics. The yield point of crystals τy is established as a quantitative function of their cross-section size D and the shock deformation rate ε̇, as τy ∝ ε̇^(2/3)D. This dependence is valid in the case of elastic stress relaxation by emission of dislocations from single-pole Frank-Read sources near the crystal surface.
Preparation of VO2 thin film and its direct optical bit recording characteristics.
Fukuma, M; Zembutsu, S; Miyazawa, S
1983-01-15
Vanadium dioxide (VO2) film, which has nearly the same transition point as the single crystal, has been obtained by reactive evaporation of vanadium on glass and subsequent annealing in N2 gas. Relations between the optical properties of VO2 film and its preparation conditions are presented. We performed direct optical bit recording on VO2 film using a laser diode as the light source. The threshold recording energy and bit density are 2 mJ/cm² and 350 bits/mm, respectively. We also doped the film with tungsten to lower the VO2 transition temperature.
2014-11-13
It is about two weeks later in Inca City, and the season is officially spring. Numerous changes have occurred. Large blotches of dust cover the araneiforms. Dark spots on the ridge show places where the seasonal polar ice cap has ruptured, releasing gas and fine material from the surface below. At the bottom of the image, fans point in more than one direction from a single source, showing that the wind changed direction while gas and dust were flowing out. Was the flow continuous, or has the vent opened and closed? http://photojournal.jpl.nasa.gov/catalog/PIA18893
PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Le, Jette V; Pedersen, Line B; Riisgaard, Helle; Lykkegaard, Jesper; Nexøe, Jørgen; Lemmergaard, Jeanette; Søndergaard, Jens
2016-12-01
To assess general practitioners' (GPs') information-seeking behaviour and perceived importance of sources of scientific medical information and to investigate associations with GP characteristics. A national cross-sectional survey was distributed electronically in December 2013. Danish general practice. A population of 3440 GPs (corresponding to approximately 96% of all Danish GPs). GPs' use and perceived importance of information sources. Multilevel mixed-effects logit models were used to investigate associations with GP characteristics after adjusting for relevant covariates. A total of 1580 GPs (46.4%) responded to the questionnaire. GPs' information-seeking behaviour is associated with gender, age and practice form. Single-handed GPs use their colleagues as an information source significantly less than GPs working in partnership practices and they do not use other sources more frequently. Compared with their younger colleagues, GPs aged above 44 years are less likely to seek information from colleagues, guidelines and websites, but more likely to seek information from medical journals. Male and female GPs seek information equally frequently. However, whereas male GPs are more likely than female GPs to find that pharmaceutical sales representative and non-refundable CME meetings are important, they are less likely to find that colleagues, refundable CME meetings, guidelines and websites are important. Results from this study indicate that GP characteristics should be taken into consideration when disseminating scientific medical information, to ensure that patients receive medically updated, high-quality care. KEY POINTS Research indicates that information-seeking behaviour is associated with GP characteristics. Further insights could provide opportunities for targeting information dissemination strategies. Single-handed GPs seek information from colleagues less frequently than GPs in partnerships and do not use other sources more frequently. 
GPs aged above 44 years do not seek information as frequently as their younger colleagues and prefer other information sources. Male and female GPs seek information equally frequently, but do not consider information sources equally important in keeping medically updated.
In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
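The filter-then-sum prescription in this abstract — matched-filter each frame with its own PSF, then combine with noise-dependent weights — can be sketched as follows. This is a minimal illustration, not the published algorithm: the FFT-based cross-correlation, the simple inverse-variance weights, and the function name are simplifying assumptions.

```python
import numpy as np

def matched_filter_coadd(images, psfs, variances):
    """Coadd frames by matched-filtering each one with its own PSF,
    then summing with inverse-variance weights (a sketch of the
    filter-then-sum scheme; the published normalization may differ)."""
    coadd = np.zeros_like(images[0], dtype=float)
    for img, psf, var in zip(images, psfs, variances):
        # Matched filter: cross-correlate the image with its PSF via FFT.
        filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(psf))))
        coadd += filtered / var  # weight by the per-image noise variance
    return coadd
```

With a delta-function PSF and unit variances this reduces to a plain sum of the input frames, which is a quick sanity check on the implementation.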
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud I.; Badawi, M. S.; Ruskov, I. N.; El-Khatib, A. M.; Grozdanov, D. N.; Thabet, A. A.; Kopatch, Yu. N.; Gouda, M. M.; Skoy, V. R.
2015-01-01
Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector material, the end-cap, and the other materials between the gamma-source and the detector, form the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
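The solid-angle-ratio idea behind efficiency transfer can be illustrated with the textbook on-axis formula for a circular detector face. This sketch deliberately omits the attenuation corrections the abstract describes; `transfer_efficiency` and its arguments are illustrative stand-ins, not the authors' NAM.

```python
import math

def solid_angle(h, R):
    """Geometric solid angle subtended by a circular detector face of
    radius R at an on-axis point source a distance h away."""
    return 2.0 * math.pi * (1.0 - h / math.sqrt(h * h + R * R))

def transfer_efficiency(eff_ref, h_ref, h_new, R):
    """Efficiency-transfer estimate: scale a reference full-energy peak
    efficiency by the ratio of geometric solid angles (the attenuation
    terms of the full ET method are omitted in this sketch)."""
    return eff_ref * solid_angle(h_new, R) / solid_angle(h_ref, R)
```

Moving a point source farther from the detector shrinks the solid angle, so the transferred efficiency drops accordingly; at h = 0 the formula recovers the 2π half-space limit.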
All-semiconductor high-speed akinetic swept-source for OCT
NASA Astrophysics Data System (ADS)
Minneman, Michael P.; Ensher, Jason; Crawford, Michael; Derickson, Dennis
2011-12-01
A novel swept-wavelength laser for optical coherence tomography (OCT) using a monolithic semiconductor device with no moving parts is presented. The laser is a Vernier-Tuned Distributed Bragg Reflector (VT-DBR) structure exhibiting a single longitudinal mode. All-electronic wavelength tuning is achieved at a 200 kHz sweep repetition rate, 20 mW output power, over a 100 nm sweep width, and with a coherence length longer than 40 mm. OCT point-spread functions with 45-55 dB dynamic range are demonstrated; lasers at 1550 nm, and now 1310 nm, have been developed. Because the laser's long-term tuning stability allows for electronic sample trigger generation at equal k-space intervals (electronic k-clock), the laser does not need an external optical k-clock for measurement interferometer sampling. The non-resonant, all-electronic tuning allows for continuously adjustable sweep repetition rates from mHz to hundreds of kHz. Repetition rate duty cycles are continuously adjustable from single-trigger sweeps to over 99% duty cycle. The source includes a monolithically integrated power-leveling feature allowing flat or Gaussian power vs. wavelength profiles. Laser fabrication is based on reliable semiconductor wafer-scale processes, leading to low and rapidly decreasing cost of manufacture.
Desmarchelier, Cristian
2010-06-01
Despite the advent of biotechnology and modern methods of combinatorial chemistry and rational drug design, nature still plays a surprisingly important role as a source of new pharmaceutical compounds. These are marketed either as herbal drugs or as single active ingredients. South American tropical ecosystems (or the Neotropics) encompass one-third of the botanical biodiversity of the planet. For centuries, indigenous peoples have been using plants for healing purposes, and scientists are making considerable efforts in order to validate these uses from a pharmacological/phytochemical point of view. However, and despite the unique plant diversity in the region, very few natural pharmaceutical ingredients from this part of the world have reached the markets in industrialized countries. The present review addresses the importance of single active ingredients and herbal drugs from South American flora as natural ingredients for pharmaceuticals; it highlights the most relevant cases in terms of species of interest; and discusses the key entry barriers for these products in industrialized countries. It explores the reasons why, in spite of the region's competitive advantages, South American biodiversity has been a poor source of natural ingredients for the pharmaceutical industry. (c) 2010 John Wiley & Sons, Ltd.
Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth
NASA Astrophysics Data System (ADS)
Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.
2017-09-01
With the rapid development of LBS (Location-Based Services), the demand for commercialization of indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithms, and the cost of positioning are difficult to balance simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on low-energy Bluetooth. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic knowledge-accumulation process rather than a single positioning run. The scheme uses cheap equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial deployment.
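Two of the steps above — range estimation from a BLE RSSI reading and gross-error elimination — can be sketched as follows. The path-loss constants and the median-based outlier rule are assumed stand-ins for illustration, not the knowledge-based method of the paper.

```python
import statistics

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: estimate range (m) from a BLE RSSI
    reading. tx_power (RSSI at 1 m) and path-loss exponent n are
    illustrative values, not parameters taken from the paper."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

def reject_outliers(readings, k=2.0):
    """Drop RSSI samples far from the median -- a simple stand-in for
    the paper's knowledge-based gross-error elimination step."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1.0
    return [r for r in readings if abs(r - med) <= k * mad]
```

In use, one would filter each beacon's RSSI samples with `reject_outliers` before converting the surviving readings to ranges for trilateration.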
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is part of our life and is needed by all organisms. Over time, growing human demand has degraded water quality. Surface water is contaminated in various ways, by point sources and non-point sources. Point sources are distinguishable origins such as drains or factory outfalls, whereas non-point source pollution always occurs as a mixture of pollutant elements. This paper reviews the occurrence of these contaminations and the effects observed around us. Pollutant factors of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most of the effects of contaminated surface water fall on public health as well as on the environment.
Deterministic and storable single-photon source based on a quantum memory.
Chen, Shuai; Chen, Yu-Ao; Strassel, Thorsten; Yuan, Zhen-Sheng; Zhao, Bo; Schmiedmayer, Jörg; Pan, Jian-Wei
2006-10-27
A single-photon source is realized with a cold atomic ensemble (87Rb atoms). A single excitation, written in an atomic quantum memory by Raman scattering of a laser pulse, is retrieved deterministically as a single photon at a predetermined time. It is shown that the production rate of single photons can be enhanced considerably by a feedback circuit while the single-photon quality is conserved. Such a single-photon source is well suited for future large-scale realization of quantum communication and linear optical quantum computation.
2011 Radioactive Materials Usage Survey for Unmonitored Point Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturgeon, Richard W.
This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information.
Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. 
They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. We found that the results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Yi-Qun, Xu; Wei, Liu; Xin-Ye, Ni
2016-10-01
This study employs dual-source computed tomography single-energy spectrum imaging to evaluate contrast-agent artifact removal and the resulting improvement in the computational accuracy of radiotherapy treatment planning. A phantom containing the contrast agent was used in all experiments. The amounts of iodine in the contrast agent were 30, 15, 7.5, and 0.75 g/100 mL. Two images with different energy values were scanned and captured using dual-source computed tomography (80 and 140 kV). To obtain a fused image, the two groups of images were processed using single-energy spectrum imaging technology. The Pinnacle planning system was used to measure the computed tomography values of the contrast agent and the surrounding phantom tissue. The differences between radiotherapy treatment plans based on the 80-kV, 140-kV, and energy-spectrum images were analyzed. For the image with high iodine concentration, the quality of the energy spectrum-fused image was the highest, followed by that of the 140-kV image; that of the 80-kV image was the worst. The difference in the radiotherapy treatment results among the 3 models was significant. When the concentration of iodine was 30 g/100 mL and the distance from the contrast agent to the dose measurement point was 1 cm, the deviation values (P) were 5.95% and 2.20% when treatment planning was based on the 80-kV and 140-kV images, respectively. When the concentration of iodine was 15 g/100 mL, the deviation values (P) were -2.64% and -1.69%. Dual-source computed tomography single-energy spectral imaging technology can remove contrast agent artifacts to improve the calculated dose accuracy in radiotherapy treatment planning. © The Author(s) 2015.
Noradilah, Samseh Abdullah; Lee, Ii Li; Anuar, Tengku Shahrul; Salleh, Fatmah Md; Abdul Manap, Siti Nor Azreen; Mohd Mohtar, Noor Shazleen Husnie; Azrul, Syed Muhamad; Abdullah, Wan Omar
2016-01-01
In the tropics, there are too few studies on the isolation of Blastocystis sp. subtypes from water sources; in addition, there is an absence of reported studies on the occurrence of Blastocystis sp. subtypes in water during different seasons. Therefore, this study aimed to determine the occurrence of Blastocystis sp. subtypes, during wet and dry seasons, in river water and other water sources that drain the vicinity of aboriginal communities highly endemic for intestinal parasitic infections. Water samples were collected from six sampling points of Sungai Krau (K1–K6), a point at Sungai Lompat (K7), and other water sources around the aboriginal villages. The water samples were collected during both the wet and dry seasons. Filtration of the water samples was carried out using a flatbed membrane filtration system. DNA extracted from the concentrated water sediment was subjected to single-round polymerase chain reaction, and positive PCR products were subjected to sequencing. All samples were also subjected to filtration and cultured on membrane lactose glucuronide agar for the detection of faecal coliforms. During the wet season, Blastocystis sp. ST1, ST2 and ST3 were detected in river water samples. Blastocystis sp. ST3 occurrence was sustained in the river water samples during the dry season; however, Blastocystis sp. ST1 and ST2 were absent during the dry season. Water samples collected from various water sources showed contamination with Blastocystis sp. ST1, ST2, ST3 and ST4 during the wet season and Blastocystis sp. ST1, ST3, ST8 and ST10 during the dry season. Water collected from all river sampling points during both seasons showed growth of Escherichia coli and Enterobacter aerogenes, indicating faecal contamination. In this study, Blastocystis sp. ST3 is suggested as the most robust and resistant subtype, able to survive in any adverse environmental condition.
Restriction and control of human and animal faecal contaminations to the river and other water sources shall prevent the transmission of Blastocystis sp. to humans and animals in this aboriginal community. PMID:27761331
NASA Astrophysics Data System (ADS)
Burinskii, Alexander
2016-01-01
It is known that the gravitational and electromagnetic fields of an electron are described by the ultra-extreme Kerr-Newman (KN) black hole solution with an extremely high spin/mass ratio. This solution is singular and has a topological defect, the Kerr singular ring, which may be regularized by introducing a solitonic source based on the Higgs mechanism of symmetry breaking. The source represents a domain wall bubble interpolating between the flat region inside the bubble and the external KN solution. It was shown recently that the source represents a supersymmetric bag model, and its structure is unambiguously determined by the Bogomolnyi equations. The Dirac equation is embedded inside the bag consistently with the twistor structure of the Kerr geometry, and acquires its mass from the Yukawa coupling with the Higgs field. The KN bag turns out to be flexible, and for the parameters of an electron it takes the form of a very thin disk with a circular string placed along the sharp boundary of the disk. Excitation of this string by a traveling wave creates a circulating singular pole, indicating that the bag-like source of the KN solution unifies the dressed and point-like electron in a single bag-string-quark system.
Nakano, M.; Kumagai, H.; Chouet, B.A.
2003-01-01
We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events whose waveforms are characterized by simple decaying and nearly monochromatic oscillations with frequencies in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components common to all the events analyzed, suggesting a repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity, in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake. © 2003 Elsevier Science B.V. All rights reserved.
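The decaying, nearly monochromatic LP waveform described above is well suited to autoregressive modeling. As an illustrative sketch (not the authors' implementation), an AR(2) model fit by least squares to a single damped oscillation recovers its frequency and decay rate from the complex pole pair:

```python
import numpy as np

def fit_damped_oscillation(x, dt):
    """Fit an AR(2) model x[t] = a1*x[t-1] + a2*x[t-2] by least squares and
    convert the complex pole pair to frequency (Hz) and decay rate (1/s)."""
    X = np.column_stack([x[1:-1], x[:-2]])   # lagged regressors
    y = x[2:]
    a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])        # roots of z^2 - a1*z - a2
    p = poles[np.argmax(poles.imag)]         # upper-half-plane pole
    freq = np.angle(p) / (2.0 * np.pi * dt)
    decay = -np.log(np.abs(p)) / dt
    return freq, decay

# Synthetic LP-like signal: 2 Hz oscillation decaying at 0.5 s^-1
dt = 0.01
t = np.arange(0, 10, dt)
x = np.exp(-0.5 * t) * np.cos(2 * np.pi * 2.0 * t)
f, g = fit_damped_oscillation(x, dt)   # f ≈ 2.0 Hz, g ≈ 0.5 s^-1
```

A sampled damped cosine satisfies this recursion exactly, so the noiseless fit recovers both parameters to numerical precision; with real LP records the AR order and noise handling would of course matter.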
Point and Condensed Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian
2017-01-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc. can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10^-15 erg cm^-2 s^-1. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
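Both programs build on the stochastic point-source spectrum, in which the Brune corner frequency ties stress drop to seismic moment; this link is why a higher stress drop in SMSIM can partially mimic finite-fault effects. A sketch of the standard Brune formulas (not code from SMSIM or EXSIM):

```python
import numpy as np

def corner_frequency(beta_km_s, stress_bars, moment_dyne_cm):
    """Brune corner frequency (Hz): fc = 4.9e6 * beta * (dsigma / M0)**(1/3),
    with beta in km/s, stress drop in bars, moment in dyne-cm."""
    return 4.9e6 * beta_km_s * (stress_bars / moment_dyne_cm) ** (1.0 / 3.0)

def brune_spectrum(f, fc):
    """Omega-squared source spectral shape (unnormalized): f^2 / (1 + (f/fc)^2)."""
    f = np.asarray(f, dtype=float)
    return f**2 / (1.0 + (f / fc) ** 2)

# ~M4.6 event (M0 = 1e23 dyne-cm), 100-bar stress drop, beta = 3.7 km/s
fc = corner_frequency(3.7, 100.0, 1e23)   # ≈ 1.81 Hz
```

The spectrum rises as f^2 below fc and flattens above it; raising the stress drop raises fc and hence the high-frequency spectral level, which is the adjustment discussed in the text.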
High-resolution seismic-reflection data offshore of Dana Point, southern California borderland
Sliter, Ray W.; Ryan, Holly F.; Triezenberg, Peter J.
2010-01-01
The U.S. Geological Survey collected high-resolution shallow seismic-reflection profiles in September 2006 in the offshore area between Dana Point and San Mateo Point in southern Orange and northern San Diego Counties, California. Reflection profiles were located to image folds and reverse faults associated with the San Mateo fault zone and high-angle strike-slip faults near the shelf break (the Newport-Inglewood fault zone) and at the base of the slope. Interpretations of these data were used to update the USGS Quaternary fault database and in shaking hazard models for the State of California developed by the Working Group for California Earthquake Probabilities. This cruise was funded by the U.S. Geological Survey Coastal and Marine Catastrophic Hazards project. Seismic-reflection data were acquired aboard the R/V Sea Explorer, which is operated by the Ocean Institute at Dana Point. A SIG ELC820 minisparker seismic source and a SIG single-channel streamer were used. More than 420 km of seismic-reflection data were collected. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats.
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.
2002-01-01
The hard X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Possibilities that were considered included X-ray binaries, extended far-infrared light upscattered via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point-source analyses of comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.
NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the infinite-space point-source solution is known. Specific cases were worked out and shown to coincide with well-known solutions in the literature.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...
Better Assessment Science Integrating Point and Non-point Sources (BASINS)
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.
Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.
1978-01-01
On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta, due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)
Unidentified point sources in the IRAS minisurvey
NASA Technical Reports Server (NTRS)
Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.
1984-01-01
Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.
Herschel Key Program Heritage: a Far-Infrared Source Catalog for the Magellanic Clouds
NASA Astrophysics Data System (ADS)
Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta; Babler, Brian; Engelbracht, Charles W.; Gordon, Karl; Hony, Sacha; Misselt, Karl; Montiel, Edward; Okumura, Koryo; Panuzzo, Pasquale; Roman-Duval, Julia; Sauvage, Marc; Boyer, Martha L.; Chen, C.-H. Rosie; Indebetouw, Remy; Matsuura, Mikako; Oliveira, Joana M.; Srinivasan, Sundar; van Loon, Jacco Th.; Whitney, Barbara; Woods, Paul M.
2014-12-01
Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the FIR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. 
There are far fewer far-IR bright YSOs in the SMC than in the LMC, due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint for accurate FIR photometry and are unclassified.
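Band-merged catalogs like these are built by positional matching between source lists. As a hedged illustration (not the HERITAGE pipeline's actual matching code), a minimal nearest-neighbor crossmatch within a fixed radius, using a small-angle flat-sky approximation, can be sketched as:

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec):
    """Match each catalog-1 source to its nearest catalog-2 source within a
    radius (coordinates in degrees, radius in arcsec), using a flat-sky
    approximation with a cos(dec) correction. Returns catalog-2 indices,
    -1 where no match is found."""
    ra1, dec1 = np.asarray(ra1), np.asarray(dec1)
    ra2, dec2 = np.asarray(ra2), np.asarray(dec2)
    matches = np.full(len(ra1), -1, dtype=int)
    for i in range(len(ra1)):
        dra = (ra2 - ra1[i]) * np.cos(np.radians(dec1[i]))
        ddec = dec2 - dec1[i]
        sep = np.hypot(dra, ddec) * 3600.0   # separation in arcsec
        j = np.argmin(sep)
        if sep[j] <= radius_arcsec:
            matches[i] = j
    return matches

# Toy LMC-like coordinates: the second source has no counterpart within 2"
m = crossmatch([80.0, 81.0], [-68.0, -68.0],
               [80.0001, 85.0], [-68.0001, -68.0], 2.0)
# m → [0, -1]
```

A production matcher would use spherical separations and a spatial index; the loop above is O(n·m) and meant only to show the matching logic.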
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M
2018-01-01
Bangladesh is a cholera-endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of the virulence profile using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae-positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14-42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (P < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples.
Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
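The odds ratios with 95% confidence intervals reported above are standard 2x2-table statistics. A minimal sketch of the usual Woolf (log-OR) method, using hypothetical counts rather than the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
       exposed:   a positive, b negative
       unexposed: c positive, d negative"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(10, 5, 2, 8)   # OR = (10*8)/(5*2) = 8.0
```

An interval excluding 1 corresponds to the P < 0.05 significance statements in the abstract; cells containing zero would need a continuity correction, which this sketch omits.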
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. 
Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
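The point-source approximation replaces the electrode geometry with the monopole potential V = I/(4πσr) in a homogeneous isotropic medium, which is the quantity being compared against the finite element solution. A minimal sketch with illustrative (not study-specific) parameter values:

```python
import math

def point_source_potential(i_amp, sigma, r_m):
    """Extracellular potential (V) at distance r (m) from a point current
    source in a homogeneous isotropic medium: V = I / (4*pi*sigma*r)."""
    return i_amp / (4.0 * math.pi * sigma * r_m)

# 1 mA source, gray-matter-like conductivity 0.2 S/m, 1 mm away
v = point_source_potential(1e-3, 0.2, 1e-3)   # ≈ 0.398 V
```

This 1/r falloff is what the study couples to the model axons; near conductor-insulator interfaces, where the comparison showed the largest discrepancies, the real electrode's boundary conditions make the field deviate from this simple form.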
Multiband super-resolution imaging of graded-index photonic crystal flat lens
NASA Astrophysics Data System (ADS)
Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun
2018-05-01
Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. Calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices show that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, the imaging is based on the far-field condition of a spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging is mainly based on negative refraction from anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is of considerable value for the imaging field.
Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen
2016-06-01
We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.
Very Luminous X-ray Point Sources in Starburst Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.
Extranuclear X-ray point sources in external galaxies with luminosities above 10^39 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.
Scattering of focused ultrasonic beams by cavities in a solid half-space.
Rahni, Ehsan Kabiri; Hajzargarbashi, Talieh; Kundu, Tribikram
2012-08-01
The ultrasonic field generated by a point focused acoustic lens placed in a fluid medium adjacent to a solid half-space, containing one or more spherical cavities, is modeled. The semi-analytical distributed point source method (DPSM) is followed for the modeling. This technique properly takes into account the interaction effect between the cavities placed in the focused ultrasonic field, fluid-solid interface and the lens surface. The approximate analytical solution that is available in the literature for the single cavity geometry is very restrictive and cannot handle multiple cavity problems. Finite element solutions for such problems are also prohibitively time consuming at high frequencies. Solution of this problem is necessary to predict when two cavities placed in close proximity inside a solid can be distinguished by an acoustic lens placed outside the solid medium and when such distinction is not possible.
Ultra-wideband horn antenna with abrupt radiator
McEwan, T.E.
1998-05-19
An ultra-wideband horn antenna transmits and receives impulse waveforms for short-range radars and impulse time-of-flight systems. The antenna reduces or eliminates various sources of close-in radar clutter, including pulse dispersion and ringing, sidelobe clutter, and feedline coupling into the antenna. Dispersion is minimized with an abrupt launch point radiator element; sidelobe and feedline coupling are minimized by recessing the radiator into a metallic horn. The low-frequency cut-off associated with a horn is extended by configuring the radiator drive impedance to approach a short circuit at low frequencies. A tapered feed plate connects at one end to a feedline, and at the other end to a launcher plate which is mounted to an inside wall of the horn. The launcher plate and feed plate join at an abrupt edge which forms the single launch point of the antenna. 8 figs.
Polarization Effects Aboard the Space Interferometry Mission
NASA Technical Reports Server (NTRS)
Levin, Jason; Young, Martin; Dubovitsky, Serge; Dorsky, Leonard
2006-01-01
For precision displacement, laser metrology currently provides some of the most accurate measurements. Often, the measurement is located some distance away from the laser source, and as a result, stringent requirements are placed on the laser delivery system with respect to the state of polarization. Such is the case with the fiber distribution assembly (FDA) that is slated to fly aboard the Space Interferometry Mission (SIM) next decade. This system utilizes a concatenated array of couplers, polarizers and lengthy runs of polarization-maintaining (PM) fiber to distribute linearly-polarized light from a single laser to fourteen different optical metrology measurement points throughout the spacecraft. Optical power fluctuations at the point of measurement can be traced back to the polarization extinction ratio (PER) of the concatenated components, in conjunction with the rate of change in the phase difference of the light along the slow and fast axes of the PM fiber.
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
Single-channel blind source separation (SCBSS) is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades with inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to multi-dimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor data.
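The eigenvalue-based step of such methods can be sketched with a generic MDL criterion. This is not the paper's code; in practice the eigenvalues would come from the covariance matrix of the delay-embedded single-sensor data, and here synthetic eigenvalues are used for illustration:

```python
import numpy as np

def mdl_source_count(eigvals, n_snapshots):
    """Estimate the source number with the MDL criterion from the eigenvalues
    of a p x p data covariance matrix, given n_snapshots data samples."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
    p, N = len(lam), n_snapshots
    mdl = []
    for k in range(p):
        noise = lam[k:]                       # presumed noise eigenvalues
        m = p - k
        # ratio of geometric to arithmetic mean of the noise eigenvalues
        ratio = np.exp(np.mean(np.log(noise))) / np.mean(noise)
        mdl.append(-N * m * np.log(ratio) + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))

# Two dominant eigenvalues above a flat noise floor -> two "sources"
k_hat = mdl_source_count([10.0, 8.0, 1.0, 1.0, 1.0, 1.0], 1000)   # -> 2
```

With equal noise eigenvalues the likelihood term vanishes for k at or above the true count, and the penalty term then selects the smallest such k; the colored-noise failure mode discussed in the abstract arises precisely because real noise eigenvalues are then no longer equal.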
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
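The structure of such minimum-error decisions between non-commuting hypotheses can be illustrated with the Helstrom bound for two pure states. The sketch below shows only that bound (assuming normalized state vectors), not the paper's full field-operator calculation:

```python
import numpy as np

def helstrom_error(psi0, psi1, p0=0.5):
    """Minimum error probability for discriminating two pure states
    (assumed normalized) with prior probabilities p0 and 1 - p0:
    P_e = (1 - sqrt(1 - 4*p0*(1-p0)*|<psi0|psi1>|^2)) / 2."""
    psi0 = np.asarray(psi0, dtype=complex)
    psi1 = np.asarray(psi1, dtype=complex)
    overlap = abs(np.vdot(psi0, psi1)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p0 * (1.0 - p0) * overlap))

pe_orth = helstrom_error([1, 0], [0, 1])   # orthogonal states: error 0.0
pe_same = helstrom_error([1, 0], [1, 0])   # identical states: error 0.5
```

As the two hypotheses' states approach one another (small source separation, few received photons), the overlap grows and the error probability climbs toward the guessing limit of 1/2, which is the sense in which the error probability characterizes ultimate resolvability.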
Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research has shown that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants responded manually in the test phase. Experiment 3 showed that if participants had to respond verbally in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
Processing UAV and Lidar Point Clouds in GRASS GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
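One common decimation strategy of the kind compared above is grid (voxel) thinning, which keeps one point per cell of a regular grid. A minimal numpy sketch of the idea, not the GRASS GIS implementation:

```python
import numpy as np

def grid_decimate(points, cell):
    """Keep the first point falling in each cell of a regular 3D grid.
    points: (n, 3) array; cell: cell size in the same units as the points."""
    keys = np.floor(points / cell).astype(np.int64)       # integer cell index
    _, idx = np.unique(keys, axis=0, return_index=True)   # first point per cell
    return points[np.sort(idx)]

pts = np.array([[0.1, 0.1, 0.0],
                [0.2, 0.3, 0.1],   # same 1.0-unit cell as the first point
                [1.5, 0.2, 0.0],
                [0.1, 1.9, 0.0]])
thinned = grid_decimate(pts, 1.0)  # 3 points remain
```

Variants keep the centroid or the lowest/highest point per cell instead of the first, which matters when the thinned cloud feeds a digital surface model.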
NASA Technical Reports Server (NTRS)
Garland, J. L.; Mills, A. L.; Young, J. S.
2001-01-01
The relative effectiveness of average-well-color-development-normalized single-point absorbance readings (AWCD) was compared with that of the kinetic parameters mu(m), lambda, A, and integral (AREA) of the modified Gompertz equation fitted to the color development curves produced when a redox-sensitive dye is reduced by microbial respiration of 95 separate sole carbon sources in microplate wells. The comparison used a dilution series of rhizosphere samples from hydroponically grown wheat and potato spanning inoculum densities of 1 x 10(4)-4 x 10(6) cells ml-1. Patterns generated with each parameter were analyzed using principal component analysis (PCA) and discriminant function analysis (DFA) to test relative resolving power. Samples of equivalent cell density (undiluted samples) were correctly classified by rhizosphere type for all parameters based on DFA of the first five PC scores. Analysis of undiluted and 1:4 diluted samples resulted in misclassification of at least two of the wheat samples for all parameters except the AWCD-normalized (0.50 abs. units) data, and analysis of undiluted, 1:4, and 1:16 diluted samples resulted in misclassification for all parameter types. Ordination of samples along the first principal component (PC) was correlated with inoculum density in analyses performed on all of the kinetic parameters, but no such influence was seen for AWCD-derived results. The carbon sources responsible for classification differed among the variable types, with the exception of AREA and A, which were strongly correlated. These results indicate that the use of kinetic parameters for pattern analysis in CLPP may provide some additional information, but only if the influence of inoculum density is carefully considered.
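The kinetic parameters come from fitting a modified Gompertz curve to each well: A is the asymptote, mu(m) the maximum rate, lambda the lag, and AREA its integral. The abstract does not spell out the formula, so the standard Zwietering parameterization used below is an assumption.

```python
import math

def gompertz(t, A, mu_m, lam):
    """Modified Gompertz curve (Zwietering form): asymptote A,
    maximum slope mu_m, lag time lam."""
    return A * math.exp(-math.exp(mu_m * math.e / A * (lam - t) + 1.0))

def area(A, mu_m, lam, t_end=72.0, dt=0.5):
    """AREA parameter: trapezoidal integral of the fitted curve."""
    ts = [i * dt for i in range(int(t_end / dt) + 1)]
    ys = [gompertz(t, A, mu_m, lam) for t in ts]
    return sum((a + b) / 2.0 * dt for a, b in zip(ys, ys[1:]))

# The curve saturates at the asymptote A for t >> lam.
print(round(gompertz(1000.0, 1.5, 0.1, 8.0), 3))
```

The strong AREA-A correlation reported above is unsurprising in this form: once the curve saturates, the integral is dominated by the asymptote.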
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes and indicators have rarely been evaluated in this respect. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and outputs of a non-point source (NPS) P indicator constructed in R. We also examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) accentuated high-likelihood simulations better than the exponential function (L2) did. (3) A combined likelihood integrating criteria for multiple outputs performed better than a single likelihood in uncertainty assessment, reducing uncertainty band widths while ensuring goodness of fit across all model outputs. (4) A value of 0.55 appeared to be a reasonable threshold for balancing high modeling efficiency against high bracketing efficiency. Results of this study could provide (1) an option for conducting NPS modeling on a single computing platform, (2) reference parameter settings for NPS model development in similar regions, (3) suggestions for applying the GLUE method in studies with different emphases, and (4) insights into watershed P management in similar regions.
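GLUE in outline: draw parameter sets from priors, score each against observations with an informal likelihood (here the Nash efficiency, the abstract's L1 choice), and keep the "behavioral" sets above the acceptability threshold (0.55 in the study). A minimal sketch with a hypothetical one-parameter export model; the indicator's actual structure is not given in the abstract.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used as an informal GLUE likelihood."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(model, obs, priors, n=5000, threshold=0.55, seed=1):
    """Return (score, params) pairs for the behavioral parameter sets."""
    random.seed(seed)
    behavioral = []
    for _ in range(n):
        theta = {k: random.uniform(lo, hi) for k, (lo, hi) in priors.items()}
        score = nse(obs, model(theta))
        if score > threshold:
            behavioral.append((score, theta))
    return behavioral

# Hypothetical export model: load = a * area, with a "true" a of 2.
areas = [1.0, 2.0, 3.0, 4.0]
obs = [2.0, 4.0, 6.0, 8.0]
sets = glue(lambda th: [th["a"] * x for x in areas], obs, {"a": (0.0, 5.0)})
```

The retained sets are where the abstract's reduced parameter ranges come from, and likelihood-weighted quantiles of their simulations give the uncertainty bands.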
Effect of Chlorine Substitution on Sulfide Reactivity with OH Radicals
2008-09-01
• Single point energy: MP2/6-311+G(3df,2p) (LRG)
• Zero point energy from a vibrational frequency analysis: MP2/6-31++G** (ZPE)
• Extrapolated energy: E(QCI) + E(LARG) - E(SML) + ZPE
• Characterize the TS
• Use a three-point fit methodology: fit a harmonic potential to three CCSD single point
49 CFR 172.315 - Packages containing limited quantities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... applicable, for the entry as shown in the § 172.101 Table, and placed within a square-on-point border in... to the package as to be readily visible. The width of line forming the square-on-point must be at... square-on-points bearing a single ID number, or a single square-on-point large enough to include each...
This paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...
The NIRCam Optical Telescope Simulator (NOTES)
NASA Technical Reports Server (NTRS)
Kubalak, David; Hakun, Claef; Greeley, Bradford; Eichorn, William; Leviton, Douglas; Guishard, Corina; Gong, Qian; Warner, Thomas; Bugby, David; Robinson, Frederick;
2007-01-01
The Near Infra-Red Camera (NIRCam), the 0.6-5.0 micron imager and wavefront sensing instrument for the James Webb Space Telescope (JWST), will be used on orbit both as a science instrument and to tune the alignment of the telescope. The NIRCam Optical Telescope Element Simulator (NOTES) will be used during ground testing to provide an external stimulus to verify wavefront error, imaging characteristics, and wavefront sensing performance of this crucial instrument. NOTES is being designed and built by NASA Goddard Space Flight Center with the help of Swales Aerospace and Orbital Sciences Corporation. It is a single-point imaging system that uses an elliptical mirror to form an f/20 image of a point source. The point source will be fed via optical fibers from outside the vacuum chamber. A tip/tilt mirror is used to change the chief ray angle of the beam as it passes through the aperture stop and thus steer the image over NIRCam's field of view without moving the pupil or introducing field aberrations. Interchangeable aperture stop elements allow us to simulate perfect JWST wavefronts for wavefront error testing, or to introduce transmissive phase plates that simulate a misaligned JWST segmented mirror for wavefront sensing verification. NOTES will be maintained at an operating temperature of 80 K during testing using thermal switches, allowing it to operate within the same test chamber as the NIRCam instrument. We discuss NOTES' current design status and ongoing development activities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
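The quoted hardware throughput fixes the output bit rate for each compression setting, which can be checked directly:

```python
PIXELS_PER_SEC = 552_960   # 12-bit pixels processed per second
BITS_PER_PIXEL = 12

def output_bit_rate(compression_ratio):
    """Compressed output rate in bits per second."""
    return PIXELS_PER_SEC * BITS_PER_PIXEL / compression_ratio

for ratio in (10, 4):
    # 10:1 -> 663,552 bit/s; 4:1 -> 1,658,880 bit/s
    print(f"{ratio}:1 -> {output_bit_rate(ratio):,.0f} bit/s")
```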
Improved Single-Source Precursors for Solar-Cell Absorbers
NASA Technical Reports Server (NTRS)
Banger, Kulbinder K.; Harris, Jerry; Hepp, Aloysius
2007-01-01
Improved single-source precursor compounds have been invented for use in spray chemical vapor deposition (spray CVD) of chalcopyrite semiconductor absorber layers of thin-film solar cells. A "single-source precursor compound" is a single molecular compound that contains all the required elements and that, under spray CVD conditions, thermally decomposes to form CuIn(x)Ga(1-x)S(y)Se(2-y).
Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua
2013-02-01
Agricultural non-point source pollution is an important contributor to river deterioration, so identifying and preferentially controlling key source areas is among the most effective approaches to its control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of pollution-sensitive areas, key pollution sources, and their spatial distribution through clustering, sensitivity evaluation, and spatial analysis, carried out with a geographic information system (GIS) and SPSS. The results show that the COD, TN, and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 x 10(4) t, 66.49 x 10(4) t, and 8.74 x 10(4) t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t.hm-2; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The major sources of COD, TN, and TP were livestock breeding and rural life. The sensitive areas and priority pollution control areas for non-point source pollution are sub-basins of the upper branches of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu, and Qingyi Rivers, and livestock breeding is the key pollution source within these priority control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while comprehensive pollution is the type hardest to control.
Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.
Difilippo, Erica L; Brusseau, Mark L
2008-05-26
The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship: end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, for both of which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass-transfer and displacement).
Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass removal was inefficient (non-ideal mass-transfer and displacement). The mass-flux-reduction/mass-removal relationship for the latter site exhibited a multi-step behavior, which cannot be predicted using some of the available simple estimation functions.
NASA Astrophysics Data System (ADS)
Yang, Shi-Yu; Cao, Zhou; Da, Dao-An; Xue, Yu-Xiong
2009-05-01
The experimental results of single event burnout induced by heavy ions and 252Cf fission fragments in power MOSFET devices have been investigated. It is concluded that the characteristics of single event burnout induced by 252Cf fission fragments are consistent with those induced by heavy ions. The power MOSFET in the "turn-off" state is more susceptible to single event burnout than in the "turn-on" state. The thresholds of the drain-source voltage for single event burnout induced by 173 MeV bromine ions and by 252Cf fission fragments are close to each other, and the burnout cross section is sensitive to variation of the drain-source voltage above the single event burnout threshold. In addition, the current waveforms of single event burnouts induced by the different sources are similar. Different power MOSFET devices may have different probabilities of single event burnout.
Status Of The Swift Burst Alert Telescope Hard X-ray Transient Monitor
NASA Astrophysics Data System (ADS)
Krimm, Hans A.; Barthelmy, S. D.; Baumgartner, W. H.; Cummings, J.; Fenimore, E.; Gehrels, N.; Markwardt, C. B.; Palmer, D.; Sakamoto, T.; Skinner, G. K.; Stamatikos, M.; Tueller, J.
2010-01-01
The Swift Burst Alert Telescope hard X-ray transient monitor has been operating since October 1, 2006. More than 700 sources are tracked on a daily basis and light curves are produced and made available to the public on two time scales: a single Swift pointing (approximately 20 minutes) and the weighted average for each day. Of the monitored sources, approximately 33 are detected daily and another 100 have had one or more outbursts during the Swift mission. The monitor is also sensitive to the detection of previously undiscovered sources and we have reported the discovery of four galactic sources and one source in the Large Magellanic Cloud. Follow-up target-of-opportunity observations with Swift and the Rossi X-ray Timing Explorer have revealed that three of these new sources are pulsars and two are black hole candidates. In addition, the monitor has led to the announcement of significant outbursts from 24 different galactic and extra-galactic sources, many of which have had follow-up Swift XRT, UVOT and ground-based multi-wavelength observations. The transient monitor web pages currently receive an average of 21 visits per day. We will report on the most important results from the transient monitor and also on detection and exposure statistics, and outline recent and planned improvements to the monitor. The transient monitor web page is http://swift.gsfc.nasa.gov/docs/swift/results/transients/.
Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui
2013-11-01
To provide a scientific basis for environmental planning for non-point source pollution prevention and control, and to improve the efficiency of pollution regulation, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, following the "source-sink" theory. The spatial distribution of non-point source pollution around the Jiulongjiang Estuary was derived from high resolution remote sensing images. The results showed that in 2008 the "source" area for nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km(2) and the "sink" area was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen Island, most of Haicang, eastern Jiaomei, and the river banks of Gangwei and Shima; the "sink" was distributed over southwestern Xiamen Island and western Shima. Generally, "source" intensity weakens with increasing distance from the sea boundary, while "sink" intensity strengthens.
Magnitude and Origin of Electrical Noise at Individual Grain Boundaries in Graphene.
Kochat, Vidya; Tiwary, Chandra Sekhar; Biswas, Tathagata; Ramalingam, Gopalakrishnan; Hsieh, Kimberly; Chattopadhyay, Kamanio; Raghavan, Srinivasan; Jain, Manish; Ghosh, Arindam
2016-01-13
Grain boundaries (GBs) are undesirable in large-area layered 2D materials as they degrade device quality and electronic performance. Here we show that grain boundaries in graphene, which induce additional scattering of carriers in the conduction channel, also act as an additional and strong source of electrical noise, especially at room temperature. From graphene field-effect transistors containing a single GB, we find that the electrical noise across a graphene GB can be nearly 10 000 times larger than the noise from equivalent dimensions of single-crystalline graphene. At high carrier densities (n), the noise magnitude across the GBs decreases as ∝1/n, suggesting Hooge-type mobility fluctuations, whereas at low n, close to the Dirac point, the noise magnitude can be quantitatively described by fluctuations in the number of propagating modes across the GB.
Mechanical stability of a microscope setup working at a few kelvins for single-molecule localization
NASA Astrophysics Data System (ADS)
Hinohara, Takuya; Hamada, Yuki I.; Nakamura, Ippei; Matsushita, Michio; Fujiyoshi, Satoru
2013-06-01
A great advantage of single-molecule fluorescence imaging is the ability to localize a molecule beyond the diffraction limit. Although longer signal acquisition yields higher precision, acquisition time at room temperature is normally limited by photobleaching, thermal diffusion, and similar effects. At low temperatures of a few kelvins, much longer acquisition is possible and will improve precision if the sample and the objective are held sufficiently stably. The present work examined the holding stability of the sample and objective at 1.5 K in a superfluid helium bath. The stability was evaluated via the localization precision of a point scattering source, a polymer bead. Scattered light was collected by the objective and imaged by a home-built rigid imaging unit. The standard deviation of the centroid position determined for 800 images taken continuously over 17 min was 0.5 nm in the horizontal and 0.9 nm in the vertical direction.
Femtosecond X-ray coherent diffraction of aligned amyloid fibrils on low background graphene.
Seuring, Carolin; Ayyer, Kartik; Filippaki, Eleftheria; Barthelmess, Miriam; Longchamp, Jean-Nicolas; Ringler, Philippe; Pardini, Tommaso; Wojtas, David H; Coleman, Matthew A; Dörner, Katerina; Fuglerud, Silje; Hammarin, Greger; Habenstein, Birgit; Langkilde, Annette E; Loquet, Antoine; Meents, Alke; Riek, Roland; Stahlberg, Henning; Boutet, Sébastien; Hunter, Mark S; Koglin, Jason; Liang, Mengning; Ginn, Helen M; Millane, Rick P; Frank, Matthias; Barty, Anton; Chapman, Henry N
2018-05-09
Here we present a new approach to diffraction imaging of amyloid fibrils, combining a free-standing graphene support and single nanofocused X-ray pulses of femtosecond duration from an X-ray free-electron laser. Due to the very low background scattering from the graphene support and mutual alignment of filaments, diffraction from tobacco mosaic virus (TMV) filaments and amyloid protofibrils is obtained to 2.7 Å and 2.4 Å resolution in single diffraction patterns, respectively. Some TMV diffraction patterns exhibit asymmetry that indicates the presence of a limited number of axial rotations in the XFEL focus. Signal-to-noise levels from individual diffraction patterns are enhanced using computational alignment and merging, giving patterns that are superior to those obtainable from synchrotron radiation sources. We anticipate that our approach will be a starting point for further investigations into unsolved structures of filaments and other weakly scattering objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta
Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the far-IR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC.
There are far fewer far-IR bright YSOs in the SMC than in the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high ("probable") and moderate ("possible") likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate FIR photometry and are unclassified.
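Band-merging the PACS, SPIRE, and Spitzer catalogs is at heart a positional cross-match. A naive nearest-neighbour sketch follows (quadratic in catalog size; the HERITAGE pipeline's actual matching radius and algorithm are not stated here, so both are assumptions).

```python
import math

def cross_match(cat_a, cat_b, radius_arcsec=1.0):
    """Match each (ra, dec) in cat_a (degrees) to its nearest
    neighbour in cat_b within radius_arcsec; returns index pairs."""
    pairs = []
    limit = radius_arcsec / 3600.0
    for i, (ra1, dec1) in enumerate(cat_a):
        best, best_d = None, limit
        for j, (ra2, dec2) in enumerate(cat_b):
            dra = (ra1 - ra2) * math.cos(math.radians(dec1))
            d = math.hypot(dra, dec1 - dec2)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
    return pairs

# One LMC-like source; the first candidate is ~0.12 arcsec away.
print(cross_match([(80.0, -69.0)], [(80.0001, -69.0), (85.0, -69.0)]))  # [(0, 0)]
```

For catalogs of 10^4+ sources a real pipeline would use a spatial index (k-d tree or sky pixelization) rather than this double loop.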
Entangling quantum-logic gate operated with an ultrabright semiconductor single-photon source.
Gazzano, O; Almeida, M P; Nowak, A K; Portalupi, S L; Lemaître, A; Sagnes, I; White, A G; Senellart, P
2013-06-21
We demonstrate the unambiguous entangling operation of a photonic quantum-logic gate driven by an ultrabright solid-state single-photon source. Indistinguishable single photons emitted by a single semiconductor quantum dot in a micropillar optical cavity are used as target and control qubits. For a source brightness of 0.56 photons per pulse, the measured truth table has an overlap with the ideal case of 68.4±0.5%, increasing to 73.0±1.6% for a source brightness of 0.17 photons per pulse. The gate is entangling: At a source brightness of 0.48, the Bell-state fidelity is above the entangling threshold of 50% and reaches 71.0±3.6% for a source brightness of 0.15.
NASA Astrophysics Data System (ADS)
Hayward, Christopher C.; Chapman, Scott C.; Steidel, Charles C.; Golob, Anneya; Casey, Caitlin M.; Smith, Daniel J. B.; Zitrin, Adi; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Coppin, Kristen E. K.; Farrah, Duncan; Ibar, Eduardo; Michałowski, Michał J.; Sawicki, Marcin; Scott, Douglas; van der Werf, Paul; Fazio, Giovanni G.; Geach, James E.; Gurwell, Mark; Petitpas, Glen; Wilner, David J.
2018-05-01
Interferometric observations have demonstrated that a significant fraction of single-dish submillimetre (submm) sources are blends of multiple submm galaxies (SMGs), but the nature of this multiplicity, i.e. whether the galaxies are physically associated or chance projections, has not been determined. We performed spectroscopy of 11 SMGs in six multicomponent submm sources, obtaining spectroscopic redshifts for nine of them. For two additional component SMGs, we detected continuum emission but no obvious features. We supplement our observed sources with four single-dish submm sources from the literature. This sample allows us to statistically constrain the physical nature of single-dish submm source multiplicity for the first time. In three of the single-dish sources for which the nature of the blending is unambiguous (3/7, or 43^{+39}_{-33} per cent at 95 per cent confidence), the components for which spectroscopic redshifts are available are physically associated, whereas 4/7 (57^{+33}_{-39} per cent) have at least one unassociated component. When components whose spectra exhibit continuum but no features, and for which the photometric redshift is significantly different from the spectroscopic redshift of the other component, are also considered, 6/9 (67^{+26}_{-37} per cent) of the single-dish sources comprise at least one unassociated component SMG. The nature of the multiplicity of one single-dish source is ambiguous. We conclude that physically associated systems and chance projections both contribute to the multicomponent single-dish submm source population. This result contradicts the conventional wisdom that bright submm sources are solely a result of merger-induced starbursts, as blending of unassociated galaxies is also important.
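Asymmetric uncertainties on small fractions like 3/7 are typical of exact (Clopper-Pearson) binomial intervals, whose limits are Beta quantiles. The abstract does not name its interval method, so this reconstruction is an assumption; it is sketched here with stdlib Monte Carlo sampling to avoid a SciPy dependency.

```python
import random

def clopper_pearson(successes, trials, conf=0.95, n=200_000, seed=0):
    """Clopper-Pearson binomial interval via Monte Carlo Beta quantiles.
    (The exact form uses the Beta inverse CDF; requires 0 < successes < trials.)"""
    random.seed(seed)
    alpha = 1.0 - conf
    lo_draws = sorted(random.betavariate(successes, trials - successes + 1)
                      for _ in range(n))
    hi_draws = sorted(random.betavariate(successes + 1, trials - successes)
                      for _ in range(n))
    return lo_draws[int(alpha / 2 * n)], hi_draws[int((1 - alpha / 2) * n)]

lo, hi = clopper_pearson(3, 7)
# 3/7 ~= 43 per cent; the interval is roughly (10, 82) per cent,
# consistent with the quoted 43 (+39, -33) per cent.
```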
Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J
2014-09-01
The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
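Part of the NLR's appeal as a bedside index is that it is trivially computed from a differential blood count. A sketch using the study's 16.7 cutoff; the patient counts below are illustrative, not study data.

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-lymphocyte ratio from absolute counts (same units, e.g. 10^9/L)."""
    return neutrophils / lymphocytes

# Cutoff within the first 96 h independently associated with
# hepatic encephalopathy in the single time-point cohort.
HE_CUTOFF = 16.7

patient = {"neutrophils": 18.4, "lymphocytes": 1.0}
ratio = nlr(**patient)
print(ratio, ratio > HE_CUTOFF)  # 18.4 True
```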
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera-endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from the "point of drinking" and the "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae-positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of sources were positive for V. cholerae in at least one visit. Samples collected from a point of drinking and its linked source within a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water than in source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (P < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples.
Contamination of water at the point of drinking is thus less likely to depend on contamination at the water source. Hygiene education interventions and programs should therefore focus on water at the point of drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
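The odds ratios quoted above come from 2x2 contingency tables of positive/negative samples. A generic Wald computation is sketched below; the cell counts in the example are made up for illustration and are not the study's data.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI for a 2x2 table:
    a, b = outcome present/absent in group 1; c, d = the same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# Hypothetical counts: 10/100 positive in one group vs 1/100 in the other.
print(odds_ratio(10, 90, 1, 99))
```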
Point-source and diffuse high-energy neutrino emission from Type IIn supernovae
NASA Astrophysics Data System (ADS)
Petropoulou, M.; Coenders, S.; Vasilopoulos, G.; Kamble, A.; Sironi, L.
2017-09-01
Type IIn supernovae (SNe), a rare subclass of core collapse SNe, explode in dense circumstellar media that have been modified by the SNe progenitors at their last evolutionary stages. The interaction of the freely expanding SN ejecta with the circumstellar medium gives rise to a shock wave propagating in the dense SN environment, which may accelerate protons to multi-PeV energies. Inelastic proton-proton collisions between the shock-accelerated protons and those of the circumstellar medium lead to multimessenger signatures. Here, we evaluate the possible neutrino signal of Type IIn SNe and compare it with IceCube observations. We employ a Monte Carlo method for the calculation of the diffuse neutrino emission from the SN IIn class to account for the spread in their properties. The cumulative neutrino emission is found to be ˜10 per cent of the observed IceCube neutrino flux above 60 TeV. Type IIn SNe would be the dominant component of the diffuse astrophysical flux only if 4 per cent of all core collapse SNe were of this type and 20-30 per cent of the shock energy were channeled into accelerated protons. Lower values of the acceleration efficiency are accessible through the observation of a single Type IIn SN as a neutrino point source with IceCube using up-going muon neutrinos. Such an identification is possible in the first year following the SN shock breakout for sources within 20 Mpc.
[Laser Raman spectral investigations of the carbon structure of LiFePO4/C cathode material].
Yang, Chao; Li, Yong-Mei; Zhao, Quan-Feng; Gan, Xiang-Kun; Yao, Yao-Chun
2013-10-01
In the present paper, laser Raman spectroscopy was used to study the carbon structure of LiFePO4/C cathode material. The samples were also characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), selected-area electron diffraction (SAED), and resistivity testing. The results indicated that, compared with the sp2/sp3 peak area ratios, the I(D)/I(G) ratios were not only more uniform but also exhibited similar trends. However, both the I(D)/I(G) ratios and the sp2/sp3 peak area ratios differed among measurement points within the same sample. Compared with samples prepared using citric acid or sucrose alone as the carbon source, the sample synthesized with a mixed carbon source (citric acid and sucrose) exhibited higher I(D)/I(G) ratios and sp2/sp3 peak area ratios, and the point-to-point variation in both ratios within a sample was smaller than in the single-carbon-source samples. SEM and transmission electron microscopy (TEM) images showed uneven carbon coating on both the primary and secondary particles, which may be the main reason for the nonuniform data within a single sample. This pronounced scatter will affect the routine use of Raman spectroscopy in such tests.
Power-Law Template for IR Point Source Clustering
NASA Technical Reports Server (NTRS)
Addison, Graeme E.; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark; Halpern, Mark; Hincks, Adam; Hlozek, Renee;
2011-01-01
We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545 and 857 GHz, over angular scales 100 < ℓ < 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350 and 500 microns; 1000 < ℓ < 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fit by a simple power law of the form C_ℓ ∝ ℓ^(-n) with n = 1.25 +/- 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 +/- 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 +/- 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in Cosmic Microwave Background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.
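The quoted frequency dependence, a modified blackbody ν^β B(ν, T_eff), can be evaluated directly to recover the effective dust spectral index. A minimal sketch using the best-fit values above (β = 2.20, T_eff = 9.7 K) as inputs:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck(nu, T):
    """Planck function B(nu, T)."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (K_B * T))

def dust_sed(nu, beta=2.20, T_eff=9.7):
    """Modified blackbody nu^beta * B(nu, T_eff) with the fit values above."""
    return nu**beta * planck(nu, T_eff)

# Effective spectral index between 150 and 220 GHz
nu1, nu2 = 150e9, 220e9
alpha = math.log(dust_sed(nu2) / dust_sed(nu1)) / math.log(nu2 / nu1)
print(round(alpha, 2))  # close to the quoted alpha_150-220 = 3.68
```

Since the clustering power scales as the square of this SED, the index of the power spectrum ratio is simply 2α; the single-SED index is what is compared with the quoted α_150-220.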
NASA Astrophysics Data System (ADS)
Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.
2008-05-01
We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to K_s luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10^-8 M_⊙^-1 with σ_η = 1.2 × 10^-8 M_⊙^-1. Performing a χ² test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.
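The quantity η (X-ray sources per unit cluster mass) reduces to a ratio of a source count over a mass sum. A toy sketch with invented cluster masses and source counts, not the paper's data:

```python
# Hypothetical clusters: (mass in solar masses, X-ray point sources hosted)
clusters = [(5e5, 1), (2e6, 3), (8e5, 1), (3e6, 2), (1e6, 0)]

total_mass = sum(m for m, _ in clusters)       # [M_sun]
total_sources = sum(n for _, n in clusters)

# eta: X-ray sources per unit cluster mass
eta = total_sources / total_mass
print(f"eta = {eta:.2e} per solar mass")
```

Binning the clusters by mass before taking this ratio, as the paper does, is what allows testing whether η has a slope with cluster mass.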
Social care and support for elderly men and women in an urban and a rural area of Nepal.
Kshetri, Dan Bahadur Baidwar; Smith, Cairns S; Khadka, Mira
2012-09-01
This study aimed to describe care and support for urban and rural elderly people in Bhaktapur district, Nepal. Efforts were made to identify features of general well-being associated with mental health, the person responsible for care and support, the capability to perform daily routine activities, sources of finance, and ownership of property. More than half of the respondents were found to have one or more features of loneliness, anxiety, depression, and insomnia. The point prevalence of loneliness was higher among urban respondents above 80 years of age. Almost 9 in 10 respondents were able to dress, walk, and maintain personal hygiene by themselves, and most of them were assisted by a spouse, son, or daughter-in-law. Family support was the most common source of income, and ownership of property was very high.
Akselrod, Gleb M.; Weidman, Mark C.; Li, Ying; ...
2016-09-13
Infrared (IR) light sources with high modulation rates are critical components for on-chip optical communications. Lead-based colloidal quantum dots are promising nonepitaxial materials for use in IR light-emitting diodes, but their slow photoluminescence lifetime is a serious limitation. Here we demonstrate coupling of PbS quantum dots to colloidal plasmonic nanoantennas based on film-coupled metal nanocubes, resulting in a dramatic 1300-fold reduction in the emission lifetime from the microsecond to the nanosecond regime. This lifetime reduction is primarily due to a 1100-fold increase in the radiative decay rate owing to the high quantum yield (65%) of the antenna. The short emission lifetime is accompanied by high antenna quantum efficiency and directionality. Lastly, this nonepitaxial platform points toward GHz-frequency, electrically modulated, telecommunication-wavelength light-emitting diodes and single-photon sources.
Computing Fault Displacements from Surface Deformations
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy
2006-01-01
Simplex is a computer program that calculates the locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation inverts a forward model (given a point source representing a fault, the forward model calculates the surface displacements and strains caused by a fault located in an isotropic, elastic half-space). The inversion uses nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning, and can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults, and estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.
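The inversion described above can be sketched in miniature: define a forward model that predicts surface deformation from a point source, then fit its parameters with a derivative-free simplex (Nelder-Mead) search. The Mogi point-source formula and all parameter values below are illustrative stand-ins, not Simplex's actual forward model:

```python
import numpy as np
from scipy.optimize import minimize

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source at radial
    distance r: uz = (1 - nu) * dV / pi * depth / (depth^2 + r^2)^1.5."""
    return (1 - nu) * dV / np.pi * depth / (depth**2 + r**2) ** 1.5

# Synthetic "observed" deformation: source at 2 km depth, dV = 1e6 m^3
r = np.linspace(100.0, 10000.0, 50)          # observation radii [m]
rng = np.random.default_rng(0)
obs = mogi_uz(r, 2000.0, 1e6) + rng.normal(0, 1e-4, r.size)

def misfit(params):
    depth, dV = params
    return np.sum((mogi_uz(r, depth, dV) - obs) ** 2)

# Derivative-free Nelder-Mead simplex search from a rough starting guess
result = minimize(misfit, x0=[1000.0, 5e5], method="Nelder-Mead")
depth_est, dV_est = result.x
print(f"depth ~ {depth_est:.0f} m, dV ~ {dV_est:.2e} m^3")
```

A full geodetic inversion would add multiple data types, parameter constraints, and covariance reporting, but the structure — forward model inside a nonlinear minimizer — is the same.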
Methods and apparatus for broadband frequency comb stabilization
Cox, Jonathan A; Kaertner, Franz X
2015-03-17
Feedback loops can be used to shift and stabilize the carrier-envelope phase of a frequency comb from a mode-locked fiber laser or other optical source. Compared to other frequency shifting and stabilization techniques, feedback-based techniques provide a wideband closed-loop servo bandwidth without optical filtering, beam pointing errors, or group velocity dispersion. They also enable phase locking to a stable reference, such as a Ti:Sapphire laser, a continuous-wave microwave or optical source, or a self-referencing interferometer, e.g., to within 200 mrad rms from DC to 5 MHz. In addition, stabilized frequency combs can be coherently combined with other stable signals, including other stabilized frequency combs, to synthesize optical pulse trains with pulse durations as short as a single optical cycle. Such a coherent combination can be achieved via orthogonal control, using balanced optical cross-correlation for timing stabilization and balanced homodyne detection for phase stabilization.
Predictions for Swift Follow-up Observations of Advanced LIGO/Virgo Gravitational Wave Sources
NASA Astrophysics Data System (ADS)
Racusin, Judith; Evans, Phil; Connaughton, Valerie
2015-04-01
The likely detection of gravitational waves associated with the inspiral of neutron star binaries by the upcoming advanced LIGO/Virgo observatories will be complemented by searches for electromagnetic counterparts over large areas of the sky by Swift and other observatories. As short gamma-ray bursts (GRBs) are the most likely electromagnetic counterpart candidates for these sources, we can make predictions based upon the last decade of GRB observations by Swift and Fermi. Swift is uniquely capable of rapidly and accurately localizing new transients over large areas of the sky in single and tiled pointings, enabling ground-based follow-up. We describe simulations of the detectability of short GRB afterglows by Swift given existing and hypothetical tiling schemes with realistic observing conditions and delays, which guide the optimal observing strategy and the improvements provided by coincident detection with observatories such as Fermi-GBM.
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. 
For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
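The model comparison above rests on a one-line formula, AIC = 2k − 2 ln L, with a difference of more than about 10 points conventionally read as strong evidence for the lower-AIC model. A minimal sketch with illustrative log-likelihoods and parameter counts (not the study's values):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L. Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidates: (name, maximized log-likelihood, parameter count)
models = [
    ("stepwise final", -1250.3, 12),
    ("hybrid GA best", -1235.1, 14),
]
scores = {name: aic(ll, k) for name, ll, k in models}
best = min(scores, key=scores.get)
delta = abs(scores["stepwise final"] - scores["hybrid GA best"])
print(best, round(delta, 1))
```

Note that AIC penalizes extra parameters (the 2k term), so the richer genetic-algorithm candidate only wins when its likelihood gain outweighs its added complexity.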
NuSTAR and Chandra Insight into the Nature of the 3-40 keV Nuclear Emission in NGC 253
NASA Technical Reports Server (NTRS)
Lehmer, Bret D.; Wik, Daniel R.; Hornschemeier, Ann E.; Ptak, Andrew; Antoniu, V.; Argo, M.K.; Bechtol, K.; Boggs, S.; Christensen, F.E.; Craig, W.W.;
2013-01-01
We present results from three nearly simultaneous Nuclear Spectroscopic Telescope Array (NuSTAR) and Chandra monitoring observations between 2012 September 2 and 2012 November 16 of the local star-forming galaxy NGC 253. The 3-40 keV intensity of the inner approximately 20 arcsec (approximately 400 pc) nuclear region, as measured by NuSTAR, varied by a factor of approximately 2 across the three monitoring observations. The Chandra data reveal that the nuclear region contains three bright X-ray sources, including a luminous (L_2-10 keV of approximately a few × 10^39 erg/s) point source located approximately 1 arcsec from the dynamical center of the galaxy (within the 3σ positional uncertainty of the dynamical center); this source drives the overall variability of the nuclear region at energies greater than or approximately equal to 3 keV. We make use of the variability to measure the spectra of this single hard X-ray source when it was in bright states. The spectra are well described by an absorbed (N_H approximately equal to 1.6 × 10^23 cm^-2) broken power-law model with spectral slopes and break energies that are typical of ultraluminous X-ray sources (ULXs), but not active galactic nuclei (AGNs). A previous Chandra observation in 2003 showed a hard X-ray point source of similar luminosity to the 2012 source that was also near the dynamical center (offset approximately 0.4 arcsec); however, this source was offset from the 2012 source position by approximately 1 arcsec. We show that the probability of the 2003 and 2012 hard X-ray sources being unrelated is much greater than 99.99% based on the Chandra spatial localizations. Interestingly, the Chandra spectrum of the 2003 source (3-8 keV) is shallower in slope than that of the 2012 hard X-ray source. 
Its proximity to the dynamical center and harder Chandra spectrum indicate that the 2003 source is a better AGN candidate than any of the sources detected in our 2012 campaign; however, we were unable to rule out a ULX nature for this source. Future NuSTAR and Chandra monitoring would be well equipped to break the degeneracy between the AGN and ULX nature of the 2003 source, if again caught in a high state.
Multi-beam and single-chip LIDAR with discrete beam steering by digital micromirror device
NASA Astrophysics Data System (ADS)
Rodriguez, Joshua; Smith, Braden; Hellman, Brandon; Gin, Adley; Espinoza, Alonzo; Takashima, Yuzuru
2018-02-01
A novel Digital Micromirror Device (DMD) based beam-steering approach enables a single-chip Light Detection and Ranging (LIDAR) system with discrete scanning points. We present an increase in the number of scanning points obtained by using multiple laser diodes in a multi-beam, single-chip DMD-based LIDAR.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Motivated by the problem of separating the fetal heart sound from the mixed signal obtained with an electronic stethoscope, this paper proposes a single-channel blind source separation algorithm based on repeated ICA processing. First, empirical mode decomposition (EMD) decomposes the single-channel mixture into multiple orthogonal signal components, which are then processed by ICA; the resulting independent signal components are called independent subcomponents of the mixture. Next, by combining the independent subcomponents with the single-channel mixture, the single channel is expanded into a multichannel signal, turning the under-determined blind source separation problem into a well-posed one. A further ICA step then yields an estimate of the source signal. Finally, if the separation is not satisfactory, the previous separation result is combined with the single-channel mixture and the ICA processing is repeated until the desired estimate of the source signal is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
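The pipeline — expand the single channel into multiple components, then run ICA on the resulting multichannel signal — can be sketched as follows. As a stand-in for EMD, this sketch splits the channel with a crude FFT filter bank, and the ICA stage is a minimal FastICA-style iteration; the signals, band edges, and iteration count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
# Two hypothetical sources: a slow tone and a faster square wave
s1 = np.sin(2 * np.pi * 3 * t)
s2 = np.sign(np.sin(2 * np.pi * 40 * t))
mixed = 0.7 * s1 + 0.3 * s2                # single-channel mixture

# Step 1 (stand-in for EMD): expand the single channel into band-limited
# components, turning the under-determined problem into a well-posed one.
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(mixed.size, d=t[1] - t[0])
channels = []
for lo, hi in [(0, 10), (10, 100)]:        # illustrative band edges [Hz]
    band = np.where((freqs >= lo) & (freqs < hi), spec, 0)
    channels.append(np.fft.irfft(band, n=mixed.size))
X = np.array(channels)                     # shape (n_channels, n_samples)

# Step 2: whiten, then a minimal FastICA iteration (tanh contrast)
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = np.diag(1.0 / np.sqrt(d + 1e-12)) @ E.T @ X
W = rng.standard_normal((2, 2))
for _ in range(200):
    g = np.tanh(W @ Xw)
    W_new = g @ Xw.T / Xw.shape[1] - np.diag((1 - g**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)        # symmetric decorrelation
    W = u @ vt
est = W @ Xw                               # estimated source signals
print(est.shape)
```

A real EMD would produce data-adaptive intrinsic mode functions rather than fixed frequency bands, and the paper's iterative refinement loop would repeat these two steps until the estimate is acceptable.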
X-ray Spectropolarimetry of Z-pinch Plasmas with a Single-Crystal Technique
NASA Astrophysics Data System (ADS)
Wallace, Matt; Haque, Showera; Neill, Paul; Pereira, Nino; Presura, Radu
2017-10-01
When directed beams of energetic electrons exist in a plasma the resulting x-rays emitted by the plasma can be partially polarized. This makes plasma x-ray polarization spectroscopy, spectropolarimetry, useful for revealing information about the anisotropy of the electron velocity distribution. X-ray spectropolarimetry has indeed been used for this in both space and laboratory plasmas. X-ray polarization measurements are typically performed employing two crystals, both at a 45° Bragg angle. A single-crystal spectropolarimeter can replace two crystal schemes by utilizing two matching sets of internal planes for polarization-splitting. The polarization-splitting planes diffract the incident x-rays into two directions that are perpendicular to each other and the incident beam as well, so the two sets of diffracted x-rays are linearly polarized perpendicularly to each other. An X-cut quartz crystal with surface along the [11-20] planes and a paired set of [10-10] planes in polarization-splitting orientation is now being used on aluminum z-pinches at the University of Nevada, Reno. Past x-ray polarization measurements have been reserved for point-like sources. Recently a slotted collimating aperture has been used to maintain the required geometry for polarization-splitting enabling the spectropolarimetry of extended sources. The design of a single-crystal x-ray spectropolarimeter and experimental results will be presented. Work was supported by U.S. DOE, NNSA Grant DE-NA0001834 and cooperative agreement DE-FC52-06NA27616.
McDuff, Susan G. R.; Frankel, Hillary C.; Norman, Kenneth A.
2009-01-01
We used multi-voxel pattern analysis (MVPA) of fMRI data to gain insight into how subjects’ retrieval agendas influence source memory judgments (was item X studied using source Y?). In Experiment 1, we used a single-agenda test where subjects judged whether items were studied with the targeted source or not. In Experiment 2, we used a multi-agenda test where subjects judged whether items were studied using the targeted source, studied using a different source, or nonstudied. To evaluate the differences between single- and multi-agenda source monitoring, we trained a classifier to detect source-specific fMRI activity at study, and then we applied the classifier to data from the test phase. We focused on trials where the targeted source and the actual source differed, so we could use MVPA to track neural activity associated with both the targeted source and the actual source. Our results indicate that single-agenda monitoring was associated with increased focus on the targeted source (as evidenced by increased targeted-source activity, relative to baseline) and reduced use of information relating to the actual, non-target source. In the multi-agenda experiment, high levels of actual-source activity were associated with increased correct rejections, suggesting that subjects were using recollection of actual-source information to avoid source memory errors. In the single-agenda experiment, there were comparable levels of actual-source activity (suggesting that recollection was taking place), but the relationship between actual-source activity and behavior was absent (suggesting that subjects were failing to make proper use of this information). PMID:19144851
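The cross-phase classification approach — train a classifier on study-phase patterns labeled by source, then apply it to test-phase patterns — can be sketched with a minimal logistic-regression "decoder" on synthetic voxel patterns. All data here are simulated stand-ins for fMRI activity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Simulated study-phase patterns: each source has a distinct mean
# voxel pattern, observed with additive noise
mu_a, mu_b = rng.normal(0, 1, n_voxels), rng.normal(0, 1, n_voxels)
X_study = np.vstack([mu_a + rng.normal(0, 1, (40, n_voxels)),
                     mu_b + rng.normal(0, 1, (40, n_voxels))])
y_study = np.array([0] * 40 + [1] * 40)

# Train a logistic-regression classifier by gradient descent
w, b = np.zeros(n_voxels), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_study @ w + b)))
    w -= 0.1 * (X_study.T @ (p - y_study) / len(y_study))
    b -= 0.1 * np.mean(p - y_study)

# Apply the study-phase classifier to simulated test-phase patterns
X_test = np.vstack([mu_a + rng.normal(0, 1, (10, n_voxels)),
                    mu_b + rng.normal(0, 1, (10, n_voxels))])
y_test = np.array([0] * 10 + [1] * 10)
pred = (1 / (1 + np.exp(-(X_test @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y_test)
```

In the study, the classifier's graded output on mismatch trials (targeted source vs. actual source) is the quantity of interest, not just the hard classification.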
DiMasi, Joseph A; Smith, Zachary; Getz, Kenneth A
2018-05-10
The extent to which new drug developers can benefit financially from shorter development times has implications for development efficiency and innovation incentives. We provided a real-world example of such gains by using recent estimates of drug development costs and returns. Time and fee data were obtained on 5 single-source manufacturing projects. Time and fees were modeled for these projects as if the drug substance and drug product processes had been contracted separately from 2 vendors. The multi-vendor model was taken as the base case, and financial impacts from single-source contracting were determined relative to the base case. The mean and median after-tax financial benefits of shorter development times from single-source contracting were $44.7 million and $34.9 million, respectively (2016 dollars). The after-tax increases in sponsor fees from single-source contracting were small in comparison (mean and median of $0.65 million and $0.25 million). For the data we examined, single-source contracting yielded substantial financial benefits over multi-source contracting, even after accounting for somewhat higher sponsor fees. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for an additional point source at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of an MSP bulge population.
NASA Astrophysics Data System (ADS)
Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei
2013-09-01
China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption, and a significant portion of these fertilizers and pesticides entered the water and degraded water quality. At the same time, rapid economic growth also discharged more and more point source pollution into the water, and eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load delivered to rivers from industrial pollution is 115.0 t across the entire watershed. Average annual TP loads from individual sub-basins ranged from 0 to 189.4 t. Higher TP loads from livestock and human activity mainly occur in areas far from large towns or cities, where TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 t; the highest loads occurred in the northern part of the area, and the lowest mainly in the middle part. Point source pollution therefore accounts for a high proportion of the total in this area, and governments should take measures to control it.
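The export coefficient calculation used above estimates a load as L = Σ_i E_i × A_i, summing each source type's export coefficient times its extent. A minimal sketch with invented coefficients and source sizes, not the study's values:

```python
# Hypothetical TP export coefficients and source sizes (illustrative only):
# each entry is (coefficient in kg TP per unit per year, number of units)
sources = {
    "cropland_ha":    (0.8,  12000),
    "urban_ha":       (1.5,   3000),
    "livestock_head": (0.05, 50000),
    "population":     (0.02, 200000),
}

# L = sum over source types of coefficient * extent
total_kg = sum(coef * size for coef, size in sources.values())
print(f"annual TP load: {total_kg / 1000:.1f} t")
```

In practice the coefficients are calibrated per region from monitoring data, and a delivery ratio is often applied to account for losses between the source and the stream.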