Sample records for calculated timing resolution

  1. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB for high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound to within 6.5%. PMID:26083559
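
    The bound referenced here is the Cramér-Rao limit for estimating a pure time shift from N photon detection times drawn from a pulse-shape PDF f(t): var ≥ 1/(N ∫ f′(t)²/f(t) dt). A minimal numerical sketch follows, assuming an illustrative bi-exponential pulse smeared by Gaussian photodetector jitter in place of the paper's analytical optical-transport PDF; all parameter values are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # Illustrative stand-in for the paper's formalism: a bi-exponential
      # LYSO-like pulse smeared by Gaussian photodetector jitter. Values
      # below are assumptions, not the paper's fitted parameters.
      tau_r, tau_d = 0.09e-9, 40e-9     # scintillator rise/decay times [s]
      jitter_fwhm  = 120e-12            # single-photon time jitter [s]
      N            = 3000               # detected photons per 511 keV event

      t  = np.linspace(0.0, 200e-9, 400_000)
      dt = t[1] - t[0]
      f  = (np.exp(-t / tau_d) - np.exp(-t / tau_r)) / (tau_d - tau_r)

      # Smearing models the detector response and keeps the Fisher integral
      # finite (it diverges for an unsmeared bi-exponential).
      fs = gaussian_filter1d(f, (jitter_fwhm / 2.355) / dt)
      fs /= fs.sum() * dt               # renormalize the PDF

      # Fisher information per photon for a pure time shift: ∫ f'^2 / f dt
      dfs = np.gradient(fs, dt)
      I1  = np.sum(dfs**2 / np.clip(fs, 1e-30, None)) * dt

      # CRLB for one detector, doubled for a coincidence of two detectors
      ctr_fwhm = 2.355 * np.sqrt(2.0 / (N * I1))
      print(f"CRLB-limited CTR ≈ {ctr_fwhm * 1e12:.0f} ps FWHM")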

  2. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements

    NASA Astrophysics Data System (ADS)

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB for high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound to within 6.5%.

  3. The timing resolution of scintillation-detector systems: Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Choong, Woon-Seng

    2009-11-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that improving the timing resolution in PET can significantly reduce the noise variance in the reconstructed image, resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, improvement in the transit time spread results in substantial improvement in the timing resolution only if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.
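
    A toy version of the first-photoelectron estimator singled out above, assuming a bi-exponential rate function plus Gaussian transit-time spread in place of the Geant4-simulated one; the yields and time constants are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      # Stand-in for the Geant4-simulated rate function: bi-exponential
      # emission plus Gaussian PMT transit-time spread. Values are assumed.
      tau_r, tau_d = 0.5e-9, 40e-9      # rise/decay times [s]
      tts_sigma    = 100e-12            # transit-time spread (sigma) [s]
      n_pe, n_events = 2000, 5000       # mean photoelectron yield, events

      def sample_biexp(size):
          # The normalized PDF (exp(-t/tau_d) - exp(-t/tau_r))/(tau_d - tau_r)
          # is exactly the distribution of the sum of two exponential delays.
          return rng.exponential(tau_d, size) + rng.exponential(tau_r, size)

      first = np.empty(n_events)
      for i in range(n_events):
          n = rng.poisson(n_pe)                        # yield fluctuates
          times = sample_biexp(n) + rng.normal(0.0, tts_sigma, n)
          first[i] = times.min()                       # first-photoelectron time

      # The first-photon distribution is skewed, so 2.355*sigma is only a
      # rough FWHM; it still shows the scaling with yield, decay time, TTS.
      print(f"≈ {2.355 * first.std() * 1e12:.0f} ps FWHM (single detector)")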

  4. The timing resolution of scintillation-detector systems: Monte Carlo analysis.

    PubMed

    Choong, Woon-Seng

    2009-11-07

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that improving the timing resolution in PET can significantly reduce the noise variance in the reconstructed image, resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, improvement in the transit time spread results in substantial improvement in the timing resolution only if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.

  5. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram requires high pixel resolution, so a computer-generated cylindrical hologram (CGCH) demands a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 × 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 × 108,000 pixels), we employ a graphics processing unit (GPU). It took 4,406 hours to calculate this high-resolution CGCH on a Xeon 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, a GPU delivers maximum performance on 2-dimensional and streaming data. Recently, GPUs have been utilized for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became programmable with the Cg programming language, and the subsequent GeForce 8 series supports CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is quoted as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous work, which used a CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
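
    The per-pixel arithmetic that makes this workload GPU-friendly is the superposition of one quadratic-phase (Fresnel zone) fringe per object point. A small NumPy sketch of that inner loop on a toy grid follows; it is a simplified Fresnel model, not the authors' look-up-table or Cg/CUDA kernel, and the grid size and object points are hypothetical.

      import numpy as np

      # Superposition of one Fresnel zone-plate fringe per object point --
      # the per-pixel arithmetic that maps naturally onto GPU threads.
      wavelength = 633e-9               # [m]
      pitch      = 1e-6                 # hologram pixel pitch [m]
      nx = ny    = 1024                 # tiny stand-in for 912,000 x 108,000

      iy, ix = np.mgrid[:ny, :nx]
      x, y   = ix * pitch, iy * pitch

      points = [(300e-6, 400e-6, 0.05, 1.0),    # (x, y, z, amplitude)
                (600e-6, 500e-6, 0.06, 0.8)]

      H = np.zeros((ny, nx))
      for px, py, pz, amp in points:
          r2 = (x - px)**2 + (y - py)**2
          H += amp * np.cos(np.pi * r2 / (wavelength * pz))

      fringe = (H > 0).astype(np.uint8)         # binarize for printing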

  6. On the vertical resolution for near-nadir looking spaceborne rain radar

    NASA Astrophysics Data System (ADS)

    Kozu, Toshiaki

    A definition of radar resolution for an arbitrary direction is proposed and used to calculate the vertical resolution for a near-nadir looking spaceborne rain radar. Based on the calculation result, a scanning strategy is proposed that efficiently distributes the measurement time to each angle bin and thus increases the number of independent samples compared with simple linear scanning.

  7. Method of calculating tsunami travel times in the Andaman Sea region

    PubMed Central

    Visuthismajarn, Parichart; Tanavud, Charlchai; Robson, Mark G.

    2014-01-01

    A new model to calculate tsunami travel times in the Andaman Sea region has been developed. The model specifically provides more accurate travel time estimates for tsunamis propagating to Patong Beach on the west coast of Phuket, Thailand. More generally, the model provides better understanding of the influence of the accuracy and resolution of bathymetry data on the accuracy of travel time calculations. The dynamic model is based on solitary wave theory, and a lookup function is used to perform bilinear interpolation of bathymetry along the ray trajectory. The model was calibrated and verified using data from an echosounder record, tsunami photographs, satellite altimetry records, and eyewitness accounts of the tsunami on 26 December 2004. Time differences for 12 representative targets in the Andaman Sea and the Indian Ocean regions were calculated. The model demonstrated satisfactory time differences (<2 min/h), despite the use of low resolution bathymetry (ETOPO2v2). To improve accuracy, the dynamics of wave elevation and a velocity correction term must be considered, particularly for calculations in the nearshore region. PMID:25741129
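
    A minimal sketch of the core travel-time integration, assuming the long-wave/solitary speed c = sqrt(g(h + H)) and bilinear interpolation of gridded bathymetry along a prescribed ray; the grid, depths, and ray are hypothetical stand-ins for ETOPO2v2 data and computed trajectories.

      import numpy as np

      g = 9.81

      def bilinear(depth, xg, yg, x, y):
          # Bilinear interpolation of gridded water depth (metres).
          i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
          j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
          tx = (x - xg[i]) / (xg[i + 1] - xg[i])
          ty = (y - yg[j]) / (yg[j + 1] - yg[j])
          return ((1 - tx) * (1 - ty) * depth[j, i]
                  + tx * (1 - ty) * depth[j, i + 1]
                  + (1 - tx) * ty * depth[j + 1, i]
                  + tx * ty * depth[j + 1, i + 1])

      def travel_time(ray, depth, xg, yg, wave_height=0.0):
          # Integrate dt = ds / c along the ray, c = sqrt(g(h + H));
          # H = 0 recovers the plain long-wave speed.
          t = 0.0
          for (x0, y0), (x1, y1) in zip(ray[:-1], ray[1:]):
              ds = np.hypot(x1 - x0, y1 - y0)
              h = bilinear(depth, xg, yg, 0.5 * (x0 + x1), 0.5 * (y0 + y1))
              t += ds / np.sqrt(g * (h + wave_height))
          return t

      xg = yg = np.linspace(0.0, 200e3, 101)    # 2-km grid, flat toy basin
      depth = np.full((101, 101), 1000.0)
      print(f"{travel_time([(0, 0), (200e3, 200e3)], depth, xg, yg)/60:.0f} min")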

  8. Method of calculating tsunami travel times in the Andaman Sea region.

    PubMed

    Kietpawpan, Monte; Visuthismajarn, Parichart; Tanavud, Charlchai; Robson, Mark G

    2008-07-01

    A new model to calculate tsunami travel times in the Andaman Sea region has been developed. The model specifically provides more accurate travel time estimates for tsunamis propagating to Patong Beach on the west coast of Phuket, Thailand. More generally, the model provides better understanding of the influence of the accuracy and resolution of bathymetry data on the accuracy of travel time calculations. The dynamic model is based on solitary wave theory, and a lookup function is used to perform bilinear interpolation of bathymetry along the ray trajectory. The model was calibrated and verified using data from an echosounder record, tsunami photographs, satellite altimetry records, and eyewitness accounts of the tsunami on 26 December 2004. Time differences for 12 representative targets in the Andaman Sea and the Indian Ocean regions were calculated. The model demonstrated satisfactory time differences (<2 min/h), despite the use of low resolution bathymetry (ETOPO2v2). To improve accuracy, the dynamics of wave elevation and a velocity correction term must be considered, particularly for calculations in the nearshore region.

  9. Zone plate method for electronic holographic display using resolution redistribution technique.

    PubMed

    Takaki, Yasuhiro; Nakamura, Junya

    2011-07-18

    The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.
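
    The replacement of two Fourier transforms by additions of zone plates can be sketched as a look-up table of precomputed zone-plate patches, one per object depth, accumulated into the hologram at each point position; this omits the RR-specific derivation, and all geometry below is hypothetical.

      import numpy as np

      # Precompute one zone plate per object depth (the look-up table),
      # then render a hologram by adding shifted copies.
      wavelength, pitch, n = 633e-9, 2e-6, 512
      half = 64                                 # zone-plate patch half-size

      def zone_plate(z):
          r = np.arange(-half, half) * pitch
          r2 = r[:, None]**2 + r[None, :]**2
          return np.cos(np.pi * r2 / (wavelength * z))

      lut = {z: zone_plate(z) for z in (0.05, 0.10)}   # keyed by depth [m]

      holo = np.zeros((n, n))
      for cx, cy, z, amp in [(200, 150, 0.05, 1.0), (350, 300, 0.10, 0.7)]:
          holo[cy - half:cy + half, cx - half:cx + half] += amp * lut[z]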

  10. Simulation and optimization of a dc SQUID with finite capacitance

    NASA Astrophysics Data System (ADS)

    de Waal, V. J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculation of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now, noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, a good optimization must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for products of inductance and temperature of LT=1.2 and 5 nH K. Within the range of β and βc between 1 and 2, which is optimal, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  11. Application of spatially resolved high resolution crystal spectrometry to inertial confinement fusion plasmas.

    PubMed

    Hill, K W; Bitter, M; Delgado-Aparacio, L; Pablant, N A; Beiersdorfer, P; Schneider, M; Widmann, K; Sanchez del Rio, M; Zhang, L

    2012-10-01

    High resolution (λ/Δλ ∼ 10 000) 1D imaging x-ray spectroscopy using a spherically bent crystal and a 2D hybrid pixel array detector is used worldwide for Doppler measurements of ion-temperature and plasma flow-velocity profiles in magnetic confinement fusion plasmas. Meter sized plasmas are diagnosed with cm spatial resolution and 10 ms time resolution. This concept can also be used as a diagnostic of small sources, such as inertial confinement fusion plasmas and targets on x-ray light source beam lines, with spatial resolution of micrometers, as demonstrated by laboratory experiments using a 250-μm ⁵⁵Fe source, and by ray-tracing calculations. Throughput calculations agree with measurements, and predict detector counts in the range 10⁻⁸-10⁻⁶ times source x-rays, depending on crystal reflectivity and spectrometer geometry. Results of the lab demonstrations, application of the technique to the National Ignition Facility (NIF), and predictions of performance on NIF will be presented.

  12. Fast, large-scale hologram calculation in wavelet domain

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  13. On the dynamic readout characteristic of nonlinear super-resolution optical storage

    NASA Astrophysics Data System (ADS)

    Wei, Jingsong

    2013-03-01

    Researchers have developed nonlinear super-resolution optical storage over the past twenty years. However, several concerns remain, including (1) the presence of a readout threshold power; (2) the increase of threshold power as the mark size is reduced; and (3) the initial increase and subsequent decrease of the carrier-to-noise ratio (CNR) as the readout laser power or laser irradiation time increases. The present work calculates and analyzes the super-resolution spot formed by thin-film masks and the readout threshold power characteristic, using a derived formula based on the nonlinear saturable absorption characteristic and the threshold of structural change. The theoretical calculations and experimental data address these concerns regarding the dynamic readout threshold characteristic and the dependence of CNR on laser power and irradiation time. A near-field optical spot scanning experiment further verifies the super-resolution spot formation produced by the nonlinear thin-film masks.

  14. Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2014-03-01

    We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 times with an increase in the file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves resolution of the patient graphic without compromising the real-time performance of the DTS and also gives a smoother graphic display for better visualization of the dose distribution.
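
    The reformatting described above amounts to storing the mesh in indexed form, a shared vertex list plus a faces array of vertex indices, so dose is evaluated once per unique vertex rather than once per triangle corner. A toy illustration with a hypothetical inverse-square dose model:

      import numpy as np

      # Indexed mesh: shared vertices stored once, faces index into them.
      vertices = np.array([[0., 0., 0.], [1., 0., 0.],
                           [0., 1., 0.], [1., 1., 0.]])
      faces    = np.array([[0, 1, 2], [1, 3, 2]])   # 2 triangles, 4 vertices

      def dose_rate(p, source=np.array([0.5, 0.5, -1.0]), k=1.0):
          # Toy inverse-square model standing in for the exposure-parameter
          # calculation the DTS performs on the real beam geometry.
          return k / np.sum((p - source) ** 2)

      # One calculation per unique vertex (4), not per triangle corner (6):
      vertex_dose = np.array([dose_rate(v) for v in vertices])

      # Per-face display values come free by indexing:
      face_dose = vertex_dose[faces].mean(axis=1)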

  15. On a fast calculation of structure factors at a subatomic resolution.

    PubMed

    Afonine, P V; Urzhumtsev, A

    2004-01-01

    In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very expensive in computing time when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computer efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
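
    The trade-off described, direct summation versus an FFT of a sampled density, can be illustrated in one dimension, where the grid step visibly controls the accuracy of the FFT route (a toy point-atom model with constant form factors, not the paper's treatment):

      import numpy as np

      rng = np.random.default_rng(0)
      xj  = rng.random(50)               # fractional coordinates of 50 "atoms"
      fj  = np.ones(50)                  # constant form factors (toy model)

      H = np.arange(32)                  # Miller indices h = 0..31

      # Direct formula: F(h) = sum_j f_j exp(-2*pi*i*h*x_j)
      F_direct = (fj * np.exp(-2j * np.pi * np.outer(H, xj))).sum(axis=1)

      # FFT route: accumulate atoms onto a grid, then transform. The grid
      # step controls accuracy: coarser grids give larger phase errors.
      for ngrid in (128, 1024, 8192):
          rho = np.zeros(ngrid)
          np.add.at(rho, np.round(xj * ngrid).astype(int) % ngrid, fj)
          F_fft = np.fft.fft(rho)[:32]
          print(f"grid {ngrid:5d}: max |dF| = {np.abs(F_fft - F_direct).max():.3f}")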

  16. Conversational high resolution mass spectrographic data reduction

    NASA Technical Reports Server (NTRS)

    Romiez, M. P.

    1973-01-01

    A FORTRAN 4 program is described which reduces the data obtained from a high resolution mass spectrograph. The program (1) calculates an accurate mass for each line on the photoplate, and (2) assigns elemental compositions to each accurate mass. The program is intended for use in a time-shared computing environment and makes use of the conversational aspects of time-sharing operating systems.
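
    The second task, assigning elemental compositions to an accurate mass, is a bounded search over element counts within a mass tolerance. A sketch of that idea in Python (CHNO only; the bounds and tolerance are arbitrary choices, not the original program's):

      from itertools import product

      # Monoisotopic masses in u.
      MASS = {'C': 12.0, 'H': 1.0078250319,
              'N': 14.0030740052, 'O': 15.9949146221}

      def compositions(target, tol=0.003,
                       max_c=30, max_h=60, max_n=5, max_o=10):
          # Exhaustive search over CcHhNnOo within the mass tolerance.
          hits = []
          for c, h, n, o in product(range(max_c + 1), range(max_h + 1),
                                    range(max_n + 1), range(max_o + 1)):
              m = (c * MASS['C'] + h * MASS['H']
                   + n * MASS['N'] + o * MASS['O'])
              if abs(m - target) <= tol:
                  hits.append((f"C{c}H{h}N{n}O{o}", m))
          return sorted(hits, key=lambda t: abs(t[1] - target))

      print(compositions(180.0634)[:3])   # glucose C6H12O6 = 180.0634 u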

  17. Influence of imaging resolution on color fidelity in digital archiving.

    PubMed

    Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari

    2015-11-01

    Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
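
    The abstract does not state which CIELAB difference formula was used; the simplest variant, CIE76, is the Euclidean distance in L*a*b* space:

      import numpy as np

      def delta_e76(lab1, lab2):
          # CIE76 color difference: Euclidean distance in L*a*b*.
          lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
          return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

      reference = [[52.0, -24.1, 13.5]]    # hypothetical IT8 patch, L*a*b*
      scanned   = [[50.7, -23.2, 15.0]]
      print(delta_e76(reference, scanned)) # ~2.2, near the noticeability limit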

  18. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
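
    A toy version of the kernel-density step, assuming an exponential plastic-scintillator pulse and taking the density peak as the event-time estimate; the Tikhonov signal-recovery stage and the paper's closed-form resolution formulae are omitted.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)

      # Peak of a kernel-density estimate of registered photon times as the
      # event-time estimator. Pulse shape, photon count, and bandwidth are
      # assumptions, not the paper's values.
      def estimate_event_time(n_photons=100):
          times = rng.exponential(1.8, n_photons)    # [ns], plastic-like decay
          kde = gaussian_kde(times, bw_method=0.3)
          grid = np.linspace(0.0, 5.0, 400)
          return grid[np.argmax(kde(grid))]

      estimates = np.array([estimate_event_time() for _ in range(500)])
      print(f"≈ {2.355 * estimates.std() * 1e3:.0f} ps FWHM")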

  19. High-sensitivity Leak-testing Method with High-Resolution Integration Technique

    NASA Astrophysics Data System (ADS)

    Fujiyoshi, Motohiro; Nonomura, Yutaka; Senda, Hidemi

    A high-resolution leak-testing method named the HR (High-Resolution) Integration Technique has been developed for MEMS (Micro Electro Mechanical Systems) sensors such as a vibrating angular-rate sensor housed in a vacuum package. The procedure of the method to obtain high leak-rate resolution was as follows. A package filled with helium gas was kept in a small accumulation chamber to accumulate helium gas leaking from the package. After the accumulation, the accumulated helium gas was introduced into a mass spectrometer in a short period of time, and the flux of the helium gas was measured by the mass spectrometer as a transient phenomenon. The leak-rate of the package was calculated from the detected transient waveform of the mass spectrometer and the accumulation time of the helium gas in the accumulation chamber. Because the density of the helium gas in the vacuum chamber increased and the accumulated helium gas was measured in a very short period of time with the mass spectrometer, the peak strength of the transient waveform became high and the signal-to-noise ratio was much improved. The detectable leak-rate resolution of the technique reached 1×10⁻¹⁵ Pa·m³/s. This resolution is 10³ times better than that of the conventional helium vacuum integration method. The accuracy of the measuring system was verified with a standard helium gas leak source. The theoretical calculation based on the leak-rate of the source matched the experimental results to within 2%.

  20. A study of timing properties of Silicon Photomultipliers

    NASA Astrophysics Data System (ADS)

    Avella, Paola; De Santo, Antonella; Lohstroh, Annika; Sajjad, Muhammad T.; Sellin, Paul J.

    2012-12-01

    Silicon Photomultipliers (SiPMs) are solid-state pixelated photodetectors. Lately these sensors have been investigated for Time of Flight Positron Emission Tomography (ToF-PET) applications, where a very good coincidence time resolution of the order of hundreds of picoseconds implies spatial resolution of the order of cm in the image reconstruction. The very fast rise time typical of the avalanche discharge improves the time resolution, but can be limited by the readout electronics and the technology used to construct the device. In this work the parameters of the equivalent circuit of the device that directly affect the pulse shape, namely the quenching resistance and capacitance and the diode and parasitic capacitances, were calculated. The mean rise time obtained with different preamplifiers was also measured.

  1. A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall

    NASA Astrophysics Data System (ADS)

    Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian

    2018-02-01

    Distributions of rainfall with time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters derived from radar observations at Melbourne (Australia). Orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observations. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
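
    The heart of a multiplicative random cascade is easy to sketch: refine the field by factors of two and multiply by unit-mean random weights at each level, so coarse-scale accumulations are preserved on average. This is a generic cascade, not STEPS itself; advection, the orographic correction, and the radar-calibrated parameters are omitted.

      import numpy as np

      rng = np.random.default_rng(3)

      def cascade(coarse_mean=5.0, levels=7, sigma=0.6):
          # Start from one coarse value; at each level, split every cell
          # 2x2 and multiply by independent lognormal weights with E[w]=1.
          field = np.array([[coarse_mean]])
          for _ in range(levels):
              field = np.kron(field, np.ones((2, 2)))          # refine 2x
              w = rng.lognormal(-0.5 * sigma**2, sigma, field.shape)
              field *= w
          return field                                          # 128 x 128

      rain = cascade()
      print(rain.shape, rain.mean())   # mean stays near the coarse value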

  2. Real-Time Nanoscopy by Using Blinking Enhanced Quantum Dots

    PubMed Central

    Watanabe, Tomonobu M.; Fukui, Shingo; Jin, Takashi; Fujii, Fumihiko; Yanagida, Toshio

    2010-01-01

    Superresolution optical microscopy (nanoscopy) is of current interest in many biological fields. Superresolution optical fluctuation imaging, which utilizes higher-order cumulants of fluorescence temporal fluctuations, is an excellent method for nanoscopy, as it requires neither complicated optics nor illumination. However, it does need an impractical number of images for real-time observation. Here, we achieved real-time nanoscopy by modifying superresolution optical fluctuation imaging and enhancing the fluctuation of quantum dots. Our developed quantum dots show stronger blinking than commercially available ones. The fluctuation of the blinking improved the resolution when using a variance calculation for each pixel instead of a cumulant calculation. This enabled us to obtain microscopic images with 90-nm and 80-ms spatial-temporal resolution by using a conventional fluorescence microscope without any additional optics or devices. PMID:20923631
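
    The modification described, a per-pixel temporal variance in place of a higher-order cumulant, is a one-line operation on the image stack. A synthetic two-emitter sketch (all numbers hypothetical):

      import numpy as np

      rng = np.random.default_rng(4)

      def psf(x0, y0, sigma=2.0, n=32):
          # Gaussian stand-in for the microscope point-spread function.
          y, x = np.mgrid[:n, :n]
          return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

      frames = 400
      movie = np.zeros((frames, 32, 32))
      for x0, y0 in [(13.0, 16.0), (19.0, 16.0)]:      # two close emitters
          on = rng.random(frames) < 0.3                # independent blinking
          movie += on[:, None, None] * psf(x0, y0)
      movie += rng.normal(0, 0.05, movie.shape)        # camera noise

      mean_img = movie.mean(axis=0)   # conventional, diffraction-limited
      sofi_img = movie.var(axis=0)    # variance image: effective PSF is
                                      # squared, so emitters appear sharper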

  3. Investigation of a Multi-Anode Microchannel Plate PMT for Time-of-Flight PET

    NASA Astrophysics Data System (ADS)

    Choong, Woon-Seng

    2010-10-01

    We report on an investigation of a multi-anode microchannel plate PMT for time-of-flight PET detector modules. The primary advantages of an MCP lie in its excellent timing properties (fast rise time and low transit time spread), compact size, and reasonably large active area, thus making it a good candidate for TOF applications. In addition, the anode can be segmented into an array of collection electrodes with fine pitch to attain good position sensitivity. In this paper, we investigate using the Photonis Planacon MCP-PMT with a pore size of 10 μm to construct a PET detector module, specifically for time-of-flight applications. We measure the single electron response by exciting the Planacon with a pulsed laser diode. We also measure the performance of the Planacon as a PET detector by coupling a 4 mm×4 mm×10 mm LSO crystal to individual pixels to study its gain uniformity, energy resolution, and timing resolution. The rise time of the Planacon is 440 ps with a pulse duration of about 1 ns. A transit time spread of 120 ps FWHM is achieved. The gain is fairly uniform across the central region of the Planacon, but drops off by as much as a factor of 2.5 around the edges. The energy resolution is fairly uniform across the Planacon with an average value of 18.6 ± 0.7% FWHM. While an average timing resolution of 252 ± 7 ps FWHM is achieved in the central region of the Planacon, it degrades to 280 ± 9 ps FWHM for edge pixels and 316 ± 15 ps FWHM for corner pixels. We compare the results with measurements performed with a fast timing conventional PMT (Hamamatsu R9800). We find that the R9800, which has significantly higher PDE, has a better timing resolution than the Planacon. Furthermore, we perform detector simulations to calculate the improvement that can be achieved with a higher-PDE Planacon. The calculation shows that the Planacon can achieve significantly better timing resolution if it can attain the same PDE as the R9800, while only a 30% improvement is needed to yield a timing resolution similar to the R9800.

  4. Development of a CSP plant energy yield calculation tool applying predictive models to analyze plant performance sensitivities

    NASA Astrophysics Data System (ADS)

    Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons

    2017-06-01

    At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing the large number of plant sensitivities required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim increases the speed of energy yield calculations by a factor of 35 or more and has automated the simulation run of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviate by less than ±1.5% from the thermodynamic simulation in EBSILON, and the model effectively identifies the optimal range of the main design parameters for further, more specific analysis.

  5. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  6. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka

    2009-06-01

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m⁻² was sufficient for ensuring that the temperature rise in the eyes and brain were less than 1 °C in the whole frequency range.

  7. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.

  8. Photoionization in the time and frequency domain

    NASA Astrophysics Data System (ADS)

    Isinger, M.; Squibb, R. J.; Busto, D.; Zhong, S.; Harth, A.; Kroon, D.; Nandi, S.; Arnold, C. L.; Miranda, M.; Dahlström, J. M.; Lindroth, E.; Feifel, R.; Gisselbrecht, M.; L'Huillier, A.

    2017-11-01

    Ultrafast processes in matter, such as the electron emission after light absorption, can now be studied using ultrashort light pulses of attosecond duration (10⁻¹⁸ seconds) in the extreme ultraviolet spectral range. The lack of spectral resolution due to the use of short light pulses has raised issues in the interpretation of the experimental results and the comparison with theoretical calculations. We determine photoionization time delays in neon atoms over a 40-electron volt energy range with an interferometric technique combining high temporal and spectral resolution. We spectrally disentangle direct ionization from ionization with shake-up, in which a second electron is left in an excited state, and obtain excellent agreement with theoretical calculations, thereby solving a puzzle raised by 7-year-old measurements.

  9. High-Resolution Mapping of Thermal History in Polymer Nanocomposites: Gold Nanorods as Microscale Temperature Sensors.

    PubMed

    Kennedy, W Joshua; Slinker, Keith A; Volk, Brent L; Koerner, Hilmar; Godar, Trenton J; Ehlert, Gregory J; Baur, Jeffery W

    2015-12-23

    A technique is reported for measuring and mapping the maximum internal temperature of a structural epoxy resin with high spatial resolution via the optically detected shape transformation of embedded gold nanorods (AuNRs). Spatially resolved absorption spectra of the nanocomposites are used to determine the frequencies of surface plasmon resonances. From these frequencies the AuNR aspect ratio is calculated using a new analytical approximation for the Mie-Gans scattering theory, which takes into account coincident changes in the local dielectric. Despite changes in the chemical environment, the calculated aspect ratio of the embedded nanorods is found to decrease over time to a steady-state value that depends linearly on the temperature over the range of 100-200 °C. Thus, the optical absorption can be used to determine the maximum temperature experienced at a particular location when exposure times exceed the temperature-dependent relaxation time. The usefulness of this approach is demonstrated by mapping the temperature of an internally heated structural epoxy resin with 10 μm lateral spatial resolution.

  10. Discovery of Finely Structured Dynamic Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70 percent of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.
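
    The substructure test reduces to comparing, for each low-resolution pixel, the standard deviation of its contributing high-resolution pixels against the value expected from noise alone. A sketch assuming a Poisson-plus-readout noise model, a 5x binning factor, and a 2-sigma margin, none of which are taken from the paper:

      import numpy as np

      def substructure_mask(hi, b=5, read_noise=2.0):
          # Group high-resolution pixels into b x b blocks (one block per
          # low-resolution pixel) and flag blocks whose scatter exceeds
          # what photon + readout noise alone can explain.
          ny, nx = hi.shape[0] // b, hi.shape[1] // b
          blocks = hi[:ny * b, :nx * b].reshape(ny, b, nx, b).swapaxes(1, 2)
          measured = blocks.std(axis=(2, 3))
          expected = np.sqrt(blocks.mean(axis=(2, 3)) + read_noise**2)
          return measured > 2.0 * expected

      hi = np.random.default_rng(5).poisson(100.0, (500, 500)).astype(float)
      print(substructure_mask(hi).mean())   # ~0 for a pure-noise image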

  11. DISCOVERY OF FINELY STRUCTURED DYNAMIC SOLAR CORONA OBSERVED IN THE Hi-C TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Savage, Sabrina

    In the Summer of 2012, the High-resolution Coronal Imager (Hi-C) flew on board a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70% of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.

  12. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
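
    The solver pattern described, factorize the constant banded-sparse matrix once (after RCM reordering to compress its bandwidth) and reuse the factors at every time step, looks like this in SciPy; a generic 2-D Laplacian stands in for the Newmark-Beta-FDTD operator.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.csgraph import reverse_cuthill_mckee
      from scipy.sparse.linalg import splu

      # Generic constant banded-sparse system (2-D Laplacian + identity).
      n = 64
      lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kronsum(lap1d, lap1d) + sp.eye(n * n)).tocsr()

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # RCM ordering
      Ap = A[perm][:, perm].tocsc()
      lu = splu(Ap)                 # LU once, before the time loop starts

      rng = np.random.default_rng(6)
      for _ in range(3):            # reuse the factors every time step
          b = rng.normal(size=n * n)
          x = np.empty_like(b)
          x[perm] = lu.solve(b[perm])   # solve permuted system, un-permute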

  13. Variation Among Internet Based Calculators in Predicting Spontaneous Resolution of Vesicoureteral Reflux

    PubMed Central

    Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.

    2010-01-01

    Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550

  14. Precisely and Accurately Inferring Single-Molecule Rate Constants

    PubMed Central

    Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.

    2017-01-01

    The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restrict the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280

  15. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  16. Modeling of the energy resolution of a 1 meter and a 3 meter time of flight positron annihilation induced Auger electron spectrometers

    NASA Astrophysics Data System (ADS)

    Fairchild, A.; Chirayath, V.; Gladen, R.; McDonald, A.; Lim, Z.; Chrysler, M.; Koymen, A.; Weiss, A.

    Simion 8.1® simulations were used to determine the energy resolution of a 1 meter long Time of Flight Positron annihilation induced Auger Electron Spectrometer (TOF-PAES). The spectrometer consists of (1) a magnetic gradient section used to parallelize the electrons leaving the sample along the beam axis, (2) an electric-field-free time-of-flight tube, and (3) a detection section with a set of ExB plates that deflect electrons exiting the TOF tube into a Micro-Channel Plate (MCP). Simulations of the time of flight distribution of electrons emitted according to a known secondary electron emission distribution, for various sample biases, were compared to experimental energy calibration peaks and found to be in excellent agreement. The TOF spectrum at the highest sample bias was used to determine the timing resolution function describing the timing spread due to the electronics. Simulations were then performed to calculate the energy resolution at various electron energies in order to deconvolute the combined influence of the magnetic field parallelizer, the timing resolution, and the voltage gradient at the ExB plates. The energy resolution of the 1 m TOF-PAES was compared to a newly constructed 3 meter long system. The results were used to optimize the geometry and the potentials of the ExB plates for obtaining the best energy resolution. This work was supported by NSF Grants No. DMR 1508719 and No. DMR 1338130.

  17. Electronic state spectroscopy by high-resolution vacuum ultraviolet photoabsorption, He(I) photoelectron spectroscopy and ab initio calculations of ethyl acetate

    NASA Astrophysics Data System (ADS)

    Śmialek, Malgorzata A.; Łabuda, Marta; Guthmuller, Julien; Hubin-Franskin, Marie-Jeanne; Delwiche, Jacques; Hoffmann, Søren Vrønning; Jones, Nykola C.; Mason, Nigel J.; Limão-Vieira, Paulo

    2016-06-01

    The high-resolution vacuum ultraviolet photoabsorption spectrum of ethyl acetate, C4H8O2, is presented over the energy range 4.5-10.7 eV (275.5-116.0 nm). Valence and Rydberg transitions and their associated vibronic series observed in the photoabsorption spectrum, have been assigned in accordance with new ab initio calculations of the vertical excitation energies and oscillator strengths. Also, the photoabsorption cross sections have been used to calculate the photolysis lifetime of this ester in the upper stratosphere (20-50 km). Calculations have also been carried out to determine the ionisation energies and fine structure of the lowest ionic state of ethyl acetate and are compared with a newly recorded photoelectron spectrum (from 9.5 to 16.7 eV). Vibrational structure is observed in the first photoelectron band of this molecule for the first time.

  18. A Submillimeter Resolution PET Prototype Evaluated With an 18F Inkjet Printed Phantom

    NASA Astrophysics Data System (ADS)

    Schneider, Florian R.; Hohberg, Melanie; Mann, Alexander B.; Paul, Stephan; Ziegler, Sibylle I.

    2015-10-01

    This work presents a submillimeter resolution PET (Positron Emission Tomography) scanner prototype based on SiPM/MPPC arrays (Silicon Photomultiplier/Multi Pixel Photon Counter). A 1 × 1 × 20 mm3 LYSO (Lutetium-Yttrium-Oxyorthosilicate) scintillator crystal is coupled one-to-one onto each active area. Two detector modules facing each other at a distance of 10.0 cm have been set up, with in total 64 channels that are digitized by SADCs (Sampling Analog to Digital Converters) with 80 MHz, 10 bit resolution and FPGA (Field Programmable Gate Array) based extraction of energy and time information. Since standard phantoms are not sufficient for testing submillimeter resolution, at which positron range is an issue, an 18F inkjet-printed phantom has been used to explore the limit in spatial resolution. The phantom was successfully reconstructed with an iterative MLEM (Maximum Likelihood Expectation Maximization) algorithm and an analytically calculated system matrix based on the DRF (Detector Response Function) model. The system yields a coincidence time resolution of 4.8 ns FWHM, an energy resolution of 20%-30% FWHM and a spatial resolution of 0.8 mm.
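    For context, a minimal sketch of the standard MLEM update used in such iterative reconstructions, with a small dense matrix standing in for the system matrix; this is a generic illustration, not the authors' DRF-based implementation.

```python
# Generic MLEM: y = measured counts, A = system matrix, x = image estimate.
import numpy as np

def mlem(A, y, n_iter=50):
    x = np.ones(A.shape[1])                        # uniform initial image
    sens = A.sum(axis=0)                           # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = y / np.maximum(proj, 1e-12)        # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy problem: 3 detector bins, 2 image elements.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
y = np.array([10.0, 15.0, 20.0])
print(mlem(A, y))
```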

  19. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real time by acquiring exposure parameters and imaging-system geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determining which elements on the surface of the patient 3D graphic intersect the beam, and calculating the dose for these elements in real time, demand fast computation. Reducing the size of the elements results in more computational load on the processor, so a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS in calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphic. Results show a tremendous improvement in speed using the GPU: while an increase in the spatial resolution of the patient graphic slowed the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin dose to physicians while performing interventional procedures. PMID:24027616

  1. Curved crystal x-ray optics for monochromatic imaging with a clinical source.

    PubMed

    Bingölbali, Ayhan; MacDonald, C A

    2009-04-01

    Monochromatic x-ray imaging has been shown to increase contrast and reduce dose relative to conventional broadband imaging. However, clinical sources with very narrow energy bandwidth tend to have limited intensity and field of view. In this study, focused fan-beam monochromatic radiation was obtained using doubly curved monochromator crystals. While these optics have been in use for microanalysis at synchrotron facilities for some time, this work is the first investigation of the potential application of curved crystal optics to clinical sources for medical imaging. The optics could be used with a variety of clinical sources for monochromatic slot-scan imaging. The intensity was assessed, and the resolution of the focused beam was measured using a knife-edge technique. A simulation model was developed, and comparisons with the measured resolution were performed to verify the simulation's ability to predict resolution for different conventional sources. A simple geometrical calculation was also developed. The measured, simulated, and calculated resolutions agreed well. Adequate resolution and intensity for mammography were predicted for appropriate source/optic combinations.
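    A hedged sketch of the knife-edge technique mentioned above: the detected intensity versus knife position traces an edge-spread function (ESF), its derivative is the line-spread function (LSF), and the LSF's full width at half maximum is taken as the beam resolution. The Gaussian beam width below is an invented test value.

```python
# Knife-edge resolution measurement on synthetic data: differentiate the
# edge-spread function and report the FWHM of the resulting line-spread
# function. The 0.10 mm beam sigma is a placeholder, not a measured value.
import numpy as np
from scipy.special import erf

x = np.linspace(-1.0, 1.0, 401)            # knife position, mm
sigma_true = 0.10                          # assumed Gaussian beam width, mm
esf = 0.5 * (1.0 + erf(x / (sigma_true * np.sqrt(2.0))))

lsf = np.gradient(esf, x)                  # LSF = d(ESF)/dx
above = x[lsf >= 0.5 * lsf.max()]
fwhm = above[-1] - above[0]                # full width at half maximum
print(f"FWHM ~ {fwhm:.3f} mm (expect {2.355 * sigma_true:.3f} mm)")
```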

  2. The Substructure of the Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore calculate how the intensity scales from low-resolution (AIA) pixels to high-resolution (Hi-C) pixels for both dynamic events and "background" emission (i.e., the steady emission over the 5 minutes of data acquisition). We find no evidence of substructure in the background corona; the intensity scales smoothly from low-resolution to high-resolution Hi-C pixels. In transient events, however, the intensity observed with Hi-C is, on average, 2.6 times larger than observed with AIA. This increase in intensity suggests that AIA is not resolving these events, pointing to a finely structured dynamic corona embedded in a smoothly varying background.
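    A toy version of this intensity-scaling check, rebinning a fine-pixel image to a coarser pixel scale by block summation; the 6× block factor and Poisson test image are assumptions of the sketch, not mission values.

```python
# Rebin a high-resolution image to coarse pixels and compare intensities.
import numpy as np

def rebin(img, block):
    """Sum block x block groups of fine pixels into one coarse pixel."""
    h, w = img.shape
    img = img[: h - h % block, : w - w % block]
    return img.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

fine = np.random.poisson(100.0, (600, 600)).astype(float)   # synthetic image
coarse = rebin(fine, 6)
print(fine[:6, :6].sum(), coarse[0, 0])   # equal by construction
```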

  3. Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer

    NASA Astrophysics Data System (ADS)

    Granroth, G. E.; Hahn, S. E.

    2015-01-01

    Monte Carlo ray tracing simulations of direct geometry spectrometers have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine resolution Fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time-of-flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split into two pieces. First, the incident beamline up to the sample is simulated for 1 × 10^11 neutron packets (4 days on 30 cores). This provides a virtual source for the back end that includes the resolution sample and monitor components. Next, a series of detector and energy pixels is computed in parallel; it takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal and to characterize the ellipsoid in various ways. Though this tool is still under development for experiment planning, we have successfully used it to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half-intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.

  4. Sensitivity of LES results from turbine rim seals to changes in grid resolution and sector size

    NASA Astrophysics Data System (ADS)

    O'Mahoney, T.; Hills, N.; Chew, J.

    2012-07-01

    Large-Eddy Simulations (LES) were carried out for a turbine rim seal, and the sensitivity of the results to changes in grid resolution and the size of the computational domain is investigated. Ingestion of hot annulus gas into the rotor-stator cavity predicted by LES is compared against experiments and Unsteady Reynolds-Averaged Navier-Stokes (URANS) calculations. The LES calculations show greater ingestion than the URANS calculation and better agreement with experiments. Increased grid resolution gives a small improvement in ingestion predictions, whereas increasing the sector model size has little effect on the results. The contrast between the different CFD models is most stark in the inner cavity, where the URANS shows almost no ingestion. Particular attention is also paid to the presence of low frequency oscillations in the disc cavity. The URANS calculations show such low frequency oscillations at different frequencies than the LES, and the oscillations also take a very long time to develop in the LES. The results show that the difficult problem of estimating ingestion through rim seals could be overcome by using LES, but the computational requirements remain restrictive.

  5. High Resolution Integrated Hohlraum-Capsule Simulations for Virtual NIF Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Jones, O. S.; Marinak, M. M.; Cerjan, C. J.; Clark, D. S.; Edwards, M. J.; Haan, S. W.; Langer, S. H.; Salmonson, J. D.

    2009-11-01

    We have undertaken a virtual campaign to assess the viability of the sequence of NIF experiments planned for 2010 that will experimentally tune the shock timing, symmetry, and ablator thickness of a cryogenic ignition capsule prior to the first ignition attempt. The virtual campaign consists of two teams. The "red team" creates realistic simulated diagnostic data for a given experiment from the output of a detailed radiation hydrodynamics calculation that has physics models that have been altered in a way that is consistent with probable physics uncertainties. The "blue team" executes a series of virtual experiments and interprets the simulated diagnostic data from those virtual experiments. To support this effort we have developed a capability to do very high spatial resolution integrated hohlraum-capsule simulations using the Hydra code. Surface perturbations for all ablator layer surfaces and the DT ice layer are calculated explicitly through mode 30. The effects of the fill tube, cracks in the ice layer, and defects in the ablator are included in models extracted from higher resolution calculations. Very high wave number mix is included through a mix model. We will show results from these calculations in the context of the ongoing virtual campaign.

  6. Application of a chromatography model with linear gradient elution experimental data to the rapid scale-up in ion-exchange process chromatography of proteins.

    PubMed

    Ishihara, Takashi; Kadoya, Toshihiko; Yamamoto, Shuichi

    2007-08-24

    We applied the model described in our previous paper to rapid scale-up in the ion-exchange chromatography of proteins, in which linear flow velocity, column length and gradient slope were changed. We carried out linear gradient elution experiments and obtained data for the peak salt concentration and peak width. From these data, the plate height (HETP) was calculated as a function of the mobile phase velocity, and an iso-resolution curve (the separation time-elution volume relationship for the same resolution) was calculated. The scale-up chromatography conditions were determined from the iso-resolution curve. The scale-up of the linear gradient elution from 5 mL to 100 mL and 2.5 L column sizes was performed both for the separation of beta-lactoglobulin A and beta-lactoglobulin B by anion-exchange chromatography and for the purification of a recombinant protein by cation-exchange chromatography. Resolution, recovery and purity were examined in order to verify the proposed method.

  7. Timing Characterization of Helium-4 Fast Neutron Detector with EJ-309 Organic Liquid Scintillator

    NASA Astrophysics Data System (ADS)

    Liang, Yinong; Zhu, Ting; Enqvist, Andreas

    2018-01-01

    Recently, Helium-4 gas fast neutron scintillation detectors have been used in time-sensitive measurements, such as time-of-flight and multiplicity counting. In this paper, a set of time-aligned signals was acquired in a coincidence measurement using the Helium-4 gas detectors and EJ-309 liquid scintillators. The high-speed digitizer system implements a trigger moving average window (MAW) unit combined with its constant fraction discriminator (CFD) feature. It can calculate a "time offset" to the timestamp value to obtain a higher resolution timestamp (up to 50 ps), which is better than the digitizer's native time resolution (4 ns) [1]. The digitized waveforms were saved to the computer hard drive and post-processed with digital analysis code to determine the difference of their arrival times. The full-width at half-maximum (FWHM) of a Gaussian fit was used to quantify the resolution. For the cascade decay of Cobalt-60 (1.17 and 1.33 MeV), the first version of the Helium-4 detector, with two Hamamatsu R580 photomultipliers (PMTs) installed at either end of the cylindrical gas chamber (20 cm in length and 4.4 cm in diameter), has a time resolution of about 3.139 ns FWHM. With improved knowledge of the timing performance, the Helium-4 scintillation detectors are excellent for neutron energy spectrometry applications requiring high temporal and energy resolutions.
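    A sketch of the timing-resolution analysis described above: histogram the arrival-time differences, fit a Gaussian, and report FWHM = 2.355·σ. The synthetic data are generated to reproduce the quoted ~3.14 ns figure.

```python
# Gaussian fit of a coincidence time-difference histogram; synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

dt = np.random.normal(0.0, 3.139 / 2.355, 100_000)    # time differences, ns
counts, edges = np.histogram(dt, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
(a, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 0.0, 1.0])
print(f"FWHM = {2.355 * abs(sigma):.3f} ns")
```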

  8. Effect of DEM resolution and comparison between different weighting factors for hydrologic connectivity index

    NASA Astrophysics Data System (ADS)

    Cantreul, Vincent; Cavalli, Marco; Degré, Aurore

    2016-04-01

    The emerging concept of hydrological connectivity is difficult to quantify. Several indices have been proposed; the most cited is Borselli's, which mainly uses the DEM as input. The pixel size may strongly impact the result of the calculation, and this has not yet been studied in silty areas. Another important aspect is the choice of the weighting factor, which strongly influences the index value. The objective of this poster is therefore to compare eight different DEM resolutions (12, 24, 48, 72, 96, 204, 504 and 996 cm) and three different weighting factors (Wischmeier's C factor, Manning's factor and a rugosity index) in the calculation of Borselli's index (IC), as shown in the sketch after this abstract. The IC was calculated in a 124 ha catchment (Hevillers) in the loess belt of Belgium. The DEM was derived from a UAV with a maximum resolution of 12 cm. Permanently covered surfaces were excluded in order to avoid artefacts due to vegetation (2% of the surface). Regarding the DEM pixel size, the IC for a given pixel increases when the pixel size decreases, confirming results observed in the Alpine region by Cavalli (2014). The mean difference between the 12 cm and 10 m resolutions is 35%, with values up to 100% for higher connectivity zones (flow paths). Another result is the lower impact of connections in the watershed (grass strips, etc.) at lower pixel sizes; this is linked to the small width of some connections, which is sometimes comparable to the cell size. Furthermore, a great loss of precision is observed from the 500 cm pixel size upward, which is quite intuitive. Finally, some very well disconnected zones appear at the highest resolutions. Regarding the weighting factor, IC values calculated using the C factor are lower than with the rugosity index, which is a purely topographic factor; with a very high resolution DEM, it represents the fine topography. For the C factor, very well disconnected areas (grass strips, woods, etc.) are well represented, with lower index values than downstream zones; conversely, very well connected zones (roads, paths, etc.) have higher values and are much more connected than downstream areas. For Manning's factor, the values are very low and poorly contrasted; this factor is not discriminant enough for this study. In conclusion, a high resolution DEM (1 m or finer) is needed for the IC calculation (precision, impact of connections, etc.). Very high resolution permits identification of very well disconnected areas but multiplies the calculation time. For the weighting factor, the rugosity index and the C factor each have advantages. It is planned to test other approaches for the IC calculation. Keywords: hydrological connectivity index, DEM, resolution, weighting factor, comparison
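    For reference, a hedged sketch of Borselli's index for a single pixel, IC = log10(D_up/D_dn) with D_up = W̄·S̄·√A (upslope component) and D_dn = Σ d_i/(W_i·S_i) along the downslope flow path, where W is the weighting factor compared in this work; all numbers below are invented.

```python
# Borselli-style connectivity index for one pixel; illustrative values only.
import numpy as np

def borselli_ic(W_up_mean, S_up_mean, area_m2, d_dn, W_dn, S_dn):
    d_up = W_up_mean * S_up_mean * np.sqrt(area_m2)          # upslope component
    d_down = np.sum(np.asarray(d_dn) / (np.asarray(W_dn) * np.asarray(S_dn)))
    return np.log10(d_up / d_down)

# 0.5 ha upslope area; three 12 cm pixels on the downslope path.
print(borselli_ic(0.3, 0.05, 5000.0,
                  [0.12, 0.12, 0.12], [0.3, 0.25, 0.3], [0.04, 0.05, 0.06]))
```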

  9. Study of Saturn Electrostatic Discharges in a Wide Range of Time Scales

    NASA Astrophysics Data System (ADS)

    Mylostna, K.; Zakharenko, V.; Konovalenko, A.; Kolyadin, V.; Zarka, P.; Griemeier, J.-M.; Litvinenko, G.; Sidorchuk, M.; Rucker, H.; Fischer, G.; Cecconi, B.; Coffre, A.; Denis, L.; Nikolaenko, V.; Shevchenko, V.

    Saturn electrostatic discharges (SED) are sporadic broadband impulsive radio bursts associated with lightning in Saturn's atmosphere. In 2006, after 25 years of space-based investigations, the first successful observations of SED with the UTR-2 radio telescope were carried out [1]. Since 2007, a long-term program of electrostatic discharge search and study in the Solar system has been under way. As part of this program, unique observations with high time resolution were taken in 2010. New capabilities of the UTR-2 radio telescope allowed long-duration observations and study with high temporal resolution. This article presents the results of an SED study over a wide range of time scales, from seconds to microseconds. A low-frequency spectrum of SED was obtained for the first time. We calculated flux densities of individual bursts at the maximum achievable time resolution; flux densities of the most intensive bursts reach 4200 Jy.

  10. Quantifying the effect of 3D spatial resolution on the accuracy of microstructural distributions

    NASA Astrophysics Data System (ADS)

    Loughnane, Gregory; Groeber, Michael; Uchic, Michael; Riley, Matthew; Shah, Megna; Srinivasan, Raghavan; Grandhi, Ramana

    The choice of spatial resolution for experimentally-collected 3D microstructural data is often governed by general rules of thumb. For example, serial section experiments often strive to collect at least ten sections through the average feature-of-interest. However, the desire to collect high resolution data in 3D is greatly tempered by the exponential growth in collection times and data storage requirements. This paper explores the use of systematic down-sampling of synthetically-generated grain microstructures to examine the effect of resolution on the calculated distributions of microstructural descriptors such as grain size, number of nearest neighbors, aspect ratio, and Ω3.
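    A sketch of the down-sampling workflow described above, with a randomly labeled volume standing in for a synthetic grain microstructure (placeholder data only, see the note in the code).

```python
# Down-sample a labeled 3D volume (keep every k-th voxel per axis) and
# recompute the grain-size distribution. The random labels are a crude
# stand-in for a real synthetic microstructure.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 50, size=(120, 120, 120))   # fake grain IDs

def grain_sizes(vol):
    return np.bincount(vol.ravel())                  # voxel count per grain ID

for k in (1, 2, 4):                                  # k = 1 is full resolution
    sizes = grain_sizes(labels[::k, ::k, ::k])
    print(k, float(sizes.mean()) * k**3)             # rescaled mean grain volume
```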

  11. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qiong; Gluch, Jürgen; Krüger, Peter

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. Highlights: • Unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time. • Pollen grains were compared across LM, SEM and high-resolution laboratory-based HXRM. • Phase contrast imaging provides significantly higher contrast in the raw images compared to absorption contrast imaging. • Surface and internal structures of the pine pollen, including exine, intine and cellular structures, are clearly visualized. • 3D volume data of unstained whole pollen grains are acquired and the specific volumes of the different layers are calculated.

  12. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. Compared with conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, across various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
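    A hedged sketch of the calibration idea as described: treat the per-crystal timing offsets t as the unknown in a linear system A·t = d (each row of A selects the +1/-1 crystal pair of one coincidence measurement) and add a total-variation penalty. Plain subgradient descent is used purely for illustration; the paper's solver and merge step are not reproduced here.

```python
# TV-regularized estimation of per-crystal timing offsets; toy solver.
import numpy as np

def tv_calibrate(A, d, lam=0.1, lr=1e-3, n_iter=5000):
    """Estimate timing offsets t from pairwise time differences d."""
    t = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ t - d)               # data-fidelity gradient
        diff_sign = np.sign(t[:-1] - t[1:])    # subgradient of TV(t)
        tv_sub = np.zeros_like(t)
        tv_sub[:-1] += diff_sign
        tv_sub[1:] -= diff_sign
        t -= lr * (grad + lam * tv_sub)
    return t - t.mean()                        # offsets defined up to a constant

# Toy example: 4 crystals, rows pick crystal pairs (+1/-1).
A = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [1, 0, 0, -1]], float)
true_t = np.array([0.3, -0.1, 0.2, -0.4])
d = A @ true_t + 0.01 * np.random.randn(4)
print(tv_calibrate(A, d))
```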

  13. Simulation and Modeling of charged particle transport using SIMION for our Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander

    2012-02-01

    Time-of-flight positron annihilation induced Auger electron spectroscopy (TOF-PAES) is a highly surface-selective analytical technique that uses the time of flight of Auger electrons resulting from the annihilation of core electrons with incident positrons trapped in the image-potential well. We simulated and modeled the trajectories of the charged particles in TOF-PAES using SIMION, in support of the development of a new high-resolution system at UT Arlington and of the current TOF-PAES system. This poster presents the SIMION simulation results, time of flight calculations and Larmor radius calculations for the current system as well as the new system.
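    A minimal worked example of the Larmor radius calculation mentioned above, r = m·v_perp/(q·B), for a non-relativistic electron; the energy and field values are illustrative, not the instrument's.

```python
# Larmor radius of an electron; 10 eV and 10 mT are placeholder values.
import numpy as np

M_E, Q_E = 9.109e-31, 1.602e-19      # electron mass (kg) and charge (C)

def larmor_radius_m(energy_eV, B_T):
    v = np.sqrt(2.0 * energy_eV * Q_E / M_E)   # non-relativistic speed
    return M_E * v / (Q_E * B_T)

print(larmor_radius_m(10.0, 0.01))   # ~1.1 mm
```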

  14. Extracting Micro-Doppler Radar Signatures from Rotating Targets Using Fourier-Bessel Transform and Time-Frequency Analysis

    DTIC Science & Technology

    2014-10-16

    Keywords: time-frequency analysis, short-time Fourier transform, Wigner-Ville distribution, Fourier-Bessel transform, fractional Fourier transform. INTRODUCTION: The most widely used time-frequency transforms are the short-time Fourier transform (STFT) and the Wigner-Ville distribution (WVD). In STFT, time and frequency resolutions are limited by the size of the window function used in calculating the STFT. For mono-component signals, WVD gives the best time and frequency resolution.
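    The window-size trade-off described above is easy to demonstrate: a short STFT window gives fine time resolution but coarse frequency resolution, and vice versa. Signal parameters below are arbitrary.

```python
# STFT time/frequency resolution trade-off on a synthetic tone-plus-chirp.
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * (80 + 40 * t) * t)

for nperseg in (64, 512):            # short vs. long analysis window
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg)
    print(f"window={nperseg}: df={f[1] - f[0]:.2f} Hz, dt={tt[1] - tt[0]:.3f} s")
```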

  15. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
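    The plume moments discussed above reduce to weighted sums over the tomogram; the sketch below computes mass, centroid, and spread for a synthetic 2D concentration field.

```python
# Zeroth, first, and second spatial moments of a synthetic 2D plume.
import numpy as np

x = np.linspace(0.0, 10.0, 101)
z = np.linspace(0.0, 5.0, 51)
X, Z = np.meshgrid(x, z)
c = np.exp(-((X - 4.0) ** 2 / 2.0 + (Z - 2.0) ** 2 / 0.5))   # fake plume

m0 = c.sum()                                       # zeroth moment (mass proxy)
xc, zc = (c * X).sum() / m0, (c * Z).sum() / m0    # first moments: centroid
var_x = (c * (X - xc) ** 2).sum() / m0             # second central moment
print(f"centroid = ({xc:.2f}, {zc:.2f}), var_x = {var_x:.2f}")
```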

  17. Building Change Detection in Very High Resolution Satellite Stereo Image Time Series

    NASA Astrophysics Data System (ADS)

    Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.

    2016-06-01

    There is an increasing demand for robust methods of urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired by different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results demonstrate the effectiveness of the proposed method.

  18. Development of a Software-Defined Radar

    DTIC Science & Technology

    2017-10-01

    …waveform to the widest available (unoccupied) instantaneous bandwidth in real time. Consequently, the radar range resolution and target detection are … LabVIEW: The matched-filter range profile is calculated in real time using fast Fourier transform (FFT) operations to perform a cross-correlation between the transmitted waveform and the received complex data. Figure 4 demonstrates the block logic used to achieve real-time range profile …
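    A hedged sketch of the FFT-based matched filter described in these fragments: cross-correlating the received samples with the transmitted waveform via frequency-domain multiplication yields the range profile. The chirp parameters and 173-sample delay are placeholders.

```python
# Matched-filter range profile via FFT cross-correlation; toy parameters.
import numpy as np

def range_profile(tx, rx):
    n = len(tx) + len(rx) - 1
    return np.abs(np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(tx, n))))

t = np.linspace(0.0, 10e-6, 1000)
tx = np.exp(1j * np.pi * 1e10 * t**2)            # linear FM chirp
rx = np.roll(tx, 173) + 0.1 * (np.random.randn(1000) + 1j * np.random.randn(1000))
print(np.argmax(range_profile(tx, rx)))          # ~173, the target delay in samples
```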

  19. High-Resolution Genuinely Multidimensional Solution of Conservation Laws by the Space-Time Conservation Element and Solution Element Method

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Chang, Sin-Chung; Yu, Sheng-Tao; Wang, Xiao-Yen; Loh, Ching-Yuen; Jorgenson, Philip C. E.

    1999-01-01

    In this overview paper, we review the basic principles of the method of space-time conservation element and solution element for solving conservation laws in one and two spatial dimensions. The present method is developed on the basis of local and global flux conservation in a space-time domain, in which space and time are treated in a unified manner. In contrast to the modern upwind schemes, the approach here does not use the Riemann solver and the reconstruction procedure as building blocks. The drawbacks of the upwind approach, such as the difficulty of rationally extending the 1D scalar approach to systems of equations and particularly to multiple dimensions, are here contrasted with the uniformity and ease of generalization of the Conservation Element and Solution Element (CE/SE) 1D scalar schemes to systems of equations and to multiple spatial dimensions. The assured compatibility with the simplest type of unstructured meshes, and the uniquely simple nonreflecting boundary conditions of the present method, are also discussed. The present approach has yielded high-resolution shocks, rarefaction waves, acoustic waves, vortices, ZND detonation waves, and shock/acoustic wave/vortex interactions. Moreover, since no directional splitting is employed, the numerical resolution of two-dimensional calculations is comparable to that of one-dimensional calculations. Some sample applications displaying the strengths and broad applicability of the CE/SE method are reviewed.

  20. On effective and optical resolutions of diffraction data sets.

    PubMed

    Urzhumtseva, Ludmila; Klaholz, Bruno; Urzhumtsev, Alexandre

    2013-10-01

    In macromolecular X-ray crystallography, diffraction data sets are traditionally characterized by the highest resolution d_high of the reflections that they contain. This measure is sensitive to individual reflections and does not account for eventual data incompleteness and anisotropy; it therefore does not describe the data well. A physically relevant and robust measure that provides a universal way to define the 'actual' effective resolution d_eff of a data set is introduced. This measure is based on the accurate calculation of the minimum distance between two immobile point scatterers resolved as separate peaks in the Fourier map calculated with a given set of reflections. It is applicable to any data set, whether complete or incomplete, and it also allows characterization of the anisotropy of diffraction data sets, in which d_eff depends strongly on direction. Describing mathematical objects, the effective resolution d_eff characterizes the 'geometry' of the set of measured reflections and is independent of the diffraction intensities. At the same time, the diffraction intensities reflect the composition of the structure from physical entities: the atoms. The minimum distance for the atoms typical of a given structure is a measure that is different from and complementary to d_eff; it is also a characteristic that is complementary to conventional measures of data-set quality. Following previously introduced terms, this value is called the optical resolution, d_opt. The optical resolution as defined here describes the separation of the atomic images in the 'ideal' crystallographic Fourier map that would be calculated if the exact phases were known. The effective and optical resolutions, as formally introduced in this work, are of general interest, giving a common 'ruler' for all kinds of crystallographic diffraction data sets.

  1. Functional magnetic resonance imaging phase synchronization as a measure of dynamic functional connectivity.

    PubMed

    Glerean, Enrico; Salmi, Juha; Lahnakoski, Juha M; Jääskeläinen, Iiro P; Sams, Mikko

    2012-01-01

    Functional brain activity and connectivity have been studied by calculating intersubject and seed-based correlations of hemodynamic data acquired with functional magnetic resonance imaging (fMRI). To inspect temporal dynamics, these correlation measures have been calculated over sliding time windows with necessary restrictions on the length of the temporal window that compromises the temporal resolution. Here, we show that it is possible to increase temporal resolution by using instantaneous phase synchronization (PS) as a measure of dynamic (time-varying) functional connectivity. We applied PS on an fMRI dataset obtained while 12 healthy volunteers watched a feature film. Narrow frequency band (0.04-0.07 Hz) was used in the PS analysis to avoid artifactual results. We defined three metrics for computing time-varying functional connectivity and time-varying intersubject reliability based on estimation of instantaneous PS across the subjects: (1) seed-based PS, (2) intersubject PS, and (3) intersubject seed-based PS. Our findings show that these PS-based metrics yield results consistent with both seed-based correlation and intersubject correlation methods when inspected over the whole time series, but provide an important advantage of maximal single-TR temporal resolution. These metrics can be applied both in studies with complex naturalistic stimuli (e.g., watching a movie or listening to music in the MRI scanner) and more controlled (e.g., event-related or blocked design) paradigms. A MATLAB toolbox FUNPSY (http://becs.aalto.fi/bml/software.html) is openly available for using these metrics in fMRI data analysis.
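    A minimal sketch of the instantaneous phase-synchronization idea, assuming the time series have already been band-passed to 0.04-0.07 Hz; the signals below are synthetic.

```python
# Per-time-point phase synchronization between two signals via the Hilbert
# transform; +1 means in phase, -1 anti-phase. Signals are synthetic.
import numpy as np
from scipy.signal import hilbert

tr = 2.0                                  # repetition time, s
t = np.arange(0.0, 600.0, tr)
s1 = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.randn(t.size)
s2 = np.sin(2 * np.pi * 0.05 * t + 0.4) + 0.3 * np.random.randn(t.size)

phi1, phi2 = np.angle(hilbert(s1)), np.angle(hilbert(s2))
ps = np.cos(phi1 - phi2)                  # one synchrony value per TR
print(ps[:5])
```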

  2. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015.

    PubMed

    Abatzoglou, John T; Dobrowski, Solomon Z; Parks, Sean A; Hegewisch, Katherine C

    2018-01-09

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.

  4. Fast 3D Surface Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.

    Ocean scientists searching for isosurfaces and/or thresholds of interest in high resolution 3D datasets faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high resolution data sets they work with on a daily basis. Isosurface timings (512³ grid): VTK 7.7 s, Parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.

  5. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.

  6. First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix

    NASA Astrophysics Data System (ADS)

    Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo

    2008-10-01

    The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of the system matrix for the jPET-D4 are 3.3 billion (lines-of-response) by 5 million (image elements) when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)³ voxels. The size of the system matrix is estimated as 117 petabytes (PB) at 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix, but the calculation time then grows as the accuracy of the system modeling improves. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB of memory installed. The 117 PB system matrix was compressed to fit the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method and (3) exploiting rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were extended to a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress the resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution, better than 3 mm, over the FOV. Finally, the first human brain images were obtained with the jPET-D4.
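    The quoted matrix size can be checked directly: 3.3 billion rows by 5 million columns at 8 bytes per element is on the order of 117 pebibytes.

```python
# Back-of-envelope check of the 117 PB system-matrix figure quoted above.
rows, cols, bytes_per_element = 3.3e9, 5e6, 8
size_pib = rows * cols * bytes_per_element / 2**50
print(f"{size_pib:.0f} PiB")   # ~117, matching the stated 117 PB
```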

  7. Validation of stationary phases in (111)In-pentetreotide planar chromatography.

    PubMed

    Moreno-Ortega, E; Mena-Bares, L M; Maza-Muret, F R; Hidalgo-Ramos, F J; Vallejo-Casas, J A

    2013-01-01

    Since Pall-Gelman stopped manufacturing ITLC-SG, it has become necessary to validate alternative stationary phases. To validate different stationary phases versus ITLC-SG Pall-Gelman in the determination of the radiochemical purity (RCP) of (111)In-pentetreotide ((111)In-Octreoscan) by planar chromatography, we conducted a case-control study, which included 66 (111)In-pentetreotide preparations. We determined the RCP by planar chromatography, using a freshly prepared solution of 0.1 M sodium citrate (pH 5) and the following stationary phases: ITLC-SG (Pall-Gelman) (reference method), iTLC-SG (Varian), HPTLC silica gel 60 (Merck), Whatman 1, Whatman 3MM and Whatman 17. For each of the methods, we calculated the RCP, the relative front (RF) values of the radiopharmaceutical and free (111)In, the chromatographic development time, and the resolution between peaks. We compared the results obtained with the reference method. The statistical analysis was performed using the SPSS program; the p value was calculated for the study of statistical significance. The highest resolution is obtained with HPTLC silica gel 60 (Merck); however, its chromatographic development time is too long (mean = 33.62 minutes). Greater resolution is obtained with iTLC-SG (Varian) than with the reference method, with a lower chromatographic development time (mean = 3.61 minutes). Very low resolutions are obtained with Whatman paper, especially with Whatman 1 and 3MM; therefore, we do not recommend their use. Although iTLC-SG (Varian) and HPTLC silica gel 60 (Merck) are suitable alternatives to ITLC-SG (Pall-Gelman) in determining the RCP of (111)In-pentetreotide, iTLC-SG (Varian) is the method of choice due to its lower chromatographic development time. Copyright © 2012 Elsevier España, S.L. and SEMNIM. All rights reserved.
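    As a hedged illustration of the peak-resolution figure used in this comparison, a common definition for planar chromatography is Rs = 2(d2 - d1)/(w1 + w2), with migration distances d and peak widths w in the same units; the numbers below are invented.

```python
# Chromatographic resolution between two peaks; all values are placeholders.
def resolution(d1, w1, d2, w2):
    """Rs = 2 * (d2 - d1) / (w1 + w2), distances and widths in cm."""
    return 2.0 * (d2 - d1) / (w1 + w2)

# Free 111In near the origin vs. radiopharmaceutical near the solvent front.
print(resolution(d1=0.5, w1=0.8, d2=7.5, w2=1.2))   # Rs = 7.0
```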

  8. Watching proteins function with picosecond X-ray crystallography and molecular dynamics simulations.

    NASA Astrophysics Data System (ADS)

    Anfinrud, Philip

    2006-03-01

    Time-resolved electron density maps of myoglobin, a ligand-binding heme protein, have been stitched together into movies that unveil, with <2 Å spatial resolution and 150 ps time resolution, the correlated protein motions that accompany and/or mediate ligand migration within the hydrophobic interior of a protein. A joint analysis of all-atom molecular dynamics (MD) calculations and picosecond time-resolved X-ray structures provides single-molecule insights into mechanisms of protein function. Ensemble-averaged MD simulations of the L29F mutant of myoglobin following ligand dissociation reproduce the direction, amplitude, and timescales of crystallographically-determined structural changes. This close agreement with experiments at comparable resolution in space and time validates the individual MD trajectories, which identify and structurally characterize a conformational switch that directs dissociated ligands to one of two nearby protein cavities. This unique combination of simulation and experiment unveils functional protein motions and illustrates at an atomic level relationships among protein structure, dynamics, and function. In collaboration with Friedrich Schotte and Gerhard Hummer, NIH.

  9. High-Frequency Focused Water-Coupled Ultrasound Used for Three-Dimensional Surface Depression Profiling

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.

    2001-01-01

    To interface with other solids, many surfaces are engineered via methods such as plating, coating, and machining to produce a functional surface ensuring successful end products. In addition, subsurface properties such as hardness, residual stress, deformation, chemical composition, and microstructure are often linked to surface characteristics. Surface topography, therefore, contains the signatures of the surface and possibly links to volumetric properties, and as a result serves as a vital link between surface design, manufacturing, and performance. Hence, surface topography can be used to diagnose, monitor, and control fabrication methods. At the NASA Glenn Research Center, the measurement of surface topography is important in developing high-temperature structural materials and for profiling the surface changes of materials during microgravity combustion experiments. A prior study demonstrated that focused air-coupled ultrasound at 1 MHz could profile surfaces with a 25-µm depth resolution and a 400-µm lateral resolution over a 1.4-mm depth range. In this work, we address the question of whether higher frequency focused water-coupled ultrasound can improve on these specifications. To this end, we employed 10- and 25-MHz focused ultrasonic transducers in the water-coupled mode. The surface profile results seen in this investigation for 25-MHz water-coupled ultrasound, in comparison to those for 1-MHz air-coupled ultrasound, represent an 8 times improvement in depth resolution (3 vs. 25 µm seen in practice), an improvement of at least 2 times in lateral resolution (180 vs. 400 µm calculated and observed in practice), and an improvement in vertical depth range of 4 times (calculated).

  10. Racial and Ethnic Differences in Patient Navigation: Results from the Patient Navigation Research Program

    PubMed Central

    Ko, Naomi Y; Snyder, Frederick R; Raich, Peter C; Paskett, Electra D.; Dudley, Donald; Lee, Ji-Hyun; Levine, Paul H.; Freund, Karen M

    2016-01-01

    Purpose: Patient navigation was developed to address barriers to timely care and reduce cancer disparities. This study explores navigation and racial and ethnic differences in time to diagnostic resolution of a cancer screening abnormality. Patients and Methods: We conducted an analysis of the multi-site Patient Navigation Research Program. Participants with an abnormal cancer screening test were allocated to either navigation or control. Unadjusted median time to resolution was calculated for each racial and ethnic group by navigation and control. Multivariable Cox proportional hazards models were fit, adjusting for sex, age, cancer abnormality type, and health insurance, stratifying by center of care. Results: Among a sample of 7,514 participants, 29% were Non-Hispanic White, 43% Hispanic, and 28% Black. In the control group, Blacks had a longer median time to diagnostic resolution (108 days) than Non-Hispanic Whites (65 days) or Hispanics (68 days) (p < .0001). In the navigated groups, Blacks had a reduction in median time to diagnostic resolution (97 days) (p < .0001). In the multivariable models, among controls, Black race was associated with increased delay to diagnostic resolution (HR = 0.77; 95% CI: 0.69, 0.84) compared to Non-Hispanic Whites, which was reduced in the navigated arm (HR = 0.85; 95% CI: 0.77, 0.94). Conclusion: Patient navigation had its greatest impact for Black patients, who had the greatest delays in care. PMID:27227342

  11. Modeling Future Fire danger over North America in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Jain, P.; Paimazumder, D.; Done, J.; Flannigan, M.

    2016-12-01

    Fire danger ratings are used to determine wildfire potential due to weather and climate factors. The Fire Weather Index (FWI), part of the Canadian Forest Fire Danger Rating System (CFFDRS), incorporates temperature, relative humidity, wind speed and precipitation to give a daily fire danger rating that is used by wildfire management agencies in an operational context. Studies using GCM output have shown that future wildfire danger will increase in a warming climate. However, these studies are somewhat limited by the coarse spatial resolution (typically 100-400 km) and temporal resolution (typically 6-hourly to monthly) of the model output. Future wildfire potential over North America based on the FWI is calculated using output from the Weather Research and Forecasting (WRF) model, which is used to downscale future climate scenarios from the bias-corrected Community Climate System Model (CCSM) under the RCP8.5 scenario at a spatial resolution of 36 km. We consider five eleven-year time slices: 1990-2000, 2020-2030, 2030-2040, 2050-2060 and 2080-2090. The dynamically downscaled simulation improves the determination of future extreme weather by improving both spatial and temporal resolution relative to most GCMs. To characterize extreme fire weather, we calculate the annual number of spread days (days for which FWI > 19) and the annual 99th percentile of FWI. Additionally, an extreme value analysis based on the peaks-over-threshold method allows us to calculate return values for extreme FWI.
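    The two extreme-fire-weather metrics are simple to compute from a daily FWI series; the sketch below uses a synthetic one-year series for a single grid cell.

```python
# Annual spread days (FWI > 19) and annual 99th percentile from a daily FWI
# series; the gamma-distributed series is synthetic, not model output.
import numpy as np

fwi = np.random.gamma(shape=2.0, scale=6.0, size=365)   # fake daily FWI
spread_days = int((fwi > 19).sum())
fwi_p99 = float(np.percentile(fwi, 99))
print(spread_days, round(fwi_p99, 1))
```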

  12. Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0

    NASA Astrophysics Data System (ADS)

    Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.

    2015-11-01

    The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. The dual-resolution phantom uses higher resolution near the Bragg peak, in areas of large dose gradient, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, specially designed for voxel phantoms, were used to record the energy deposition by converting fluence to dose. Narrow 60 and 120 MeV proton beams were incident on coarse, dual, and fine resolution phantoms with pure-water, water-bone-water and water-air-water setups. The doses in the coarse resolution phantoms are underestimated owing to the partial volume effect. The dose distributions in the dual and fine resolution phantoms agreed well with each other, and the dual resolution phantoms were at least 10 times more efficient than the fine resolution one. Because the secondary particle range is much longer in air than in water, the dose in a low density region may be underestimated if the resolution or calculation grid is not small enough.

  13. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion models' (ATDM) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same meteorological data and others different data, consistent with their normal operating practices. There were four global meteorological analysis data sets available and two regional high-resolution analyses. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured as identically as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and of wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations in terms of both deposition and air concentration were associated with the smoother ATDM deposition patterns. The best model with respect to deposition was not always the best model with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better-performing members provided more consistent results for both types of calculations. Published by Elsevier Ltd.

  14. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time consuming, so an analysis of the computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is examined separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804

  15. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Using actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform an analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with a database, is analyzed separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  16. Lanthanum halide scintillators for time-of-flight 3-D PET

    DOEpatents

    Karp, Joel S [Glenside, PA]; Surti, Suleman [Philadelphia, PA]

    2008-06-03

    A Lanthanum Halide scintillator (for example LaCl3 and LaBr3) with fast decay time and good timing resolution, as well as high light output and good energy resolution, is used in the design of a PET scanner. The PET scanner includes a cavity for accepting a patient and a plurality of PET detector modules arranged in an approximately cylindrical configuration about the cavity. Each PET detector includes a Lanthanum Halide scintillator having a plurality of Lanthanum Halide crystals, a light guide, and a plurality of photomultiplier tubes arranged respectively peripherally around the cavity. The good timing resolution enables a time-of-flight (TOF) PET scanner to be developed that exhibits a reduction in noise propagation during image reconstruction and a gain in the signal-to-noise ratio. Such a PET scanner includes a time stamp circuit that records the time of receipt of gamma rays by respective PET detectors and provides timing data outputs that are provided to a processor that, in turn, calculates time-of-flight (TOF) of gamma rays through a patient in the cavity and uses the TOF of gamma rays in the reconstruction of images of the patient.

  17. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
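
    A minimal sketch of the interpolation idea as described above: estimate a missing activation time as the mean AT of the two linearly adjacent electrodes lying along the local propagation direction. The restriction to four candidate axes and the "steepest AT change" proxy for the propagation direction are our simplifying assumptions, not the published algorithm:

    ```python
    import numpy as np

    def wavefront_orientation_fill(at, i, j):
        """Interpolate a missing activation time (AT) at electrode (i, j)
        as the mean AT of the pair of linearly adjacent electrodes along
        the local propagation direction. `at` is a 2-D array of ATs with
        NaN at missing electrodes."""
        n, m = at.shape
        pairs = [((i - 1, j), (i + 1, j)),          # vertical
                 ((i, j - 1), (i, j + 1)),          # horizontal
                 ((i - 1, j - 1), (i + 1, j + 1)),  # main diagonal
                 ((i - 1, j + 1), (i + 1, j - 1))]  # anti-diagonal
        best_val, best_diff = np.nan, -1.0
        for (ai, aj), (bi, bj) in pairs:
            if not (0 <= ai < n and 0 <= bi < n and 0 <= aj < m and 0 <= bj < m):
                continue
            ta, tb = at[ai, aj], at[bi, bj]
            if np.isnan(ta) or np.isnan(tb):
                continue
            if abs(ta - tb) > best_diff:   # steepest change ~ along propagation
                best_val, best_diff = 0.5 * (ta + tb), abs(ta - tb)
        return best_val
    ```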

  18. Estimates of present and future flood risk in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip

    2018-03-01

    Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage, coarse resolution or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.

  19. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriguez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; et al.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii_lc_extract) and from calculating the pixel illuminated fraction (ii_light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii_light produces meaningful results, although the overall variance of the light curves is not preserved.

  20. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for gas flow under gravitational forces are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations it is necessary to choose a sufficiently fine grid (with a small cell size), at the cost of a substantial increase in computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, e.g., shock waves are generated, or complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are implemented for calculations on graphics processors using CUDA technology [1].
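
    A minimal sketch of a gradient-based refinement criterion of the kind AMR codes typically use; the density-gradient indicator, the threshold value, and the function name are our illustrative assumptions, since the paper does not specify its criterion:

    ```python
    import numpy as np

    def flag_for_refinement(rho, threshold=0.1):
        """Mark cells whose normalized density-gradient magnitude exceeds
        a threshold, a common AMR criterion for resolving shocks and
        contact surfaces. rho: 2-D array of cell-averaged density."""
        g0, g1 = np.gradient(rho)                     # gradients along each axis
        indicator = np.hypot(g0, g1) / (np.abs(rho) + 1e-12)
        return indicator > threshold                  # True -> refine this cell
    ```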

  1. Resolution of hypertension and proteinuria after preeclampsia.

    PubMed

    Berks, Durk; Steegers, Eric A P; Molas, Marek; Visser, Willy

    2009-12-01

    To estimate the time required for hypertension and proteinuria to resolve after preeclampsia, and to estimate how this time to resolution correlates with the levels of blood pressure and proteinuria during preeclampsia and with prolonging pregnancy after the development of preeclampsia. This is a historic prospective cohort study of 205 preeclamptic women who were admitted between 1990 and 1992 at the Erasmus MC Medical Centre, Rotterdam, The Netherlands. Data were collected at 1.5, 3, 6, 12, 18, and 24 months after delivery. Hypertension was defined as a blood pressure of 140/90 mm Hg or higher or use of antihypertensive drugs. Proteinuria was defined as 0.3 g/d or more. Resolution of hypertension and proteinuria was analyzed with the Turnbull extension to the Kaplan-Meier procedure. Correlations were calculated with an accelerated failure time model. At 3 months postpartum, 39% of women still had hypertension, which decreased to 18% at 2 years postpartum. Resolution time increased by 60% (P<.001) for every 10-mm Hg increase in maximal systolic blood pressure, 40% (P=.044) for every 10-mm Hg increase in maximal diastolic blood pressure, and 3.6% (P=.001) for every 1-day increase in the diagnosis-to-delivery interval. At 3 months postpartum, 14% still had proteinuria, which decreased to 2% at 2 years postpartum. Resolution time increased by 16% (P=.001) for every 1-g/d increase in maximal proteinuria. Gestational age at onset of preeclampsia was not correlated with resolution time of hypertension and proteinuria. The severity of preeclampsia and the time interval between diagnosis and delivery are associated with postpartum time to resolution of hypertension and proteinuria. After preeclampsia, it can take up to 2 years for hypertension and proteinuria to resolve. Therefore, the authors suggest that further invasive diagnostic tests for underlying renal disease may be postponed until 2 years postpartum. Level of evidence: III.
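
    The reported effect sizes combine multiplicatively, as in an accelerated failure time model. A hypothetical worked example of that reading (the reference patient and all input differences are invented for illustration):

    ```python
    # Hypothetical illustration of the reported accelerated-failure-time effects:
    # +60% per 10 mm Hg maximal systolic BP, +40% per 10 mm Hg maximal diastolic
    # BP, +3.6% per day of diagnosis-to-delivery interval (from the abstract).
    def relative_resolution_time(d_sbp_mmhg, d_dbp_mmhg, d_interval_days):
        """Multiplier on hypertension resolution time relative to a
        reference patient; inputs are differences from that reference."""
        return (1.60 ** (d_sbp_mmhg / 10.0)
                * 1.40 ** (d_dbp_mmhg / 10.0)
                * 1.036 ** d_interval_days)

    # e.g. 20 mm Hg higher systolic, 10 mm Hg higher diastolic, 14 days longer
    print(relative_resolution_time(20, 10, 14))  # 1.6^2 * 1.4 * 1.036^14 ~ 5.9
    ```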

  2. Isobutyl acetate: electronic state spectroscopy by high-resolution vacuum ultraviolet photoabsorption, He(I) photoelectron spectroscopy and ab initio calculations

    NASA Astrophysics Data System (ADS)

    Śmiałek, Malgorzata A.; Łabuda, Marta; Hubin-Franskin, Marie-Jeanne; Delwiche, Jacques; Hoffmann, Søren Vrønning; Jones, Nykola C.; Mason, Nigel J.; Limão-Vieira, Paulo

    2017-05-01

    The high-resolution vacuum ultraviolet photoabsorption spectrum of isobutyl acetate, C6H12O2, is presented here and was measured over the energy range 4.3-10.8 eV (290-115 nm). Valence and Rydberg transitions with their associated vibronic series have been observed in the photoabsorption spectrum and are assigned in accordance with new ab initio calculations of the vertical excitation energies and oscillator strengths. The measured photoabsorption cross sections have been used to calculate the photolysis lifetime of this ester in the Earth's upper atmosphere (20-50 km). Calculations have also been carried out to determine the ionization energies and fine structure of the lowest ionic state of isobutyl acetate and are compared with a photoelectron spectrum (from 9.5 to 16.7 eV), recorded for the first time. Vibrational structure is observed in the first photoelectron band of this molecule. Contribution to the Topical Issue: "Dynamics of Systems at the Nanoscale", edited by Andrey Solov'yov and Andrei Korol.
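
    For readers wanting the arithmetic behind the photolysis-lifetime step, the usual approach folds the measured cross sections with the photolysis quantum yield and the solar actinic flux; this generic form is our assumption of the standard method, not a formula quoted from the paper:

    ```latex
    J = \int \sigma(\lambda)\,\phi(\lambda)\,F(\lambda)\,\mathrm{d}\lambda ,
    \qquad
    \tau_{\text{photolysis}} = \frac{1}{J}
    ```

    Here σ(λ) is the absorption cross section, φ(λ) the photolysis quantum yield, and F(λ) the solar actinic flux at the altitude of interest.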

  3. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Verrière, M.; Dubray, N.; Schunck, N.

    2016-03-01

    We describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N-dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
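
    The Crank-Nicolson scheme mentioned above advances the wave function by one linear solve per step. The sketch below shows the generic update for i dψ/dt = Hψ with a dense stand-in operator; the toy Hamiltonian and sizes are our assumptions, not FELIX's actual finite element matrices:

    ```python
    import numpy as np

    def crank_nicolson_step(H, psi, dt):
        """One Crank-Nicolson step for i d(psi)/dt = H psi:
        (I + i*dt/2*H) psi_{n+1} = (I - i*dt/2*H) psi_n."""
        eye = np.eye(H.shape[0])
        lhs = eye + 0.5j * dt * H
        rhs = (eye - 0.5j * dt * H) @ psi
        return np.linalg.solve(lhs, rhs)

    # toy example: a 1-D kinetic-type tridiagonal operator and a Gaussian packet
    n, dt = 64, 1e-3
    H = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    psi = np.exp(-0.5 * ((np.arange(n) - n / 2) / 5.0) ** 2).astype(complex)
    psi = crank_nicolson_step(H, psi, dt)
    ```

    For Hermitian H this update is unconditionally stable and norm-preserving, which is why it suits long time-evolution runs.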

  4. Effect of image resolution manipulation in rearfoot angle measurements obtained with photogrammetry

    PubMed Central

    Sacco, I.C.N.; Picon, A.P.; Ribeiro, A.P.; Sartor, C.D.; Camargo-Junior, F.; Macedo, D.O.; Mori, E.T.T.; Monte, F.; Yamate, G.Y.; Neves, J.G.; Kondo, V.E.; Aliberti, S.

    2012-01-01

    The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96-pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of rearfoot static angles on two occasions within a 1-week interval. Three different examiners had marked angles on digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered to be unacceptable for all image resolutions (ICC range: 0.08-0.52). The whole body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of rearfoot static angles (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol. PMID:22911379

  5. Effect of image resolution manipulation in rearfoot angle measurements obtained with photogrammetry.

    PubMed

    Sacco, I C N; Picon, A P; Ribeiro, A P; Sartor, C D; Camargo-Junior, F; Macedo, D O; Mori, E T T; Monte, F; Yamate, G Y; Neves, J G; Kondo, V E; Aliberti, S

    2012-09-01

    The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96-pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of rearfoot static angles on two occasions within a 1-week interval. Three different examiners had marked angles on digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered to be unacceptable for all image resolutions (ICC range: 0.08-0.52). The whole body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of rearfoot static angles (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol.

  6. Preliminary experience using dynamic MRI at 3.0 Tesla for evaluation of soft tissue tumors.

    PubMed

    Park, Michael Yong; Jee, Won-Hee; Kim, Sun Ki; Lee, So-Yeon; Jung, Joon-Yong

    2013-01-01

    We aimed to evaluate the use of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) at 3.0 T for differentiating benign from malignant soft tissue tumors. We also aimed to assess whether shorter DCE-MRI protocols are adequate, and to evaluate the effect of temporal resolution. DCE-MRI at 3.0 T with a 1-second temporal resolution was analyzed in 13 patients with pathologically confirmed soft tissue tumors. Visual assessment of time-signal curves, subtraction images, maximal relative enhancement at the first (maximal peak enhancement [Emax]/1) and second (Emax/2) minutes, Emax, the steepest slope calculated by using various time intervals (5, 30, 60 seconds), and the start of dynamic enhancement were analyzed. The 13 tumors comprised seven benign and six malignant soft tissue neoplasms. Washout on time-signal curves was seen in three (50%) malignant tumors and one (14%) benign one. The most discriminating DCE-MRI parameter was the steepest slope calculated by using 5-second intervals, followed by Emax/1 and Emax/2. All of the steepest slope values occurred within 2 minutes of the dynamic study. The start of dynamic enhancement did not show a significant difference, but no malignant tumor rendered a value greater than 14 seconds. The steepest slope and early relative enhancement have potential for differentiating benign from malignant soft tissue tumors. A short rather than long DCE-MRI protocol may be adequate for this purpose. The steepest slope parameters require a short temporal resolution, while the maximal peak enhancement parameter may be better suited to a longer temporal resolution.
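
    As a concrete reading of the "steepest slope" parameter, the sketch below computes the maximum rise of relative enhancement over a sliding 5-second window of a 1-s-resolution time-signal curve; the exact definition used in the study may differ:

    ```python
    import numpy as np

    def steepest_slope(signal, window_s=5.0):
        """Steepest slope of a DCE-MRI time-signal curve: the maximum rise
        of relative enhancement over any `window_s`-second interval,
        divided by the window length. Assumes 1-s sampling and a nonzero
        pre-contrast baseline signal[0]."""
        s0 = max(float(signal[0]), 1e-9)      # pre-contrast baseline
        rel = (np.asarray(signal, float) - s0) / s0   # relative enhancement
        w = int(round(window_s))              # samples per window at 1 s
        rises = rel[w:] - rel[:-w]
        return rises.max() / window_s         # units: fraction per second
    ```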

  7. Mass selectivity of dipolar resonant excitation in a linear quadrupole ion trap.

    PubMed

    Douglas, D J; Konenkov, N V

    2014-03-15

    For mass analysis, linear quadrupole ion traps operate with dipolar excitation of ions for either axial or radial ejection. There have been comparatively few computer simulations of this process. We introduce a new concept, the excitation contour, S(q), the fraction of the excited ions that reach the trap electrodes when trapped at q values near that corresponding to the excitation frequency. Ion trajectory calculations are used to calculate S(q). Ions are given Gaussian distributions of initial positions in x and y, and thermal initial velocity distributions. To model gas damping, a drag force is added to the equations of motion. The effects of the initial conditions, ejection Mathieu parameter q, scan speed, excitation voltage and collisional damping, are modeled. We find that, with no buffer gas, the mass resolution is mostly determined by the excitation time and is given by R ~ (dβ/dq) q n, where β(q) determines the oscillation frequency, and n is the number of cycles of the trapping radio frequency during the excitation or ejection time. The highest resolution at a given scan speed is reached with the lowest excitation amplitude that gives ejection. The addition of a buffer gas can increase the mass resolution. The simulation results are in broad agreement with experiments. The excitation contour, S(q), introduced here, is a useful tool for studying the ejection process. The excitation strength, excitation time and buffer gas pressure interact in a complex way but, when set properly, a mass resolution R0.5 of at least 10,000 can be obtained at a mass-to-charge ratio of 609. Copyright © 2014 John Wiley & Sons, Ltd.
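
    To make the quoted quantities concrete, the sketch below integrates a damped, dipolar-driven Mathieu equation for one ion's x motion and reports whether it reaches the rod surface; averaging the outcome over an ensemble of random initial conditions would give an estimate in the spirit of the excitation contour S(q). The drive frequency uses the low-q approximation β ≈ q/sqrt(2), and all parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def trajectory_reaches_rod(q, a=0.0, drag=0.0, amp=0.01, n_rf=1000,
                               x0=0.01, v0=0.0, r0=1.0):
        """Integrate d2x/dtau2 = -(a - 2 q cos 2 tau) x - drag*dx/dtau
        + amp*cos(beta*tau), i.e. the Mathieu equation with a drag force
        and resonant dipolar excitation, and report whether the ion
        reaches the rods (|x| >= r0). tau advances by pi per rf cycle."""
        beta = q / np.sqrt(2.0)               # low-q approximation to beta(q)
        dtau, x, v = 0.01, x0, v0
        for step in range(int(n_rf * np.pi / dtau)):
            tau = step * dtau
            acc = (-(a - 2.0 * q * np.cos(2.0 * tau)) * x
                   - drag * v + amp * np.cos(beta * tau))
            v += acc * dtau
            x += v * dtau
            if abs(x) >= r0:
                return True                   # ion ejected: hits the rod
        return False
    ```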

  8. Metallocorroles as inherently chiral chromophores: resolution and electronic circular dichroism spectroscopy of a tungsten biscorrole.

    PubMed

    Schies, Christine; Alemayehu, Abraham B; Vazquez-Lima, Hugo; Thomas, Kolle E; Bruhn, Torsten; Bringmann, Gerhard; Ghosh, Abhik

    2017-06-01

    An inherently chiral metallocorrole has been resolved for the first time by means of HPLC on a chiral stationary phase. For the compound in question, a homoleptic tungsten biscorrole, the absolute configurations of the enantiomers were assigned using online HPLC-ECD measurements in conjunction with time-dependent CAM-B3LYP calculations, which provided accurate simulations of the ECD spectra.

  9. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) data vegetative index calculation processor user's manual

    NASA Technical Reports Server (NTRS)

    O'Brien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACVIN, calculates vegetative index numbers from limited area coverage/high resolution picture transmission data for selected IJ grid sections. The IJ grid sections were previously extracted from the full resolution data tapes and stored on disk files.
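
    LACVIN's exact formula is not given in this record; the classic vegetative index computed from AVHRR-type red and near-infrared channels is the NDVI, sketched below as our assumption of what "vegetative index numbers" refers to:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from near-infrared and
        red reflectance (or counts), guarding against division by zero."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / np.maximum(nir + red, 1e-12)
    ```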

  10. Study of Plasma Waves Observed onboard Rosetta in the 67P/Churyumov-Gerasimenko Comet Environment Using High Time Resolution Density Data Inferred from RPC-MIP and RPC-LAP Cross-calibration

    NASA Astrophysics Data System (ADS)

    Breuillard, H.; Henri, P.; Vallières, X.; Eriksson, A. I.; Odelstad, E.; Johansson, F. L.; Richter, I.; Goetz, C.; Wattieaux, G.; Tsurutani, B.; Hajra, R.; Le Contel, O.

    2017-12-01

    For two years, the groundbreaking ESA/Rosetta mission escorted comet 67P, where previous cometary missions were limited to flybys. This enabled, for the first time, in-situ measurements of the evolution of a comet's plasma environment. The density and temperature measured by Rosetta are derived from the RPC-Mutual Impedance Probe (MIP) and the RPC-Langmuir Probe (LAP). On one hand, low-time-resolution electron densities are calculated using the plasma frequency extracted from the MIP mutual impedance spectra. On the other hand, high-time-resolution density fluctuations are estimated from the spacecraft potential measured by LAP. In this study, using a simple spacecraft charging model, we perform a cross-calibration of MIP plasma density and LAP spacecraft potential variations to obtain high-time-resolution measurements of the electron density. These results are also used to constrain the electron temperature. We then use this new dataset, together with RPC-MAG magnetic field measurements, to investigate for the first time the compressibility of, and the correlations between, plasma and magnetic field variations, for both singing-comet waves and steepened waves observed, respectively, during low and high cometary outgassing activity in the plasma environment of comet 67P.
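
    A minimal sketch of the cross-calibration idea, assuming the simple charging model reduces to a linear relation between spacecraft potential and log density; the fit form and variable names are our assumptions:

    ```python
    import numpy as np

    def cross_calibrate(vsc_lap, t_lap, ne_mip, t_mip):
        """Fit log(n_MIP) = a + b*Vsc on the common (low-resolution) time
        base, then invert the relation at full LAP time resolution to get
        a high-time-resolution electron density estimate."""
        vsc_low = np.interp(t_mip, t_lap, vsc_lap)      # LAP potential at MIP times
        b, a = np.polyfit(vsc_low, np.log(ne_mip), 1)   # linear fit in log-density
        return np.exp(a + b * vsc_lap)                  # high-resolution n_e
    ```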

  11. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferreira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, like medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is to use the cross correlation, taking the position of the highest value of the correlation image. The shift is then resolved in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be applied by multiplying the discrete Fourier transform by linear phases with different slopes. This method is time consuming because each candidate shift requires new calculations. The algorithm, however, is highly parallelizable and very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining to subpixel precision using the technique described before; we consider it a 'brute force' method. We present a benchmark of the algorithm consisting of a first, pixel-resolution approach followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
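
    The shift-by-linear-phase trick described above takes only a few lines of NumPy: fourier_shift applies a (sub)pixel shift and correlation_peak gives the integer-pixel first approach. Function names are ours; the paper's GPU/CUDA implementation is not reproduced here:

    ```python
    import numpy as np

    def fourier_shift(img, dy, dx):
        """Shift an image by a (possibly subpixel) amount by multiplying
        its DFT with a linear phase. Shifts wrap around circularly."""
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
        return np.fft.ifft2(np.fft.fft2(img) * phase).real

    def correlation_peak(a, b):
        """Integer-pixel shift of `a` relative to `b` from the peak of the
        FFT-based cross-correlation, with wrap-around unfolded."""
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        k = np.unravel_index(np.argmax(corr), corr.shape)
        return [int(s if s <= n // 2 else s - n) for s, n in zip(k, corr.shape)]
    ```

    The "brute force" subpixel search then re-shifts one image by trial fractional offsets around the integer estimate, keeping the offset that maximizes the correlation peak.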

  12. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution (on the order of 10 m grid spacing) simulation of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds can cause local heating/cooling in the atmosphere on fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer often requires large computational resources compared to a plane-parallel approximation. This study incorporates a scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme has an advantage when calculating a sequence of time evolution (i.e., the scene at one time step differs little from that at the previous step). This scheme is also appropriate for calculation of radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for the iterative solution, and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results elucidate that the horizontal divergences and convergences of infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In future work, the 3D effects on radiative heating/cooling can be included in atmospheric numerical models.

  13. Molecular dynamics at low time resolution.

    PubMed

    Faccioli, P

    2010-10-28

    The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of picoseconds to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equation of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a calculable, time-dependent correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power, hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point particle in a one-dimensional toy model and the denaturation of a protein.
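
    For reference, the baseline integrator that the correction above modifies is plain Euler-Maruyama for the overdamped Langevin equation; in the renormalized scheme the constant D below would be replaced by the calculable, Δt-dependent effective diffusion coefficient. The toy double-well force is our assumption:

    ```python
    import numpy as np

    def overdamped_langevin(force, x0, D, dt, n_steps, rng=None):
        """Euler-Maruyama integration of the overdamped Langevin equation
        dx = D*force(x)*dt + sqrt(2*D*dt)*xi, with kT absorbed into the
        force units and xi a standard normal deviate per step."""
        rng = rng or np.random.default_rng()
        x = np.empty(n_steps + 1)
        x[0] = x0
        for n in range(n_steps):
            x[n + 1] = (x[n] + D * force(x[n]) * dt
                        + np.sqrt(2.0 * D * dt) * rng.standard_normal())
        return x

    # toy 1-D double-well force as a stand-in test model
    traj = overdamped_langevin(lambda x: -4 * x * (x**2 - 1), 0.0, 0.5, 1e-3, 10000)
    ```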

  14. Large-area super-resolution optical imaging by using core-shell microfibers

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Yang; Lo, Wei-Chieh

    2017-09-01

    We report, numerically and experimentally, large-area super-resolution optical imaging achieved by using core-shell microfibers for the first time. The particular spatial electromagnetic waves for different core-shell microfibers are studied by using finite-difference time-domain and ray tracing calculations. The focusing properties of photonic nanojets are evaluated in terms of intensity profile and full width at half-maximum along the propagation and transversal directions. In the experiment, a standard optical fiber is chemically etched down to a 6 μm diameter and coated with different metallic thin films by using glancing angle deposition. The direct imaging of photonic nanojets for different core-shell microfibers is performed with a scanning optical microscope system. We show that the intensity distribution of a photonic nanojet is highly related to the metallic shell due to surface plasmon polaritons. Furthermore, large-area super-resolution optical imaging is performed by using different core-shell microfibers placed over a nano-scale grating with 150 nm line width. The core-shell microfiber-assisted imaging is achieved with super-resolution and hundreds of times the field-of-view in contrast to microspheres. The possible applications of these core-shell optical microfibers include real-time large-area micro-fluidics and nano-structure inspections.

  15. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. Hereby, computation of mass balance has mostly been performed at the native resolution of the climate model output or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model, on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of pursuing studies.

  16. Atomic Structure and Properties of Extended Defects in Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buczko, R.; Chisholm, M.F.; Kaplan, T.

    1998-10-15

    The Z-contrast technique represents a new approach to high-resolution electron microscopy allowing for the first time incoherent imaging of materials on the atomic scale. The key advantages of the technique, an intrinsically higher resolution limit and directly interpretable, compositionally sensitive imaging, allow a new level of insight into the atomic configurations of extended defects in silicon. This experimental technique has been combined with theoretical calculations (a combination of first principles, tight binding, and classical methods) to extend this level of insight by obtaining the energetic and electronic structure of the defects.

  17. Vector magnetic field changes associated with X-class flares

    NASA Technical Reports Server (NTRS)

    Wang, Haimin; Ewell, M. W., Jr.; Zirin, H.; Ai, Guoxiang

    1994-01-01

    We present high-resolution transverse and longitudinal magnetic field measurements bracketing five X-class solar flares. We show that the magnetic shear, defined as the angular difference between the measured field and calculated potential field, actually increases after all of these flares. In each case, the shear is shown to increase along a substantial portion of the magnetic neutral line. For two of the cases, we have excellent time resolution, on the order of several minutes, and we demonstrate that the shear increase is impulsive. We briefly discuss the theoretical implications of our results.
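
    The shear definition above reduces to a single angle computation; a minimal sketch (the function name is ours), taking the measured and potential-field vectors at one point:

    ```python
    import numpy as np

    def magnetic_shear_deg(b_obs, b_pot):
        """Magnetic shear: the angle (degrees) between the measured field
        and the calculated potential field at the same location."""
        b_obs, b_pot = np.asarray(b_obs, float), np.asarray(b_pot, float)
        cosang = (np.dot(b_obs, b_pot)
                  / (np.linalg.norm(b_obs) * np.linalg.norm(b_pot)))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    ```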

  18. Versatile time-dependent spatial distribution model of sun glint for satellite-based ocean imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Guanhua; Xu, Wujian; Niu, Chunyue; Zhang, Kai; Ma, Zhongqi; Wang, Jiwen; Zhang, Yue

    2017-01-01

    We propose a versatile model to describe the time-dependent spatial distribution of sun glint areas in satellite-based imaging of wavy water surfaces. This model can be used to identify whether an image is affected by sun glint and how strong the glint is. The observing geometry is calculated using an accurate orbit prediction method. The Cox-Munk model is used to analyze the bidirectional reflectance of the wavy water surface under various conditions. The effects of whitecaps and of the reflectance emerging from the sea water have been considered. Using the moderate resolution atmospheric transmission (MODTRAN) radiative transfer model, we are able to efficiently calculate the sun glint distribution at the top of the atmosphere. By comparing the modeled data with a medium resolution imaging spectrometer image and a Feng Yun 2E (FY-2E) image, we have shown that the time-dependent spatial distribution of sun glint areas can be effectively predicted. In addition, the main factors determining the sun glint distribution and the temporal variation of sun glint have been discussed. Our model can be used to design satellite orbits and should also be valuable in either eliminating sun glint or making use of it.
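
    For orientation, the isotropic form of the Cox-Munk slope statistics is sketched below using the classic mean-square-slope fit sigma^2 = 0.003 + 5.12e-3*W; the paper's implementation details (skewness and peakedness terms, wind-direction anisotropy) are not reproduced:

    ```python
    import numpy as np

    def cox_munk_slope_pdf(zx, zy, wind_speed):
        """Isotropic Cox-Munk probability density of sea-surface slopes
        (zx, zy) for a given wind speed W in m/s, using the classic
        mean-square-slope fit sigma^2 = 0.003 + 5.12e-3 * W."""
        mss = 0.003 + 5.12e-3 * wind_speed
        return np.exp(-(zx**2 + zy**2) / mss) / (np.pi * mss)
    ```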

  19. Mass resolution of linear quadrupole ion traps with round rods.

    PubMed

    Douglas, D J; Konenkov, N V

    2014-11-15

    Auxiliary dipole excitation is widely used to eject ions from linear radio-frequency quadrupole ion traps for mass analysis. Linear quadrupoles are often constructed with round rod electrodes. The higher multipoles introduced to the electric potential by round rods might be expected to change the ion ejection process. We have therefore investigated the optimum ratio of rod radius, r, to field radius, r0, for excitation and ejection of ions. Trajectory calculations are used to determine the excitation contour, S(q), the fraction of ions ejected when trapped at q values close to the ejection (or excitation) q. Initial conditions are randomly selected from Gaussian distributions of the x and y coordinates and a thermal distribution of velocities. The N = 6 (12 pole) and N = 10 (20 pole) multipoles are added to the quadrupole potential. Peak shapes and resolution were calculated for ratios r/r0 from 1.09 to 1.20 with an excitation time of 1000 cycles of the trapping radio-frequency. Ratios r/r0 in the range 1.140 to 1.160 give the highest resolution and peaks with little tailing. Ratios outside this range give lower resolution and peaks with tails on either the low-mass side or the high-mass side of the peaks. This contrasts with the optimum ratio of 1.126-1.130 for a quadrupole mass filter operated conventionally at the tip of the first stability region. With the optimum geometry the resolution is 2.7 times greater than with an ideal quadrupole field. Adding only a 2.0% hexapole field to a quadrupole field increases the resolution by a factor of 1.6 compared with an ideal quadrupole field. Addition of a 2.0% octopole lowers resolution and degrades peak shape. With the optimum value of r/r0 , the resolution increases with the ejection time (measured in cycles of the trapping rf, n) approximately as R0.5 = 6.64n, in contrast to a pure quadrupole field where R0.5 = 1.94n. Adding weak nonlinear fields to a quadrupole field can improve the resolution with mass-selective ejection of ions by up to a factor of 2.7. The optimum ratio r/r0 is 1.14 to 1.16, which differs from the optimum ratio for a mass filter of 1.128-1.130. Copyright © 2014 John Wiley & Sons, Ltd.

  20. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE PAGES

    Regnier, D.; Verriere, M.; Dubray, N.; ...

    2015-11-30

    In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N-dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank–Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.

  1. Development of a low energy electron spectrometer for SCOPE

    NASA Astrophysics Data System (ADS)

    Tominaga, Y.; Saito, Y.; Yokota, S.

    2010-12-01

    We are developing a new electrostatic analyzer which measures low energy electrons for the future satellite mission SCOPE (cross Scale COupling in the Plasma universE). The main purpose of the SCOPE mission is to understand the cross-scale coupling between macroscopic MHD-scale phenomena and microscopic ion- and electron-scale phenomena. In order to understand the dynamics of plasma on such small scales, we need to observe the plasma with an analyzer which has high time resolution. In the Earth's magnetosphere, the typical timescale of plasma cyclotron motion is ~10 sec for ions and ~10 msec for electrons. In order to conduct electron-scale observations, an analyzer with a very high time resolution (~10 msec) is necessary for the experiment. So far, we have decided on a design for the analyzer. The analyzer has three nested spherical/toroidal deflectors, which enables us to measure two different energies simultaneously and shorten the time resolution of the experiment. In order to obtain 3D velocity distribution functions of electrons, the analyzer must have a 4-pi steradian field of view. We will install 8 sets of the analyzers on the satellite; using all of these analyzers we secure a 4-pi steradian field of view at all times. In the experiment, we plan to measure electrons from 10 eV to 22.5 keV in 32 steps. Given that the sampling time of the experiment is 0.5 msec and two energies are measured simultaneously, it takes about 8 msec to measure the whole energy range, so the time resolution of the experiment is 8 msec. The energy and angular resolution of the inner analyzer is 0.23 and 16 degrees, respectively, and that of the outer analyzer is 0.17 and 11.5 degrees, respectively. To measure enough electrons within the sampling time, the analyzer is designed to have geometric factors (sensitivities) of 7.5e-3 (inner analyzer) and 1.0e-2 (outer analyzer) cm2 str, respectively. However, it is not obvious that these characteristics of the analyzer are really appropriate for the experiment, and there are some operational problems which we have to consider and resolve. In this study, we (1) confirm that the analyzer we designed has characteristics appropriate for the experiment and can measure the 3D distribution function and velocity moments of electrons, (2) estimate how the non-uniformity of the analyzer's efficiency affects the velocity moments, and (3) estimate how the spin motion of the satellite affects the velocity moments. Assuming a Maxwellian electron distribution function with known density, bulk velocity, and temperature, we calculated the counts that the analyzer will measure, taking into account the characteristics of the analyzer. Using these counts, we calculated the distribution function and velocity moments, and compared the results with the assumed density, bulk velocity and temperature in order to assess the precision of the experiment. From these calculations we found that (1) the characteristics of the analyzer are good enough to measure the velocity moments of electrons with an error of less than several percent, (2) the non-uniformity of the efficiency of the analyzers will severely affect the bulk velocity of electrons, and (3) we should have special observation modes (changing the time resolution or energy range) depending on the observation region.

  2. An economic prediction of the finer resolution level wavelet coefficients in electronic structure calculations.

    PubMed

    Nagy, Szilvia; Pipek, János

    2015-12-21

    In wavelet based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to calculate with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economic way. In the present contribution, we determine whether the method can be applied to predicting not only the first, but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.

  3. Basic Performance Test of a Prototype PET Scanner Using CdTe Semiconductor Detectors

    NASA Astrophysics Data System (ADS)

    Ueno, Y.; Morimoto, Y.; Tsuchiya, K.; Yanagita, N.; Kojima, S.; Ishitsu, T.; Kitaguchi, H.; Kubo, N.; Zhao, S.; Tamaki, N.; Amemiya, K.

    2009-02-01

    A prototype positron emission tomography (PET) scanner using CdTe semiconductor detectors was developed, and its initial evaluation was conducted. The scanner was configured to form a single detector ring with six separated detector units, each having 96 detectors arranged in three detector layers. The field of view (FOV) size was 82 mm in diameter. Basic physical performance indicators of the scanner were measured through phantom studies and confirmed by rat imaging. The system-averaged energy resolution and timing resolution were 5.4% and 6.0 ns (each in FWHM), respectively. Spatial resolution measured at the FOV center was 2.6 mm FWHM. Scatter fraction was measured and calculated following National Electrical Manufacturers Association (NEMA) methodology, using a 3-mm diameter hot capillary in a water-filled 80-mm diameter acrylic cylinder. The calculated result was 3.6%. The effect of depth of interaction (DOI) measurement was demonstrated by comparing hot-rod phantom images reconstructed with and without DOI information. Finally, images of a rat myocardium and an implanted tumor were visually assessed, and the imaging performance was confirmed.

   4. Stellar Laboratories VI. New Mo IV-VII Oscillator Strengths and the Molybdenum Abundance in the Hot White Dwarfs G191-B2B and RE 0503-289

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Quinet, P.; Hoyer, D.; Werner, K.; Demleitner, M.; Kruk, J. W.

    2016-01-01

    For the spectral analysis of high-resolution and high signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. Aims: To identify molybdenum lines in the ultraviolet (UV) spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE 0503-289 and to determine their photospheric Mo abundances, reliable Mo IV-VII oscillator strengths are used. Methods: We newly calculated Mo IV-VII oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Mo lines exhibited in high-resolution and high-S/N UV observations of RE 0503-289. Results: We identified 12 Mo V and nine Mo VI lines in the UV spectrum of RE 0503-289 and measured a photospheric Mo abundance of 1.2-3.0 × 10^-4 (mass fraction, 22,500-56,400 times the solar abundance). In addition, from the As V and Sn IV resonance lines, we measured mass fractions of arsenic (0.5-1.3 × 10^-5, about 300-1200 times solar) and tin (1.3-3.2 × 10^-4, about 14,300-35,200 times solar). For G191-B2B, upper limits were determined for the abundances of Mo (5.3 × 10^-7, 100 times solar) and, in addition, for Kr (1.1 × 10^-6, 10 times solar) and Xe (1.7 × 10^-7, 10 times solar). The arsenic abundance was determined (2.3-5.9 × 10^-7, about 21-53 times solar). A new, registered German Astrophysical Virtual Observatory (GAVO) service, TOSS, has been constructed to provide weighted oscillator strengths and transition probabilities. Conclusions: Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Mo V-VI line profiles in the UV spectrum of the white dwarf RE 0503-289 were well reproduced with our newly calculated oscillator strengths. For the first time, this allowed the photospheric Mo abundance in a white dwarf to be determined.

  5. Numerical Issues for Circulation Control Calculations

    NASA Technical Reports Server (NTRS)

    Swanson, Roy C., Jr.; Rumsey, Christopher L.

    2006-01-01

    Steady-state and time-accurate two-dimensional solutions of the compressible Reynolds-averaged Navier-Stokes equations are obtained for flow over the Lockheed circulation control (CC) airfoil and the General Aviation CC (GACC) airfoil. Numerical issues in computing circulation control flows such as the effects of grid resolution, boundary and initial conditions, and unsteadiness are addressed. For the Lockheed CC airfoil computed solutions are compared with detailed experimental data, which include velocity and Reynolds stress profiles. Three turbulence models, having either one or two transport equations, are considered. Solutions are obtained on a sequence of meshes, with mesh refinement primarily concentrated on the airfoil circular trailing edge. Several effects related to mesh refinement are identified. For example, sometimes sufficient mesh resolution can exclude nonphysical solutions, which can occur in CC airfoil calculations. Also, sensitivities of the turbulence models with mesh refinement are discussed. In the case of the GACC airfoil the focus is on the difference between steady-state and time-accurate solutions. A specific objective is to determine if there is self-excited vortex shedding from the jet slot lip.

   6. Comparison of stratospheric air parcel trajectories calculated from SSU and LIMS satellite data. [Stratospheric Sounding Unit/Limb Infrared Monitor of the Stratosphere]

    NASA Technical Reports Server (NTRS)

    Austin, J.

    1986-01-01

    Midstratospheric trajectories for February and March 1979 are calculated using geopotential analyses derived from Limb Infrared Monitor of the Stratosphere data. These trajectories are compared with the corresponding results using Stratospheric Sounding Unit data. The trajectories are quasi-isentropic in that a radiation scheme is used to supply the cross-isentrope flow. The results show that in disturbed conditions, quantitative agreement between the trajectories, that is, agreement within 25 great circle degrees (GCD) (one GCD is about 110 km), may be valid for only 3 or 4 days, whereas during quiescent periods, quantitative agreement may last up to 10 days. By comparing trajectories calculated with different data, some insight can be gained into errors due to vertical resolution and horizontal resolution (due to infrequent sampling) in the analyzed geopotential height fields. For the disturbed trajectories described in this paper the horizontal resolution of the data was more important than the vertical resolution; however, for the quiescent trajectories, which could be calculated accurately for a longer duration because of the absence of appreciable transients, the vertical resolution of the data was found to be more important than the horizontal resolution. It is speculated that these characteristics are also applicable to trajectories calculated during disturbed and quiescent periods in general. A review of some recently published trajectories shows that the qualitative conclusions of such works remain unaffected when the calculations are repeated using different data.

  7. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements pose special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. The factorization results are then used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
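
    A minimal sketch of the coarsen-then-refine strategy described above; `solve_mcr` is a placeholder for any constrained alternating least-squares solver, and decimation by 2 per level is our illustrative choice, not the paper's coarsening operator:

    ```python
    import numpy as np

    def multiresolution_mcr(D, n_components, solve_mcr, n_levels=3):
        """Factor the most coarsened data first, then reuse (upsampled)
        factors as initial values on each finer level.
        D        : (n_times, n_channels) spectroscopic data matrix
        solve_mcr: callable (Dk, C0, S0) -> (C, S), any constrained solver"""
        levels = [D[::2**k, ::2**k] for k in range(n_levels)][::-1]  # coarse->fine
        C = np.abs(np.random.rand(levels[0].shape[0], n_components))
        S = np.abs(np.random.rand(n_components, levels[0].shape[1]))
        for Dk in levels:
            # stretch the previous factors onto the current grid as initial values
            if C.shape[0] != Dk.shape[0]:
                C = np.repeat(C, 2, axis=0)[:Dk.shape[0]]
            if S.shape[1] != Dk.shape[1]:
                S = np.repeat(S, 2, axis=1)[:, :Dk.shape[1]]
            C, S = solve_mcr(Dk, C, S)
        return C, S
    ```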

  8. Monitoring the lake area changes of the Qinghai-Tibet Plateau using coarse-resolution time series remote sensing data

    NASA Astrophysics Data System (ADS)

    Ma, M.

    2015-12-01

    The Qinghai-Tibet Plateau (QTP) is the world's highest and largest plateau and is occasionally referred to as "the roof of the world". An important "water tower", the QTP contains 1,091 lakes of more than 1.0 km2, which account for 49.4% of the total area of lakes in China. Previous studies of lake area changes in the QTP have mainly used middle-resolution remote sensing data (e.g. Landsat TM). In this study, coarse-resolution time series remote sensing data, MODIS data at a spatial resolution of 250 m, were used to monitor the lake area changes of the QTP during the last 15 years. The dataset is MOD13Q1, and the Normalized Difference Vegetation Index (NDVI) is used to identify lake area wherever NDVI is less than 0. The results show obvious intra-annual changes for most of the lakes. Therefore the annual average and maximum lake areas are calculated from the time series data, which quantify the change characteristics better than a single scene of middle-resolution imagery. The results indicate large spatial variance in the lake area changes across the QTP. The natural driving factors are analyzed to reveal the causes of the changes.
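
    The NDVI-below-zero water rule described above turns directly into an annual statistic; a minimal sketch assuming a stack of MOD13Q1 composites for one year (the array shapes and the 250 m pixel area are our assumptions):

    ```python
    import numpy as np

    def annual_lake_areas(ndvi_stack, pixel_km2=0.25 * 0.25):
        """Annual mean and maximum lake area from a year of MOD13Q1 NDVI
        composites (shape: n_dates x ny x nx). Water is taken as NDVI < 0;
        the pixel area assumes the 250 m grid."""
        water = ndvi_stack < 0                       # per-date water masks
        areas = water.sum(axis=(1, 2)) * pixel_km2   # km^2 per composite date
        return areas.mean(), areas.max()
    ```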

  9. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. 
The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude from the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agreed to within approximately 5% of the MCNP-calculated flux for all of the materials considered. The agreement in the code-to-code comparison cases demonstrates a proof-of-concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivations for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25-500 times shorter than for MCNP, as shown in Table 1. These timings assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.

  10. High-resolution magnetic resonance angiography of the lower extremities with a dedicated 36-element matrix coil at 3 Tesla.

    PubMed

    Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O

    2007-06-01

    Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time, but also suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strengths and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as contemporary exams. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil to MRA datasets acquired with a dedicated peripheral angiography matrix coil and higher parallel imaging factors. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32-channel 3.0 Tesla MR system. In 5 of them, MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution was increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. Inter-reader agreement was calculated using kappa statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination; no adverse events occurred. None of the examinations showed venous overlay; 70% of the examinations showed excellent vessel conspicuity, whereas in 50% of the examinations artifacts occurred. All of these artifacts were judged to be non-disturbing. Inter-reader agreement was good, with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and decrease acquisition time while keeping image quality equal. Venous overlay can be effectively avoided despite the use of high-resolution scans.
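
    The per-voxel normalization mentioned above is simple arithmetic; the sketch below illustrates it with invented matrix sizes and scan times (the study's actual protocol parameters are not reproduced here):

        # Acquisition time per voxel, a minimal sketch. The matrix sizes and
        # scan time below are hypothetical, chosen only to show how a higher
        # PAT factor buys resolution at roughly constant scan time.
        def time_per_voxel_us(scan_time_s, nx, ny, nz):
            """Return acquisition time per voxel in microseconds."""
            return scan_time_s / (nx * ny * nz) * 1e6

        print(time_per_voxel_us(70.0, 384, 288, 88))   # PAT 2, coarser matrix
        print(time_per_voxel_us(70.0, 512, 384, 104))  # PAT 4, finer matrix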

  11. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  12. Real-time RT-PCR high-resolution melting curve analysis and multiplex RT-PCR to detect and differentiate grapevine leafroll-associated virus 3 variant groups I, II, III and VI.

    PubMed

    Bester, Rachelle; Jooste, Anna E C; Maree, Hans J; Burger, Johan T

    2012-09-27

    Grapevine leafroll-associated virus 3 (GLRaV-3) is the main contributing agent of leafroll disease worldwide. Four of the six known GLRaV-3 variant groups have been found in South Africa, but their individual contributions to leafroll disease are unknown. In order to study the pathogenesis of leafroll disease, a sensitive and accurate diagnostic assay is required that can detect the different variant groups of GLRaV-3. In this study, a one-step real-time RT-PCR, followed by high-resolution melting (HRM) curve analysis for the simultaneous detection and identification of GLRaV-3 variants of groups I, II, III and VI, was developed. A melting point confidence interval for each variant group was calculated to include at least 90% of all melting points observed. A multiplex RT-PCR protocol was developed for these four variant groups in order to assess the efficacy of the real-time RT-PCR HRM assay. A universal primer set for GLRaV-3 targeting the heat shock protein 70 homologue (Hsp70h) gene was designed that is able to detect GLRaV-3 variant groups I, II, III and VI and differentiate between them with high-resolution melting curve analysis. The real-time RT-PCR HRM and the multiplex RT-PCR were optimized using 121 GLRaV-3 positive samples. Due to the considerable variation in melting profile observed within each GLRaV-3 group, a confidence interval above 90% was calculated for each variant group, based on the range and distribution of melting points. The intervals of groups I and II could not be distinguished, and a 95% joint confidence interval was calculated for simultaneous detection of group I and II variants. An additional primer pair targeting GLRaV-3 ORF1a was developed that can be used in a subsequent real-time RT-PCR HRM to differentiate between variants of groups I and II. Additionally, the multiplex RT-PCR successfully validated 94.64% of the infections detected with the real-time RT-PCR HRM. The real-time RT-PCR HRM provides a sensitive, automated and rapid tool to detect and differentiate the variant groups in order to study the epidemiology of leafroll disease.
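
    The melting point confidence intervals lend themselves to a compact illustration. The sketch below uses a simple percentile interval as a stand-in for the paper's procedure, with invented Tm values rather than the 121-sample dataset:

        import numpy as np

        # Minimal sketch: for each variant group, find an interval containing
        # at least ~90% of observed melting points. Tm values are invented
        # placeholders, not study data.
        groups = {
            "I/II": np.array([81.2, 81.4, 81.5, 81.6, 81.8, 82.0, 82.1]),
            "III":  np.array([83.0, 83.1, 83.3, 83.4, 83.6]),
            "VI":   np.array([79.0, 79.2, 79.3, 79.5, 79.6]),
        }

        def interval(tm, coverage=0.90):
            lo = np.percentile(tm, 100 * (1 - coverage) / 2)
            hi = np.percentile(tm, 100 * (1 + coverage) / 2)
            return lo, hi

        for name, tm in groups.items():
            print(name, interval(tm))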

  13. MODTRAN3: Suitability as a flux-divergence code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, G.P.; Chetwynd, J.H.; Wang, J.

    1995-04-01

    The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate potential improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.

  14. Interpreting high time resolution galactic cosmic ray observations in a diffusive context

    NASA Astrophysics Data System (ADS)

    Jordan, A.; Spence, H. E.; Blake, J. B.; Shaul, D. A.

    2009-12-01

    We interpret galactic cosmic ray (GCR) variations near Earth within a diffusive context. The variations occur on time-/size-scales ranging from Forbush decreases (Fds), to substructure embedded within Fds, to smaller amplitude and shorter duration variations during relatively benign interplanetary conditions. We use high time resolution GCR observations from the High Sensitivity Telescope (HIST) on Polar and from the Spectrometer for INTEGRAL (SPI) and also use solar wind plasma and magnetic field observations from ACE and/or Wind. To calculate the coefficient of diffusion, we combine these datasets with a simple convection-diffusion model for relativistic charged particles in a magnetic field. We find reasonable agreement between our and previous estimates of the coefficient. We also show whether changes in the coefficient of diffusion are sufficient to explain the above GCR variations.

  15. Electron microscopy of whole cells in liquid with nanometer resolution

    PubMed Central

    de Jonge, N.; Peckys, D. B.; Kremers, G. J.; Piston, D. W.

    2009-01-01

    Single gold-tagged epidermal growth factor (EGF) molecules bound to cellular EGF receptors of fixed fibroblast cells were imaged in liquid with a scanning transmission electron microscope (STEM). The cells were placed in buffer solution in a microfluidic device with electron transparent windows inside the vacuum of the electron microscope. A spatial resolution of 4 nm and a pixel dwell time of 20 μs were obtained. The liquid layer was sufficiently thick to contain the cells with a thickness of 7 ± 1 μm. The experimental findings are consistent with a theoretical calculation. Liquid STEM is a unique approach for imaging single molecules in whole cells with significantly improved resolution and imaging speed over existing methods. PMID:19164524

  16. How Well Can a Footpoint Tracking Method Estimate the Magnetic Helicity Influx during Flux Emergence?

    NASA Astrophysics Data System (ADS)

    Choe, Gwangson; Kim, Sunjung; Kim, Kap-Sung; No, Jincheol

    2015-08-01

    As shown by Démoulin and Berger (2003), the magnetic helicity flux through the solar surface into the solar atmosphere can be exactly calculated if we can trace the motion of footpoints with infinite temporal and spatial resolution. When there is magnetic flux transport across the solar surface, the horizontal velocity of footpoints becomes infinite at the polarity inversion line, although the surface integral yielding the helicity flux does not diverge. In practical applications, finite temporal and spatial resolution causes an underestimate of the magnetic helicity flux when magnetic flux emerges from below the surface, because there is an observational blackout area near a polarity inversion line, whether it is pre-existing or newly formed. In this paper, we consider the emergence of simple magnetic flux ropes and calculate the supremum of the magnitude of the helicity influx that can be estimated from footpoint tracking. The results depend on the ratio of the resolvable length scale to the flux rope diameter. For a Gold-Hoyle flux rope, in which all field lines are uniformly twisted, the observationally estimated helicity influx would be about 90% of the real influx when the flux rope diameter is one hundred times the spatial resolution (a large flux rope), and about 45% when it is ten times (a small flux rope). For Lundquist flux ropes, the errors incurred by observational estimation are smaller than in the Gold-Hoyle case, but could still be as large as 30% of the real influx. Our calculation suggests that the error in the helicity influx estimate is at least half of the real influx, or even larger, when small-scale magnetic structures (less than 10,000 km) emerge into the solar atmosphere.

  17. Impacts of environment on human diseases: a web service for the human exposome

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Vaartjes, Ilonca; Kamphuis, Carlijn; Strak, Maciek; Schmitz, Oliver; Soenario, Ivan; de Jong, Kor

    2017-04-01

    The exposome is the totality of human environmental exposures from conception onwards. Identifying the contribution of the exposome to human diseases and health is a key issue in health research. Examples include the effect of air pollution exposure on cardiovascular diseases, the impact of disease vectors (mosquitos) and surface hydrology exposure on malaria, and the effect of fast food restaurant exposure on obesity. Essential to health research is to disentangle the effects of the exposome and genome on health. Ultimately this requires quantifying the totality of all human exposures, for each individual in the studied human population. This poses a massive challenge to geoscientists, as environmental data are required at high spatial and temporal resolution, with a spatial and temporal coverage representing the area inhabited by the population studied and a time span of several decades. These data then need to be combined with space-time paths of individuals to calculate personal exposures for each individual in the population. The Global and Geo Health Data Centre is taking on this challenge by providing a web service capable of enriching population data with exposome information. Our web service can generate environmental information either from archived national (up to 5 m spatial and 1 h temporal resolution) and global datasets or on the fly using environmental models running as microservices. On top of these environmental data services runs an individual exposure service enabling health researchers to select different spatial and temporal aggregation methods and to upload space-time paths of individuals. These are then enriched with personal exposures and returned to the user. We illustrate the service with an example of individual exposures to air pollutants calculated from hyper-resolution air pollution data and various approaches to estimating the space-time paths of individuals.
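
    Duration-weighted aggregation along a space-time path is the core of such an exposure service. The following minimal sketch assumes a hypothetical hourly NO2 grid and an invented path; it is not the Data Centre's implementation:

        import numpy as np

        # Sketch: sample a gridded pollutant field along a person's
        # space-time path and take the duration-weighted mean. The field,
        # grid indices, and path are all invented for illustration.
        rng = np.random.default_rng(0)
        no2 = rng.uniform(10, 60, size=(24, 100, 100))  # hourly 100x100 grid (ug/m3)

        # Space-time path: (hour, row, col, hours spent at that location).
        path = [(8, 10, 12, 1.0), (9, 40, 55, 8.0), (18, 10, 12, 2.0)]

        weighted = sum(no2[t, i, j] * dt for t, i, j, dt in path)
        total_time = sum(dt for *_, dt in path)
        print("duration-weighted NO2 exposure:", weighted / total_time)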

  18. Cost reduction from resolution/improvement of carcinoid syndrome symptoms following treatment with above-standard dose of octreotide LAR.

    PubMed

    Huynh, Lynn; Totev, Todor; Vekeman, Francis; Neary, Maureen P; Duh, Mei S; Benson, Al B

    2017-09-01

    To calculate the cost reduction associated with diarrhea/flushing symptom resolution/improvement following treatment with above-standard dose octreotide-LAR from the commercial payor's perspective. Diarrhea and flushing are two major carcinoid syndrome symptoms of neuroendocrine tumor (NET). Previously, a study of NET patients from three US tertiary oncology centers (NET 3-Center Study) demonstrated that dose escalation of octreotide LAR to above-standard dose resolved/improved diarrhea/flushing in 79% of the patients within 1 year. Time course of diarrhea/flushing symptom data were collected from the NET 3-Center Study. Daily healthcare costs were calculated from a commercial claims database analysis. For the patient cohort experiencing any diarrhea/flushing symptom resolution/improvement, their observation period was divided into days of symptom resolution/improvement or no improvement, which were then multiplied by the respective daily healthcare cost and summed over 1 year to yield the blended mean annual cost per patient. For patients who experienced no diarrhea/flushing symptom improvement, mean annual daily healthcare cost of diarrhea/flushing over a 1-year period was calculated. The economic model found that 108 NET patients who experienced diarrhea/flushing symptom resolution/improvement within 1 year had statistically significantly lower mean annual healthcare cost/patient than patients with no symptom improvement, by $14,766 (p = .03). For the sub-set of 85 patients experiencing resolution/improvement of diarrhea, their cost reduction was more pronounced, at $18,740 (p = .01), statistically significantly lower than those with no improvement; outpatient costs accounted for 56% of the cost reduction (p = .02); inpatient costs, emergency department costs, and pharmacy costs accounted for the remaining 44%. The economic model relied on two different sources of data, with some heterogeneity in the prior treatment and disease status of patients. Symptom resolution/improvement of diarrhea/flushing after treatment with an above-standard dose of octreotide-LAR in NET was associated with a statistically significant healthcare cost decrease compared to a scenario of no symptom improvement.
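
    The blended-cost construction can be made concrete with a small sketch; the daily costs below are invented placeholders, not the claims-derived values used in the study:

        # Sketch: split a patient-year into days with and without symptom
        # resolution/improvement, weight each by its daily healthcare cost,
        # and compare against a full year with no improvement. Dollar
        # figures are hypothetical.
        daily_cost_improved = 180.0     # $/day while symptoms resolved/improved
        daily_cost_unimproved = 260.0   # $/day while symptoms persist
        days_improved = 240
        days_unimproved = 365 - days_improved

        blended_annual = (days_improved * daily_cost_improved
                          + days_unimproved * daily_cost_unimproved)
        full_year_unimproved = 365 * daily_cost_unimproved
        print("annual cost reduction:", full_year_unimproved - blended_annual)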

  19. A two-parameter scintillation spectrometer system for measurement of secondary proton, deuteron, and triton distributions from materials under 558-MeV-proton irradiation

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A two-parameter scintillation spectrometer system developed and used to obtain proton, deuteron, and triton double differential cross sections from materials under 558-MeV-proton irradiation is described. The system measures both the time of flight of secondary particles over a 488-cm flight path and the energy deposited in a scintillator, 12.7 cm in diameter and 30.48 cm long. The time resolution of the system is 0.39 nsec. The calculated energy resolution based on this time resolution varies with energy from 1.6 percent to 7.75 percent for 50- and 558-MeV protons. Various systematic and statistical errors are evaluated, and the double differential cross sections for secondary proton and deuteron production at 20 deg from a 2.35 g/sq cm thick beryllium target are shown as an example of the results obtainable with this system. The uncertainty in the cross sections for secondary protons varies with particle energy from approximately ±9 percent at 50 MeV to approximately ±11 percent at 558 MeV.
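
    The quoted energy resolutions follow from propagating the 0.39 nsec timing resolution through relativistic time-of-flight kinematics. A short sketch (standard kinematics, not code from the original work) reproduces the 1.6 and 7.75 percent figures:

        import math

        # Propagate the timing resolution into an energy resolution for
        # protons over the 488 cm flight path. From E_k = (gamma-1)*m*c^2
        # and beta = L/(c*t), one obtains dE = m*c^2 * gamma^3 * beta^2 * dt/t.
        MP_MEV = 938.272       # proton rest energy (MeV)
        C = 2.9979e8           # speed of light (m/s)
        L = 4.88               # flight path (m)
        DT = 0.39e-9           # timing resolution (s)

        def energy_resolution(e_kin_mev):
            gamma = 1.0 + e_kin_mev / MP_MEV
            beta = math.sqrt(1.0 - 1.0 / gamma**2)
            t = L / (beta * C)                      # time of flight (s)
            de = MP_MEV * gamma**3 * beta**2 * DT / t
            return de / e_kin_mev

        for e in (50.0, 558.0):
            print(f"{e:5.0f} MeV: {100 * energy_resolution(e):.1f} %")
        # Prints roughly 1.6% and 7.7%, matching the values quoted above.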

  20. Infrared radiation scene generation of stars and planets in celestial background

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Hong, Yaohui; Xu, Xiaojian

    2014-10-01

    An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template [1] is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint, and spectral band over 1.2-35 μm. In the current model, the initial locations of stars are calculated based on the midcourse space experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.

  1. Using force-based adaptive resolution simulations to calculate solvation free energies of amino acid sidechain analogues

    NASA Astrophysics Data System (ADS)

    Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.

    2017-06-01

    The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
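
    As a reminder of the TI step being combined with AdResS here, the sketch below evaluates ΔF = ∫₀¹ ⟨∂U/∂λ⟩ dλ by quadrature; the λ-averages are invented placeholders standing in for simulation output:

        import numpy as np

        # Minimal sketch of Thermodynamic Integration: the solvation free
        # energy is the integral over lambda of the ensemble average of
        # dU/dlambda, here evaluated with the trapezoidal rule. The averages
        # below are synthetic, not AdResS results.
        lam = np.linspace(0.0, 1.0, 11)
        dUdlam = np.array([-40.1, -36.2, -31.8, -27.0, -22.3,
                           -17.9, -13.6, -9.8, -6.3, -3.1, -0.4])  # kJ/mol

        delta_F = np.trapz(dUdlam, lam)
        print("solvation free energy estimate:", delta_F, "kJ/mol")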

  2. Stratospheric NO and NO2 profiles at sunset from analysis of high-resolution balloon-borne infrared solar absorption spectra obtained at 33 deg N and calculations with a time-dependent photochemical model

    NASA Technical Reports Server (NTRS)

    Rinsland, C. P.; Boughner, R. E.; Larsen, J. C.; Goldman, A.; Murcray, F. J.; Murcray, D. G.

    1984-01-01

    Simultaneous stratospheric vertical profiles of NO and NO2 at sunset were derived from an analysis of infrared solar absorption spectra recorded from a float altitude of 33 km with an interferometer system during a balloon flight. A nonlinear least squares procedure was used to analyze the spectral data in regions of absorption by NO and NO2 lines. Normalization factors, determined from calculations of time-dependent altitude profiles with a detailed photochemical model, were included in the onion-peeling analysis to correct for the rapid diurnal changes in NO and NO2 concentrations near sunset. The CO2 profile was also derived from the analysis and is reported.

  3. Toward establishing a definitive Late-Mid Jurassic (M-series) Geomagnetic Polarity Reversal Time Scale through unraveling the nature of Jurassic Quiet Zone.

    NASA Astrophysics Data System (ADS)

    Tominaga, M.; Tivey, M.; Sager, W.

    2017-12-01

    Two major difficulties have hindered improving the accuracy of the Late-Mid Jurassic geomagnetic polarity time scale: a dearth of reliable high-resolution radiometric dates and the lack of a continuous Jurassic geomagnetic polarity time scale (GPTS) record. We present the latest effort towards establishing a definitive Mid Jurassic to Early Cretaceous (M-series) GPTS model using three high-resolution, multi-level (sea surface [0 km], mid-water [3 km], and near-source [5.2 km]) marine magnetic profiles from a seamount-free corridor adjacent to the Waghenaer Fracture Zone in the western Pacific Jurassic Quiet Zone (JQZ). The profiles show a global coherency in magnetic anomaly correlations between two mid-ocean ridge systems (i.e., the Japanese and Hawaiian lineations). Their unprecedented data resolution documents detailed anomaly character (i.e., amplitudes and wavelengths). We confirm that this magnetic anomaly record shows a coherent anomaly sequence from M29 back in time to M42, consistent with that previously suggested from the Japanese lineation in the Pigafetta Basin. Especially noticeable is the M39-M41 Low Amplitude Zone defined in the Pigafetta Basin, which potentially defines the bounds of JQZ seafloor. We assessed the anomaly source with regard to the crustal architecture, including the effects of Cretaceous volcanism on crustal magnetization, and conclude that the anomaly character faithfully represents changes in geomagnetic field intensity and polarity over time and is mostly free of any overprint of the original Jurassic magnetic remanence by later Cretaceous volcanism. We have constructed polarity block models (RMS <5 nT [normalized] between observed and calculated profiles) for each of the survey lines, yielding three potential GPTS candidate models with different source-to-sensor resolutions, from M19-M38, which can be compared to currently available magnetostratigraphic records. The overall polarity reversal rates calculated from each of the models are anomalously high, which is consistent with previous observations from the Japanese M-series sequence. The anomalously high reversal rates during a period of apparently low field intensity suggest a unique period of geomagnetic field behavior in Earth's history.

  4. Thermal detectors for high resolution spectroscopy

    NASA Technical Reports Server (NTRS)

    Mccammon, D.; Juda, M.; Zhang, J.; Kelley, R. L.; Moseley, S. H.; Szymkowiak, A. E.

    1986-01-01

    Cryogenic microcalorimeters can be made sensitive enough to measure the energy deposited by a single particle or X-ray photon with an accuracy of about one electron volt. It may also be possible to construct detectors of several kilograms mass whose resolution is only a few times worse than this. Data from relatively crude test devices are in good agreement with thermal performance calculations, and a total system noise of 11 eV FWHM has been obtained for a silicon detector operating at 98 mK. Observations of 35 eV FWHM for 6-keV X-rays with a different device have been made.

  5. ON ASYMMETRY OF MAGNETIC HELICITY IN EMERGING ACTIVE REGIONS: HIGH-RESOLUTION OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian Lirong; Alexander, David; Zhu Chunming

    We employ the DAVE (differential affine velocity estimator) tracking technique on a time series of Michelson Doppler Imager (MDI)/1 minute high spatial resolution line-of-sight magnetograms to measure the photospheric flow velocity for three newly emerging bipolar active regions (ARs). We separately calculate the magnetic helicity injection rate of the leading and following polarities to confirm or refute the magnetic helicity asymmetry found by Tian and Alexander using MDI/96 minute low spatial resolution magnetograms. Our results demonstrate that the magnetic helicity asymmetry is robust, being present in the three ARs studied, two of which have an observed balance of the magnetic flux. The magnetic helicity injection rate measured is found to depend little on the window size selected, but does depend on the time interval used between the two successive magnetograms being tracked. It is found that the measurement of the magnetic helicity injection rate performs well for a window size between 12 × 10 and 18 × 15 pixels and at a time interval Δt = 10 minutes. Moreover, short-lived magnetic structures, 10-60 minutes, are found to contribute 30%-50% of the magnetic helicity injection rate. Comparing with the results calculated from MDI/96 minute data, we find that the MDI/96 minute data can, in general, outline the main trend of the magnetic properties, but they significantly underestimate the magnetic flux in strong field regions and are not appropriate for quantitative tracking studies, so provide a poor estimate of the amount of magnetic helicity injected into the corona.

  6. Sub-millisecond electron density profile measurement at the JET tokamak with the fast lithium beam emission spectroscopy system

    NASA Astrophysics Data System (ADS)

    Réfy, D. I.; Brix, M.; Gomes, R.; Tál, B.; Zoletnik, S.; Dunai, D.; Kocsis, G.; Kálvin, S.; Szabolics, T.; JET Contributors

    2018-04-01

    Diagnostic alkali atom (e.g., lithium) beams are routinely used to diagnose magnetically confined plasmas, namely, to measure the plasma electron density profile in the edge and the scrape off layer region. A light splitting optics system was installed into the observation system of the lithium beam emission spectroscopy diagnostic at the Joint European Torus (JET) tokamak, which allows simultaneous measurement of the beam light emission with a spectrometer and a fast avalanche photodiode (APD) camera. The spectrometer measurement allows density profile reconstruction with ˜10 ms time resolution, absolute position calculation from the Doppler shift, spectral background subtraction as well as relative intensity calibration of the channels for each discharge. The APD system is capable of measuring light intensities on the microsecond time scale. However ˜100 μs integration is needed to have an acceptable signal to noise ratio due to moderate light levels. Fast modulation of the beam up to 30 kHz is implemented which allows background subtraction on the 100 μs time scale. The measurement covers the 0.9 < ρpol < 1.1 range with 6-10 mm optical resolution at the measurement location which translates to 3-5 mm radial resolution at the midplane due to flux expansion. An automated routine has been developed which performs the background subtraction, the relative calibration, and the comprehensive error calculation, runs a Bayesian density reconstruction code, and loads results to the JET database. The paper demonstrates the capability of the APD system by analyzing fast phenomena like pellet injection and edge localized modes.

  7. MRI T2 Mapping of the Knee Articular Cartilage Using Different Acquisition Sequences and Calculation Methods at 1.5 Tesla.

    PubMed

    Mars, Mokhtar; Bouaziz, Mouna; Tbini, Zeineb; Ladeb, Fethi; Gharbi, Souha

    2018-06-12

    This study aims to determine how Magnetic Resonance Imaging (MRI) acquisition techniques and calculation methods affect T2 values of knee cartilage at 1.5 Tesla and to identify sequences that can be used for high-resolution T2 mapping in short scanning times. This study was performed on a phantom and on twenty-nine patients who underwent MRI of the knee joint at 1.5 Tesla. The protocol includes T2 mapping sequences based on Single Echo Spin Echo (SESE), Multi-Echo Spin Echo (MESE), Fast Spin Echo (FSE) and Turbo Gradient Spin Echo (TGSE). The T2 relaxation times were quantified and evaluated using three calculation methods (MapIt, Syngo Offline and monoexponential fit). Signal to Noise Ratios (SNR) were measured in all sequences. All statistical analyses were performed using the t-test. The average T2 values in the phantom were 41.7 ± 13.8 ms for SESE, 43.2 ± 14.4 ms for MESE, 42.4 ± 14.1 ms for FSE and 44 ± 14.5 ms for TGSE. In the patient study, the mean differences were 6.5 ± 8.2 ms, 7.8 ± 7.6 ms and 8.4 ± 14.2 ms for MESE, FSE and TGSE compared to SESE, respectively; these differences were not statistically significant (p > 0.05). The comparison between the three calculation methods showed no significant difference (p > 0.05). The t-test showed no significant difference between SNR values for all sequences. T2 values depend not only on the sequence type but also on the calculation method. None of the sequences revealed significant differences compared to the SESE reference sequence. TGSE, with its short scanning time, can be used for high-resolution T2 mapping. © 2018 The Author(s). Published by S. Karger AG, Basel.
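
    Of the three calculation methods, the monoexponential fit is easily sketched; the echo times and signal values below are synthetic, not patient data:

        import numpy as np
        from scipy.optimize import curve_fit

        # Minimal sketch of the monoexponential T2 fit mentioned above:
        # S(TE) = S0 * exp(-TE / T2), fitted per voxel (here, one synthetic voxel).
        te = np.array([13.8, 27.6, 41.4, 55.2, 69.0])                 # echo times (ms)
        sig = 950.0 * np.exp(-te / 42.0) + np.random.normal(0, 5, te.size)

        def mono_exp(te, s0, t2):
            return s0 * np.exp(-te / t2)

        (s0, t2), _ = curve_fit(mono_exp, te, sig, p0=(sig[0], 40.0))
        print(f"fitted T2 = {t2:.1f} ms")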

  8. Ultrafast X-Ray Absorption Spectroscopy of Isochorically Heated Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Engelhorn, Kyle Craig

    This dissertation will present a series of new tools, together with new techniques, focused on the understanding of warm and dense matter. We report on the development of a high time resolution and high detection efficiency x-ray camera. The camera is integrated with a short pulse laser and an x-ray beamline at the Advanced Light Source synchrotron. This provides an instrument for single shot, broadband x-ray absorption spectroscopy of warm and dense matter with 2 picosecond time resolution. Warm and dense matter is created by isochorically heating samples of known density with an ultrafast optical laser pulse, and x-ray absorption spectroscopy probes the unoccupied electronic density of states before the onset of hydrodynamic expansion and electron-ion equilibrium is reached. Measured spectra from a variety of materials are compared with first-principles molecular dynamics and density functional theory calculations. In heated silicon dioxide spectra, two novel pre-edge features are observed, a peak below the band gap and absorption within the band gap, while a reduction was observed in the features above the edge. From consideration of the calculated spectra, the peak below the gap is attributed to valence electrons that have been promoted to the conduction band, the absorption within the gap is attributed to broken Si-O bonds, and the reduction above the edge is attributed to an elevated ionic temperature. In heated copper spectra, a time-dependent shift and broadening of the absorption edge are observed, consistent with an elevated electron temperature. The temporal evolution of the electronic temperature is accurately determined by fitting the measured spectra with calculated spectra. The electron-ion equilibration is studied with a two-temperature model. In heated nickel spectra, a shift of the absorption edge is observed. This shift is found to be inconsistent with calculated spectra and independent of incident laser fluence. A shift of the chemical potential is applied to the calculated spectra to obtain satisfactory agreement with measured spectra.

  9. MISR Level 2 TOA/Cloud Classifier parameters (MIL2TCCL_V2)

    NASA Technical Reports Server (NTRS)

    Diner, David J. (Principal Investigator)

    The TOA/Cloud Classifiers contain the Angular Signature Cloud Mask (ASCM), a scene classifier calculated using support vector machine technology (SVM) both of which are on a 1.1 km grid, and cloud fractions at 17.6 km resolution that are available in different height bins (low, middle, high) and are also calculated on an angle-by-angle basis. [Location=GLOBAL] [Temporal_Coverage: Start_Date=2000-02-24; Stop_Date=] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=17.6 km; Longitude_Resolution=17.6 km; Horizontal_Resolution_Range=10 km - < 50 km or approximately .09 degree - < .5 degree; Temporal_Resolution=about 15 orbits/day; Temporal_Resolution_Range=Daily - < Weekly, Daily - < Weekly].

  10. Depth resolution and preferential sputtering in depth profiling of sharp interfaces

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Han, Y. S.; Wang, J. Y.

    2017-07-01

    The influence of preferential sputtering on the depth resolution of sputter depth profiles is studied for different sputtering rates of the two components at an A/B interface. Surface concentration and intensity depth profiles on both the sputtering time scale (as measured) and the depth scale are obtained by calculations with an extended Mixing-Roughness-Information depth (MRI) model. The results show a clear difference between the two extreme cases: (a) preponderant roughness and (b) preponderant atomic mixing. In case (a), the interface width on the time scale (Δt(16-84%)) increases with preferential sputtering if the faster-sputtering component is on top of the slower-sputtering component, but the true resolution on the depth scale (Δz(16-84%)) stays constant. In case (b), the interface width on the time scale stays constant, but the true resolution on the depth scale varies with preferential sputtering. When the atomic mixing and roughness parameters are of a similar order of magnitude, a transition state between the two extremes is obtained. While the normalized intensity profile of SIMS represents that of the surface concentration, an additional broadening effect is encountered in XPS or AES through the influence of the mean electron escape depth, which may even cause an additional matrix effect at the interface.

  11. Robust estimation of pulse wave transit time using group delay.

    PubMed

    Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C

    2014-03-01

    To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point using the half-maximum of the curves and TT-wave using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
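
    The group-delay idea, the phase slope of the cross-spectrum between the two flow curves, can be sketched on synthetic waveforms (the clinical pipeline is more involved):

        import numpy as np

        # Minimal sketch: the transit time is the (negative) slope of the
        # cross-spectrum phase between ascending- and descending-aorta flow
        # curves. Waveforms below are synthetic Gaussian pulses.
        fs = 250.0                                   # samples per second
        t = np.arange(0, 1.0, 1 / fs)
        pulse = np.exp(-((t - 0.25) / 0.04) ** 2)    # ascending-aorta waveform
        delay = 0.060                                # true transit time (s)
        pulse_d = np.exp(-((t - 0.25 - delay) / 0.04) ** 2)  # descending aorta

        X, Y = np.fft.rfft(pulse), np.fft.rfft(pulse_d)
        f = np.fft.rfftfreq(t.size, 1 / fs)
        phase = np.unwrap(np.angle(Y * np.conj(X)))

        band = (f > 0) & (f < 10)                    # fit over low-frequency band
        slope = np.polyfit(f[band], phase[band], 1)[0]
        print("estimated transit time (ms):", -slope / (2 * np.pi) * 1e3)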

  12. Temporal modulation transfer functions in auditory receptor fibres of the locust ( Locusta migratoria L.).

    PubMed

    Prinz, P; Ronacher, B

    2002-08-01

    The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions, the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.

  13. Development of a real-time transport performance optimization methodology

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn

    1996-01-01

    The practical application of real-time performance optimization is addressed (using a wide-body transport simulation) based on real-time measurements and calculation of incremental drag from forced response maneuvers. Various controller combinations can be envisioned, although this study used symmetric outboard aileron and stabilizer. The approach is based on navigation instrumentation and other measurements found on state-of-the-art transports. This information is used to calculate winds and angle of attack. Thrust is estimated from a representative engine model as a function of measured variables. The lift and drag equations are then used to calculate lift and drag coefficients. An expression for drag coefficient, which is a function of parasite drag, induced drag, and aileron drag, is solved from forced excitation response data. Estimates of the parasite drag, curvature of the aileron drag variation, and minimum drag aileron position are produced. Minimum drag is then obtained by repositioning the symmetric aileron. Simulation results are also presented which evaluate the effects of measurement bias and resolution.
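
    The minimum-drag step reduces to fitting a parabola to drag coefficient versus aileron position and taking its vertex; a minimal sketch with invented samples:

        import numpy as np

        # Sketch: model measured drag coefficient as a parabola in symmetric
        # aileron deflection and solve for the deflection that minimizes it.
        # The samples below are hypothetical, not flight or simulation data.
        delta = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])            # aileron (deg)
        cd = np.array([0.0268, 0.0259, 0.0255, 0.0256, 0.0263])  # drag coefficient

        c2, c1, c0 = np.polyfit(delta, cd, 2)   # cd ~ c2*d^2 + c1*d + c0
        delta_min = -c1 / (2 * c2)              # vertex of the parabola
        print(f"minimum-drag aileron position: {delta_min:.2f} deg")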

  14. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition (⟨θ⟩) is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts. These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.

  15. Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils

    NASA Technical Reports Server (NTRS)

    Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.

    1992-01-01

    The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.
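
    Under the Gaussian assumptions stated above, the convolution combines the beam diameter and average broadening in quadrature; a minimal sketch (the numbers are illustrative only):

        import math

        # Sketch: the convolution of two Gaussian distributions is Gaussian
        # with widths adding in quadrature, so R = sqrt(d^2 + b^2) for beam
        # diameter d (FWTM) and average beam broadening b.
        def spatial_resolution(d_fwtm_nm, broadening_nm):
            return math.hypot(d_fwtm_nm, broadening_nm)

        print(spatial_resolution(2.0, 5.0))   # thicker foil: broadening dominates
        print(spatial_resolution(2.0, 0.5))   # very thin foil: beam-limited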

  16. MODTRAN2: Evolution and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, G.P.; Chetwynd, J.H.; Kneizys, F.X.

    1994-12-31

    MODTRAN2 is the most recent version of the Moderate Resolution Atmospheric Radiance and Transmittance Model. It encompasses all the capabilities of LOWTRAN 7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Because the band model parameters and their applications to transmittance calculations have been independently developed using equivalent width binning procedures, validation against full Voigt line-by-line calculations is important. Extensive spectral comparisons have shown excellent agreement. In addition, simple timing runs of MODTRAN vs. FASCOD3P show an improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. It has been previously established that not only is MODTRAN an excellent band model for full path calculations, but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies. Enhancements expected to appear in MODTRAN3 relate directly to climate change studies. The addition of SO₂ and NO₂ in the UV, along with upgraded ozone Chappuis bands in the visible, will also be part of MODTRAN3.

  17. SU-F-T-179: Fast and Accurate Profile Acquisition for Proton Beam Using Multi-Ion Chamber Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X; Zou, J; Chen, T

    2016-06-15

    Purpose: Proton beam profile measurement is more time-consuming than photon beam measurement. Due to the energy modulation during proton delivery, chambers have to move step-by-step instead of continuously. Multi-ion chamber arrays are appealing for this task since multiple measurements can be performed at once. However, their utilization suffers from sparse spatial resolution and the potential intrinsic volume-averaging effect of the disk-shaped ion chambers. We proposed an approach to measure proton beam profiles accurately and efficiently. Methods: The Mevion S250 proton system and IBA Matrixx ion chamber arrays were used in this study. The Matrixx has an inter-chamber distance of 7.62 mm and a chamber diameter of 4.5 mm. We measured the same beam profile by moving the Matrixx seven times, 1 mm each time, along the y axis. All 7 measurements were superimposed to get a "finer" profile with 1 mm spatial resolution. Coarser resolution profiles of 2 mm and 3 mm were also generated by using subsets of measurements. Those profiles were compared to the TPS-calculated beam profile. Gamma analysis was performed for 2D dose maps to evaluate the difference to the TPS dose plane. Results: Preliminary results showed a large discrepancy between the TPS-calculated profile and the single-measurement profile with 7.62 mm resolution. A good match could be achieved when the resolution was reduced to 3 mm by adding one extra measurement. Gamma analysis for the 2D dose map of a 10×10 field showed a passing rate (γ ≤ 1) of 90.6% using a 3%/3 mm criterion for a single measurement, which increased to 92.3% for 2-measurement superimposition, and slightly further to 92.9% for 7-measurement superimposition. Conclusion: The results indicated that 2 measurements shifted by 3 mm using the Matrixx generated a smooth proton beam profile matching the Eclipse beam profile well. We suggest using this 2-measurement approach in the clinic for double-scattering proton beam profile measurement.
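
    The superimposition step amounts to interleaving shifted measurements; the sketch below uses a synthetic field shape rather than measured Matrixx data:

        import numpy as np

        # Sketch: two array measurements, the second shifted by 3 mm, merge
        # into a single profile sampled much more densely than the native
        # 7.62 mm chamber pitch. Positions and field shape are synthetic.
        pitch = 7.62                                  # native chamber pitch (mm)
        x0 = np.arange(-10, 11) * pitch               # first measurement positions
        x1 = x0 + 3.0                                 # array shifted by 3 mm

        def field(x):                                 # idealized ~10 cm flat field
            return 1.0 / (1.0 + np.exp((np.abs(x) - 50.0) / 3.0))

        x = np.concatenate([x0, x1])
        d = np.concatenate([field(x0), field(x1)])
        order = np.argsort(x)                         # interleave the two passes
        x_merged, d_merged = x[order], d[order]
        print(np.round(x_merged[18:24], 2))           # sampling is now sub-pitch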

  18. Velocities along Byrd Glacier, East Antarctica, derived from Automatic Feature Tracking

    NASA Astrophysics Data System (ADS)

    Stearns, L. A.; Hamilton, G. S.

    2003-12-01

    Automatic feature tracking techniques are applied to recently acquired ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) imagery in order to determine the velocity field of Byrd Glacier, East Antarctica. The software IMCORR tracks the displacement of surface features (crevasses, drift mounds) in time-sequential images to produce the velocity field. Due to its high resolution, ASTER imagery is ideally suited for detecting small feature changes. The result is a dense array of velocity vectors, which allows a more thorough characterization of glacier dynamics. Byrd Glacier drains approximately 20.5 km3 of ice into the Ross Ice Shelf every year. Previous studies have determined ice velocities for Byrd Glacier using photogrammetry, field measurements, and manual feature tracking. The most recent velocity data are from 1986 and, as is evident in the West Antarctic ice streams, substantial changes in velocity can occur on decadal time scales. The application of ASTER-based velocities fills this gap, and the increased temporal resolution allows a more complete analysis of Byrd Glacier. The ASTER-derived ice velocities are used to update mass balance and force budget calculations to assess the stability of Byrd Glacier. Ice thickness information from BEDMAP, surface slopes from the OSUDEM, and a compilation of accumulation rates are used to complete the calculations.

  19. Chiral Separation of G-type Chemical Warfare Nerve Agents via Analytical Supercritical Fluid Chromatography

    DTIC Science & Technology

    2014-01-01

    Chromatographic performance was characterized in terms of the number of theoretical plates (N), retention factor (k), separation factor (α), and resolution (RS); these parameters were used to verify the enantioselectivity of the method. Retention time, tR, was determined by averaging the time to peak maxima from subsequent injections.

  20. Reliability of the Parabola Approximation Method in Heart Rate Variability Analysis Using Low-Sampling-Rate Photoplethysmography.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol

    2017-10-24

    Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and evaluated it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement, and root mean squared relative error are presented. The elapsed time required to compute each interpolation algorithm was also investigated. The results indicated that parabola approximation is a simple, fast, and accurate algorithm-based method for compensating for the low timing resolution of pulse beat intervals. In addition, the method showed comparable performance with the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
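
    Parabola approximation refines a peak time by fitting a parabola through the maximum sample and its two neighbours; a minimal sketch on a synthetic pulse:

        import numpy as np

        # Sketch of parabolic peak interpolation: fit a parabola through the
        # maximum sample and its two neighbours to refine the pulse-peak time
        # beyond the 20 Hz sampling grid.
        def parabolic_peak_time(sig, fs):
            i = int(np.argmax(sig))
            y0, y1, y2 = sig[i - 1], sig[i], sig[i + 1]
            # Vertex offset (in samples) of the parabola through the 3 points.
            delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
            return (i + delta) / fs

        fs = 20.0                                  # low-sampling-rate PPG (Hz)
        t = np.arange(0, 1.0, 1 / fs)
        sig = np.exp(-((t - 0.487) / 0.08) ** 2)   # synthetic pulse, off-grid peak
        print("refined peak time (s):", parabolic_peak_time(sig, fs))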

  1. Racial and ethnic differences in patient navigation: Results from the Patient Navigation Research Program.

    PubMed

    Ko, Naomi Y; Snyder, Frederick R; Raich, Peter C; Paskett, Electra D; Dudley, Donald J; Lee, Ji-Hyun; Levine, Paul H; Freund, Karen M

    2016-09-01

    Patient navigation was developed to address barriers to timely care and reduce cancer disparities. The current study explored navigation and racial and ethnic differences in time to the diagnostic resolution of a cancer screening abnormality. The authors conducted an analysis of the multisite Patient Navigation Research Program. Participants with an abnormal cancer screening test were allocated to either navigation or control. The unadjusted median time to resolution was calculated for each racial and ethnic group by navigation and control. Multivariable Cox proportional hazards models were fit, adjusting for sex, age, cancer abnormality type, and health insurance and stratifying by center of care. Among a sample of 7514 participants, 29% were non-Hispanic white, 43% were Hispanic, and 28% were black. In the control group, black individuals were found to have a longer median time to diagnostic resolution (108 days) compared with non-Hispanic white individuals (65 days) or Hispanic individuals (68 days) (P<.0001). In the navigated groups, black individuals had a reduction in the median time to diagnostic resolution (97 days) (P<.0001). In the multivariable models, among controls, black race was found to be associated with an increased delay to diagnostic resolution (hazard ratio, 0.77; 95% confidence interval, 0.69-0.84) compared with non-Hispanic white individuals, which was reduced in the navigated arm (hazard ratio, 0.85; 95% confidence interval, 0.77-0.94). Patient navigation appears to have the greatest impact among black patients, who had the greatest delays in care. Cancer 2016;122:2715-2722. © 2016 American Cancer Society.

  2. Calculation of spherical harmonics and Wigner d functions by FFT. Applications to fast rotational matching in molecular replacement and implementation into AMoRe.

    PubMed

    Trapani, Stefano; Navaza, Jorge

    2006-07-01

    The FFT calculation of spherical harmonics, Wigner D matrices and rotation function has been extended to all angular variables in the AMoRe molecular replacement software. The resulting code avoids singularity issues arising from recursive formulas, performs faster and produces results with at least the same accuracy as the original code. The new code aims at permitting accurate and more rapid computations at high angular resolution of the rotation function of large particles. Test calculations on the icosahedral IBDV VP2 subviral particle showed that the new code performs on the average 1.5 times faster than the original code.

  3. MISR Level 2 TOA/Cloud Classifier parameters (MIL2TCCL_V3)

    NASA Technical Reports Server (NTRS)

    Diner, David J. (Principal Investigator)

    The TOA/Cloud Classifiers contain the Angular Signature Cloud Mask (ASCM) and a scene classifier calculated using support vector machine (SVM) technology, both of which are on a 1.1 km grid, as well as cloud fractions at 17.6 km resolution that are available in different height bins (low, middle, high) and are also calculated on an angle-by-angle basis. [Temporal_Coverage: Start_Date=2000-02-24; Stop_Date=] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1.1 km; Longitude_Resolution=1.1 km; Temporal_Resolution=about 15 orbits/day].

  4. High speed FPGA-based Phasemeter for the far-infrared laser interferometers on EAST

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Liu, H.; Zou, Z.; Li, W.; Lian, H.; Jie, Y.

    2017-12-01

    The far-infrared laser-based HCN interferometer and the POlarimeter/INTerferometer (POINT) system are important diagnostics for plasma density measurement on the EAST tokamak. Both HCN and POINT provide high spatial and temporal resolution electron density measurements and are used for plasma density feedback control. The density is calculated by measuring the real-time phase difference between the reference beams and the probe beams. For long-pulse operation on EAST, the density calculation has to meet real-time and high-precision requirements. In this paper, a Phasemeter for far-infrared laser-based interferometers is introduced. The FPGA-based Phasemeter uses fast ADCs to acquire the three-frequency signals from VDI planar-diode mixers, and implements digital filters and an FFT algorithm in the FPGA to provide real-time, high-precision electron density output. Implementation of the Phasemeter will be helpful for future plasma real-time feedback control in long-pulse discharges.
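
    A digital phasemeter of this kind ultimately reduces to estimating the phase of each channel at the intermediate frequency and differencing them; the line-integrated density is proportional to the accumulated phase difference. A minimal sketch of the FFT step, with an assumed 1 MHz intermediate frequency and illustrative names (not the EAST firmware):

        import numpy as np

        def phase_difference(ref, probe, fs, f_if):
            """Phase of `probe` relative to `ref` at intermediate frequency f_if,
            taken from the FFT bin nearest f_if (radians, wrapped to +/-pi)."""
            n = len(ref)
            k = int(round(f_if * n / fs))                 # FFT bin of the IF carrier
            phi = np.angle(np.fft.rfft(probe)[k]) - np.angle(np.fft.rfft(ref)[k])
            return (phi + np.pi) % (2 * np.pi) - np.pi

        # Example: 1 MHz IF sampled at 10 MHz; probe lags the reference by 0.3 rad.
        fs, f_if, n = 10e6, 1e6, 4096
        t = np.arange(n) / fs
        ref = np.cos(2 * np.pi * f_if * t)
        probe = np.cos(2 * np.pi * f_if * t - 0.3)
        print(phase_difference(ref, probe, fs, f_if))     # approximately -0.3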

  5. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and thus the affected brain areas, depend on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
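
    To make the multigrid idea concrete, the sketch below solves a toy 1D Poisson problem with a recursive V-cycle (damped Jacobi smoothing, full-weighting restriction, linear prolongation). It is a stand-in for, not a reproduction of, the paper's matrix-free 3D finite-element solver:

        import numpy as np

        def smooth(u, f, h, sweeps=3, omega=2/3):
            # Damped Jacobi relaxation for -u'' = f with zero Dirichlet boundaries.
            for _ in range(sweeps):
                up = np.pad(u, 1)
                u = (1 - omega) * u + omega * 0.5 * (up[:-2] + up[2:] + h * h * f)
            return u

        def residual(u, f, h):
            up = np.pad(u, 1)
            return f - (2 * u - up[:-2] - up[2:]) / (h * h)

        def restrict(r):
            # Full weighting onto the coarse grid (fine interior size 2m+1 -> m).
            return 0.25 * r[:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]

        def prolong(e):
            # Linear interpolation back to the fine grid.
            u = np.zeros(2 * len(e) + 1)
            u[1::2] = e
            ep = np.pad(e, 1)
            u[0::2] = 0.5 * (ep[:-1] + ep[1:])
            return u

        def v_cycle(u, f, h):
            u = smooth(u, f, h)
            if len(u) >= 3:
                ec = v_cycle(np.zeros((len(u) - 1) // 2),
                             restrict(residual(u, f, h)), 2 * h)
                u = u + prolong(ec)                       # coarse-grid correction
            return smooth(u, f, h)

        n = 127                                           # 2**7 - 1 interior points
        h = 1.0 / (n + 1)
        x = (np.arange(n) + 1) * h
        f = np.pi**2 * np.sin(np.pi * x)                  # exact solution sin(pi x)
        u = np.zeros(n)
        for _ in range(8):                                # a few V-cycles suffice
            u = v_cycle(u, f, h)
        print(np.max(np.abs(u - np.sin(np.pi * x))))      # small discretization error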

  6. High temporal resolution aerosol retrieval using Geostationary Ocean Color Imager: application and initial validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yuhuan; Li, Zhengqiang; Zhang, Ying; Hou, Weizhen; Xu, Hua; Chen, Cheng; Ma, Yan

    2014-01-01

    The Geostationary Ocean Color Imager (GOCI) provides multispectral imagery of the East Asia region hourly from 9:00 to 16:00 local time (GMT+9), collecting imagery in eight spectral channels (412, 443, 490, 555, 660, 680, 745, and 865 nm) with a spatial resolution of 500 m. This brings significant advantages to high temporal resolution environmental monitoring. We present the retrieval of aerosol optical depth (AOD) in northern China based on GOCI data. Cross-calibration was performed against Moderate Resolution Imaging Spectroradiometer (MODIS) data in order to correct the land calibration bias of the GOCI sensor. AOD retrievals were then accomplished using a look-up table (LUT) strategy under the assumption that aerosol properties vary quickly in time while the surface reflectance varies slowly. The AOD retrieval algorithm calculates AOD by minimizing the surface reflectance variations of a series of observations within a short period of time, such as several days. Hourly AOD variations were monitored, and the retrieved AOD agreed well with AErosol RObotic NETwork (AERONET) ground-based measurements, with an R2 of approximately 0.74 at validation sites in the cities of Beijing and Xianghe, although the intercept bias may be high in specific cases. The comparisons with MODIS products also show good agreement in AOD spatial distribution. This work suggests that GOCI imagery can provide high temporal resolution monitoring of atmospheric aerosols over land, which is of great interest in climate change studies and environmental monitoring.

  7. Theoretical and experimental study of mirrorless fiber optics refractometer based on quasi-Gaussian approach

    NASA Astrophysics Data System (ADS)

    Abdullah, M.; Krishnan, Ganesan; Saliman, Tiffany; Fakaruddin Sidi Ahmad, M.; Bidin, Noriah

    2018-03-01

    A mirrorless refractometer was studied and analyzed using the quasi-Gaussian beam approach. The Fresnel equation for reflectivity at the interface between two media with different refractive indices was used to calculate the directional reflectivity, R. Various liquid samples from 1.3325 to 1.4657 refractive index units were used. Experimentally, a fiber bundle probe with a concentric configuration of 16 receiving fibers and a single transmitting fiber was employed to verify the developed models. The sensor performance in terms of sensitivity, linear range, and resolution was analyzed and calculated. It has been shown that the developed theoretical models are capable of providing quantitative guidance on the output of the sensor with high accuracy. The highest resolution of the sensor was 4.39 × 10-3 refractive index units, obtained by correlating the peak voltage with the refractive index. This resolution is sufficient for determining the specific refractive index increment of most polymer solutions and certain proteins, and for monitoring bacterial growth. The accuracy, simplicity, and effectiveness of the proposed sensor over a long period of time, together with its non-contact measurement, reflect a good potential for commercialization.
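
    The directional reflectivity model starts from the Fresnel relation at the fiber-liquid interface; at normal incidence it reduces to the familiar ((n1 - n2)/(n1 + n2))^2 form. A minimal sketch of that reduced case, with an assumed fiber index of 1.45 (the paper's full model is directional, so this is a simplification):

        def fresnel_reflectivity(n1, n2):
            """Normal-incidence Fresnel reflectivity between media n1 and n2."""
            return ((n1 - n2) / (n1 + n2)) ** 2

        # Reflectivity falls as the sample index approaches the fiber index,
        # which is what lets the detected intensity track the refractive index.
        for n2 in (1.3325, 1.4657):
            print(n2, fresnel_reflectivity(1.45, n2))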

  8. EIT image reconstruction with four dimensional regularization.

    PubMed

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated, especially in high speed EIT systems. This paper proposes a 4-D image reconstruction method for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.

  9. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    NASA Astrophysics Data System (ADS)

    Michaels, Joshua A.

    With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e. super-resolution) have been developed almost since such imagery has existed. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than those methods that yield interpolated pixel values with consequential loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof-of-concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures from a Sony DSC-S600 digital point-and-shoot camera on a tripod were taken of a large US geological map under fluorescent lighting. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. Fractional-area super-resolution is also very sensitive to relative input image co-registration, which must be accurate to a sub-pixel degree. However, this technique, if input conditions permit, could be applied as a "pinpoint" super-resolution technique, by applying it only to very small areas with very good input image co-registration.
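
    The core of the technique is a linear system in which each low-resolution pixel value equals the area-weighted sum of the high-resolution pixels it overlaps. The 1D toy below, using numpy rather than the thesis's Fortran/LAPACK code, shows exact recovery under perfect synthetic conditions; geometry and values are illustrative:

        import numpy as np

        x_true = np.array([3., 7., 2., 9., 4., 6.])       # high-res "truth", 6 cells
        width, n = 1.5, 6                                 # low-res pixel spans 1.5 cells
        rows, b = [], []
        for shift in (0.0, 0.75):                         # two sub-pixel-shifted images
            for p in range(4):                            # 4 low-res pixels per image
                left = p * width + shift
                a = np.array([max(0.0, min(c + 1.0, left + width) - max(float(c), left))
                              for c in range(n)])         # fractional overlap areas
                rows.append(a)
                b.append(a @ x_true)                      # synthetic low-res observation
        x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
        print(np.round(x, 6))                             # recovers x_true exactly

    With noisy, mis-registered inputs the same system becomes ill-conditioned, which is consistent with the failure of the photographic test described above.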

  10. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  11. Hybrid space-airborne bistatic SAR geometric resolutions

    NASA Astrophysics Data System (ADS)

    Moccia, Antonio; Renga, Alfredo

    2009-09-01

    Performance analysis of bistatic Synthetic Aperture Radar (SAR) characterized by arbitrary geometric configurations is usually complex and time-consuming, since the system impulse response has to be evaluated by bistatic SAR processing. This approach does not allow derivation of general equations describing the behaviour of image resolutions as the observation geometry varies. It is well known that for an arbitrary bistatic SAR configuration the range and azimuth directions are not perpendicular, but the capability to produce an image is not prevented, as it depends only on the possibility of generating image pixels from time delay and Doppler measurements. However, even if the range and Doppler resolutions are separately good, bistatic SAR geometries can exist in which imaging capabilities are very poor, namely when the range and Doppler directions become locally parallel. The present paper aims to derive analytical tools for calculating the geometric resolutions of an arbitrary bistatic SAR configuration. The method has been applied to a hybrid bistatic Synthetic Aperture Radar formed by a spaceborne illuminator and a receiving-only airborne forward-looking Synthetic Aperture Radar (F-SAR), which can take advantage of the spaceborne illuminator to overcome the limitations of a monostatic F-SAR. Basic modelling and best illumination conditions are detailed in the paper.

  12. Power cavitation-guided blood-brain barrier opening with focused ultrasound and microbubbles

    NASA Astrophysics Data System (ADS)

    Burgess, M. T.; Apostolakis, I.; Konofagou, E. E.

    2018-03-01

    Image-guided monitoring of microbubble-based focused ultrasound (FUS) therapies relies on the accurate localization of FUS-stimulated microbubble activity (i.e. acoustic cavitation). Passive cavitation imaging with ultrasound arrays can achieve this, but with insufficient spatial resolution. In this study, we address this limitation and perform high-resolution monitoring of acoustic cavitation-mediated blood-brain barrier (BBB) opening with a new technique called power cavitation imaging. By synchronizing the FUS transmit and passive receive acquisition, high-resolution passive cavitation imaging was achieved by using delay and sum beamforming with absolute time delays. Since the axial image resolution is now dependent on the duration of the received acoustic cavitation emission, short pulses of FUS were used to limit its duration. Image sets were acquired at high frame rates for calculation of power cavitation images analogous to power Doppler imaging. Power cavitation imaging displays the mean intensity of acoustic cavitation over time and was correlated with areas of acoustic cavitation-induced BBB opening. Power cavitation-guided BBB opening with FUS could constitute a standalone system that may not require MRI guidance during the procedure. The same technique can be used for other acoustic cavitation-based FUS therapies, for both safety and guidance.
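
    The delay-and-sum step described here relies on absolute (one-way) delays because the receive acquisition is synchronized to the FUS transmit. A minimal sketch of that beamformer, with an illustrative 64-element array and grid, not the authors' implementation:

        import numpy as np

        C, FS = 1540.0, 20e6                              # sound speed (m/s), sample rate (Hz)
        elems = np.stack([np.linspace(-0.01, 0.01, 64),   # 64-element linear array (m)
                          np.zeros(64)], axis=1)

        def das_image(rf, grid):
            """Beamform passive RF data (n_elem, n_samples) onto grid points (m),
            summing channels at the absolute one-way delay for each pixel."""
            img = np.zeros(len(grid))
            for i, p in enumerate(grid):
                d = np.linalg.norm(elems - p, axis=1)     # element-to-pixel path lengths
                idx = np.clip(np.round(d / C * FS).astype(int), 0, rf.shape[1] - 1)
                img[i] = np.sum(rf[np.arange(len(elems)), idx]) ** 2
            return img

        # A power cavitation image is then the mean beamformed intensity over many
        # frames, analogous to power Doppler:
        #   power = np.mean([das_image(f, grid) for f in frames], axis=0)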

  13. Power cavitation-guided blood-brain barrier opening with focused ultrasound and microbubbles.

    PubMed

    Burgess, M T; Apostolakis, I; Konofagou, E E

    2018-03-15

    Image-guided monitoring of microbubble-based focused ultrasound (FUS) therapies relies on the accurate localization of FUS-stimulated microbubble activity (i.e. acoustic cavitation). Passive cavitation imaging with ultrasound arrays can achieve this, but with insufficient spatial resolution. In this study, we address this limitation and perform high-resolution monitoring of acoustic cavitation-mediated blood-brain barrier (BBB) opening with a new technique called power cavitation imaging. By synchronizing the FUS transmit and passive receive acquisition, high-resolution passive cavitation imaging was achieved by using delay and sum beamforming with absolute time delays. Since the axial image resolution is now dependent on the duration of the received acoustic cavitation emission, short pulses of FUS were used to limit its duration. Image sets were acquired at high frame rates for calculation of power cavitation images analogous to power Doppler imaging. Power cavitation imaging displays the mean intensity of acoustic cavitation over time and was correlated with areas of acoustic cavitation-induced BBB opening. Power cavitation-guided BBB opening with FUS could constitute a standalone system that may not require MRI guidance during the procedure. The same technique can be used for other acoustic cavitation-based FUS therapies, for both safety and guidance.

  14. Virtual interface substructure synthesis method for normal mode analysis of super-large molecular complexes at atomic resolution.

    PubMed

    Chen, Xuehui; Sun, Yunxiang; An, Xiongbo; Ming, Dengming

    2011-10-14

    Normal mode analysis of large biomolecular complexes at atomic resolution remains challenging in computational structural biology due to the requirement of large amounts of memory and central processing unit time. In this paper, we present a method called the virtual interface substructure synthesis method, or VISSM, to calculate approximate normal modes of large biomolecular complexes at atomic resolution. VISSM introduces the subunit interfaces as independent substructures that join contacting molecules so as to keep the integrity of the system. Compared with other approximate methods, VISSM delivers atomic modes with no need of a coarse-graining-then-projection procedure. The method was examined for 54 protein complexes with the conventional all-atom normal mode analysis using the CHARMM simulation program, and the overlap of the first 100 low-frequency modes is greater than 0.7 for 49 complexes, indicating its accuracy and reliability. We then applied VISSM to the satellite panicum mosaic virus (SPMV, 78,300 atoms) and to F-actin filament structures of up to 39-mer, 228,813 atoms, and found that VISSM calculations capture functionally important conformational changes accessible to these structures at atomic resolution. Our results support the idea that the dynamics of a large biomolecular complex might be understood based on the motions of its component subunits and the way in which subunits bind one another. © 2011 American Institute of Physics.

  15. Presentation of a High Resolution Time Lapse 3D Groundwater Model of Metsähovi for Calculating the Gravity Effect of Groundwater in Local Scale

    NASA Astrophysics Data System (ADS)

    Hokkanen, T. M.; Hartikainen, A.; Raja-Halli, A.; Virtanen, H.; Makinen, J.

    2015-12-01

    INTRODUCTION The aim of this study is to construct a fine-resolution time lapse groundwater (GW) model of Metsähovi (MH). GW, geological, and soil moisture (SM) data were collected for several years to achieve this goal. Knowledge of the behavior of the GW at local scale is essential for the superconducting gravimeter (SG) investigations performed in MH. DESCRIPTION OF THE DATA Almost 50 sensors have recorded SM data for some 6 years at 1 to 5 minute sampling intervals. The GW table has been monitored, both in bedrock and in soil, in several stages with altogether 15 piezometers. Two geological sampling campaigns were conducted to characterize the hydrological properties of the soil in the 200×200 m2 study area around the SG station in MH. PRINCIPLE OF TIME LAPSE 3D HYDROGEOLOGICAL MODEL The model of the study site consists of the surfaces of the ground and the bedrock gridded at 2×2 m2 resolution. The height of the GW table was interpolated onto a 2×2×0.1 m3 grid between the GW and SM monitoring points. Close to the boundary of the study site and in areas lacking sensors, the GW table was defined by extrapolation and by considering the geological information of the area. The bedrock porosity is 2%, and the soil porosity, determined from geological information and SM recordings, ranges from 5 to 35%. Only fully saturated media are considered in the time lapse model; the unsaturated zone is excluded. BENEFITS With the new model, the fluctuation of the GW table can be followed with time lapses ranging from 1 minute to 1 month. The gravity effect caused by the variation of the GW table can be calculated more accurately than before in MH. Moreover, the new model can be validated and refined by measured gravity, i.e., the hydrological model can be improved by SG recordings (Figure 1).
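
    The gravity effect of a water-table change can be approximated by summing the Newtonian attraction of the saturated grid cells. A minimal sketch with made-up porosity and geometry; the model's actual grid, porosities, and extrapolation rules are those described above:

        import numpy as np

        G = 6.674e-11                                    # gravitational constant, m^3 kg^-1 s^-2
        RHO_W = 1000.0                                   # water density, kg/m^3

        def gravity_effect(dh, porosity, xg, yg, zg, cell=2.0):
            """Vertical gravity change (m/s^2) at a gravimeter at (0, 0, 0) from a
            water-table rise dh (m); xg, yg are cell-centre coordinates and zg the
            depth (m, positive down) of the saturated slab centres."""
            dm = RHO_W * porosity * dh * cell * cell     # added water mass per cell
            r = np.sqrt(xg**2 + yg**2 + zg**2)
            return np.sum(G * dm * zg / r**3)            # vertical component only

        # Example: uniform 0.1 m rise, 20% porosity, 200 x 200 m grid, 5 m deep.
        x = np.arange(-99, 100, 2.0)                     # 2 m cell centres
        xg, yg = np.meshgrid(x, x)
        dg = gravity_effect(0.1, 0.20, xg, yg, np.full_like(xg, 5.0))
        print(dg * 1e9, "nm/s^2")                        # ~8 nm/s^2, near the Bouguer slab value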

  16. Slowing down single-molecule trafficking through a protein nanopore reveals intermediates for peptide translocation

    NASA Astrophysics Data System (ADS)

    Mereuta, Loredana; Roy, Mahua; Asandei, Alina; Lee, Jong Kook; Park, Yoonkyung; Andricioaei, Ioan; Luchian, Tudor

    2014-01-01

    The microscopic details of how peptides translocate one at a time through nanopores are crucial determinants for transport through membrane pores and important in developing nano-technologies. To date, the translocation process has been too fast relative to the resolution of the single molecule techniques that sought to detect its milestones. Using pH-tuned single-molecule electrophysiology and molecular dynamics simulations, we demonstrate how peptide passage through the α-hemolysin protein can be sufficiently slowed down to observe intermediate single-peptide sub-states associated with distinct structural milestones along the pore, and how to control the residence time, direction and the sequence of spatio-temporal state-to-state dynamics of a single peptide. Molecular dynamics simulations of peptide translocation reveal the time-dependent ordering of intermediate structures of the translocating peptide inside the pore at atomic resolution. Calculations of the expected current ratios of the different pore-blocking microstates and their time sequencing are in accord with the recorded current traces.

  17. Neutron spectroscopy of magnesium dihydride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolesnikov, Alexander I; Antonov, Vladimir E.; Efimchenko, V. S.

    2011-01-01

    Inelastic neutron scattering spectra of α-MgH2 powder have been measured at T = 7 K with an energy resolution better than 1.5% using the time-of-flight direct-geometry spectrometer SEQUOIA. Based on these spectra, the density g(E) of phonon states in α-MgH2 has been experimentally constructed for the first time. Comparing the available experimental data on the heat capacity of α-MgH2 with values calculated using the obtained g(E) spectrum confirmed the good accuracy of its determination.
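
    The heat-capacity check mentioned here is the standard harmonic integral over the phonon density of states. A minimal sketch with a made-up two-peak g(E), not the measured SEQUOIA spectrum:

        import numpy as np

        KB = 8.617333e-5                                  # Boltzmann constant, eV/K

        def heat_capacity(E, g, T):
            """Harmonic lattice heat capacity (in units of kB per formula unit)
            from a phonon DOS g(E) sampled on a uniform energy grid E (eV)."""
            x = E / (KB * T)
            integrand = g * x**2 * np.exp(-x) / (1.0 - np.exp(-x))**2
            return np.sum(integrand) * (E[1] - E[0])      # rectangle-rule integral

        E = np.linspace(0.001, 0.25, 500)                 # phonon energies, eV
        g = np.exp(-((E - 0.04) / 0.01)**2) + np.exp(-((E - 0.15) / 0.02)**2)
        g *= 9.0 / (np.sum(g) * (E[1] - E[0]))            # normalize to 3N modes, N = 3
        for T in (7.0, 100.0, 300.0):
            print(T, heat_capacity(E, g, T))              # tends to 9 kB as T grows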

  18. The image acquisition system design of floor grinder

    NASA Astrophysics Data System (ADS)

    Wang, Yang-jiang; Liu, Wei; Liu, Hui-qin

    2018-01-01

    A real-time, high-resolution image acquisition system based on a linear CCD was designed for a floor grinder, with the optical imaging system dimensioned by calculation. The system captures images of the ground before and after the floor grinder has worked over it; the data are transmitted via Bluetooth to a computer and compared, realizing real-time monitoring of the machine's working condition. The system provides technical support for the design of unmanned floor grinders.

  19. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    Air pollution is one of the major concerns for human health. Associations between air pollution and health are often calculated using long-term (i.e. years to decades) information on personal exposure for each individual in a cohort. Personal exposure is the air pollution aggregated along the space-time path visited by an individual. As air pollution may vary considerably in space and time, for instance due to motorised traffic, estimating the spatio-temporal location of a person's space-time path is important to identify personal exposure. However, long-term exposure is mostly calculated using the air pollution concentration at the x, y location of someone's home, which does not consider that individuals are mobile (commuting, recreation, relocation). This assumption is often made, as it is a major challenge to estimate space-time paths for all individuals in large cohorts, mostly because limited information on the mobility of individuals is available. We address this issue by evaluating multiple approaches for the calculation of space-time paths, thereby estimating the personal exposure along these space-time paths with hyper-resolution air pollution maps at national scale. This allows us to evaluate the effect of the space-time path and resulting personal exposure. Air pollution (e.g. NO2, PM10) was mapped for the entire Netherlands at a resolution of 5×5 m2 using the land use regression models developed in the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://escapeproject.eu/) and the open source software PCRaster (http://www.pcraster.eu). The models use predictor variables like population density, land use, and traffic-related data sets, and are able to model spatial variation and within-city variability of annual average concentration values. We approximated space-time paths for all individuals in a cohort using various aggregations, including those representing space-time paths as the outline of a person's home or associated parcel of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure using these various approaches of space-time path aggregation, presumably because air pollution shows large variation over short distances.

  20. High Resolution Mass Spectra Analysis with a Programmable Calculator.

    ERIC Educational Resources Information Center

    Holdsworth, David K.

    1980-01-01

    Highlighted are characteristics of programs written for a pocket-sized programmable calculator to analyze mass spectra data (such as displaying high resolution masses for formulas, predicting whether formulas are stable molecules or molecular ions, determining formulas by isotopic abundance measurement) in a laboratory or classroom. (CS)

  1. Finite-difference simulation of transonic separated flow using a full potential boundary layer interaction approach

    NASA Technical Reports Server (NTRS)

    Van Dalsem, W. R.; Steger, J. L.

    1983-01-01

    A new, fast, direct-inverse, finite-difference boundary-layer code has been developed and coupled with a full-potential transonic airfoil analysis code via new inviscid-viscous interaction algorithms. The resulting code has been used to calculate transonic separated flows. The results are in good agreement with Navier-Stokes calculations and experimental data. Solutions are obtained in considerably less computer time than Navier-Stokes solutions of equal resolution. Because efficient inviscid and viscous algorithms are used, it is expected this code will also compare favorably with other codes of its type as they become available.

  2. Temporal resolution required for accurate evaluation of the interplay effect in spot scanning proton therapy

    NASA Astrophysics Data System (ADS)

    Seo, Jeongmin; Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Kim, Chan Hyeong; Jeong, Jong Hwi; Kim, SeongHoon

    2017-04-01

    In proton therapy, the spot scanning method is known to suffer from the interplay effect induced from the independent movements of the proton beam and the organs in the patient during the treatment. To study the interplay effect, several investigators have performed four-dimensional (4D) dose calculations with some limited temporal resolutions (4 or 10 phases per respiratory cycle) by using the 4D computed tomography (CT) images of the patient; however, the validity of the limited temporal resolutions has not been confirmed. The aim of the present study is to determine whether the previous temporal resolutions (4 or 10 phases per respiratory cycle) are really high enough for adequate study of the interplay effect in spot scanning proton therapy. For this study, a series of 4D dose calculations were performed with a virtual water phantom moving in the vertical direction during dose delivery. The dose distributions were calculated for different temporal resolutions (4, 10, 25, 50, and 100 phases per respiratory cycle), and the calculated dose distributions were compared with the reference dose distribution, which was calculated using an almost continuously-moving water phantom (i.e., 1000 phases per respiratory cycle). The results of the present study show that the temporal resolutions of 4 and 10 phases per respiratory cycle are not high enough for an accurate evaluation of the interplay effect for spot scanning proton therapy. The temporal resolution should be at least 14 and 17 phases per respiratory cycle for 10-mm and 20-mm movement amplitudes, respectively, even for rigid movement (i.e., without deformation) of the homogeneous water phantom considered in the present study. We believe that even higher temporal resolutions are needed for an accurate evaluation of the interplay effect in the human body, in which the organs are inhomogeneous and deform during movement.

  3. A dynamic aerodynamic resistance approach to calculate high resolution sensible heat fluxes in urban areas

    NASA Astrophysics Data System (ADS)

    Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William

    2017-04-01

    Remotely sensed data from satellites have potential to enable high-resolution, automated calculation of urban surface energy balance terms and inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult in part due to spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties with our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
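
    For reference, the aerodynamic resistance method at the core of this approach computes QH as in the minimal sketch below. The constant r_ah is a placeholder; in the study, r_ah comes from the empirical function of vegetation cover and meteorology described above:

        RHO = 1.2          # air density, kg/m^3
        CP = 1005.0        # specific heat of air, J/(kg K)

        def sensible_heat_flux(t_surface_k, t_air_k, r_ah):
            """Sensible heat flux QH (W/m^2) from satellite-observed surface
            temperature, air temperature, and aerodynamic resistance r_ah (s/m)."""
            return RHO * CP * (t_surface_k - t_air_k) / r_ah

        # Example: 305 K roof surface, 298 K air, r_ah = 60 s/m (assumed).
        print(sensible_heat_flux(305.0, 298.0, 60.0))   # ~141 W/m^2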

  4. Power spectral estimation of high-harmonics in echoes of wall resonances to improve resolution in non-invasive measurements of wall mechanical properties in rubber tube and ex-vivo artery.

    PubMed

    Bazan, I; Ramos, A; Balay, G; Negreira, C

    2018-07-01

    The aim of this work is to develop a new type of ultrasonic analysis of the mechanical properties of an arterial wall with improved resolution, and to confirm its feasibility under laboratory conditions. It is expected that this will facilitate a non-invasive path to accurate predictive diagnosis that enables early detection and therapy of vascular pathologies. In particular, the objective is to detect and quantify the small elasticity changes (in Young's modulus E) of arterial walls that precede pathology. A submicron axial resolution is required for this analysis, as the periodic widening of the wall (under oscillatory arterial pressure) varies between ±10 and 20 μm. This high resolution represents less than 1% of the parietal thickness (e.g., < 7 μm in carotid arteries). The novelty of our proposal is the new technique used to estimate the modulus E of the arterial walls, which achieves the requisite resolution. It calculates the power spectral evolution associated with the temporal dynamics in higher harmonics of the wall internal resonance f0. This was attained via the implementation of an autoregressive parametric algorithm that accurately detects parietal echo-dynamics during a heartbeat. Thus, it was possible to measure the punctual elasticity of the wall with a resolution higher, by more than an order of magnitude, than conventional approaches. The resolution of a typical ultrasonic image is limited to several hundred microns, and thus such small changes go undetected. The proposed procedure provides a non-invasive and direct measure of elasticity by estimating changes in the Nf0 harmonics and wall thickness with a resolution of 0.1%, for the first time. The results obtained using the classic temporal cross-correlation method (TCC) were compared to those obtained with the new procedure. The latter allowed the evaluation of alterations in the elastic properties of arterial walls that are 30 times smaller than those detectable with TCC; in fact, the depth resolution of the TCC approach is limited to ≈20 μm for typical SNRs. These values were calculated based on echoes obtained using a reference pattern (rubber tube). The applicability of the proposed procedure was also confirmed via "ex-vivo" measurements in pig carotid segments. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Magnetic Fields in Population III Star Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turk, Matthew J.; Oishi, Jeffrey S.; Abel, Tom

    2012-02-22

    We study the buildup of magnetic fields during the formation of Population III star-forming regions, by conducting cosmological simulations from realistic initial conditions and varying the Jeans resolution. To investigate this in detail, we start simulations from identical initial conditions, mandating 16, 32 and 64 zones per Jeans length, and studied the variation in their magnetic field amplification. We find that, while compression results in some amplification, turbulent velocity fluctuations driven by the collapse can further amplify an initially weak seed field via dynamo action, provided there is sufficient numerical resolution to capture vortical motions (we find this requirement to be 64 zones per Jeans length, slightly larger than, but consistent with, previous work run with more idealized collapse scenarios). We explore saturation of amplification of the magnetic field, which could potentially become dynamically important in subsequent, fully-resolved calculations. We have also identified a relatively surprising phenomenon that is purely hydrodynamic: the higher-resolved simulations possess substantially different characteristics, including higher infall velocity, increased temperatures inside 1000 AU, and decreased molecular hydrogen content in the innermost region. Furthermore, we find that disk formation is suppressed in higher-resolution calculations, at least at the times that we can follow the calculation. We discuss the effect this may have on the buildup of disks over the accretion history of the first clump to form, as well as the potential for gravitational instabilities to develop and induce fragmentation.

  6. Estimating structure quality trends in the Protein Data Bank by equivalent resolution.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Güntert, Peter

    2013-10-01

    The quality of protein structures obtained by different experimental and ab-initio calculation methods varies considerably. The methods have been evolving over time by improving both experimental designs and computational techniques, and since the primary aim of these developments is the procurement of reliable and high-quality data, better techniques resulted on average in an evolution toward higher quality structures in the Protein Data Bank (PDB). Each method leaves a specific quantitative and qualitative "trace" in the PDB entry. Certain information relevant to one method (e.g. dynamics for NMR) may be lacking for another method. Furthermore, some standard measures of quality for one method cannot be calculated for other experimental methods, e.g. crystal resolution or NMR bundle RMSD. Consequently, structures are classified in the PDB by the method used. Here we introduce a method to estimate a measure of equivalent X-ray resolution (e-resolution), expressed in units of Å, to assess the quality of any type of monomeric, single-chain protein structure, irrespective of the experimental structure determination method. We showed and compared the trends in the quality of structures in the Protein Data Bank over the last two decades for five different experimental techniques, excluding theoretical structure predictions. We observed that as new methods are introduced, they undergo a rapid method development evolution: within several years the e-resolution score becomes similar for structures obtained from the five methods and they improve from initially poor performance to acceptable quality, comparable with previously established methods, the performance of which is essentially stable. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Multibeam Laser Altimeter for Planetary Topographic Mapping

    NASA Technical Reports Server (NTRS)

    Garvin, J. B.; Bufton, J. L.; Harding, D. J.

    1993-01-01

    Laser altimetry provides an active, high-resolution, high-accuracy method for measurement of planetary and asteroid surface topography. The basis of the measurement is the timing of the roundtrip propagation of short-duration pulses of laser radiation between a spacecraft and the surface. Vertical, or elevation, resolution of the altimetry measurement is determined primarily by laser pulse width, surface-induced spreading in time of the reflected pulse, and the timing precision of the altimeter electronics. With conventional gain-switched pulses from solid-state lasers and nanosecond resolution timing electronics, submeter vertical range resolution is possible anywhere from orbital altitudes of approximately 1 km to altitudes of several hundred kilometers. Horizontal resolution is a function of laser beam footprint size at the surface and the spacing between successive laser pulses. Laser divergence angle and altimeter platform height above the surface determine the laser footprint size at the surface, while laser pulse repetition rate, laser transmitter beam configuration, and altimeter platform velocity determine the spacing between successive laser pulses. Multiple laser transmitters in a single laser altimeter instrument that is orbiting above a planetary or asteroid surface could provide across-track as well as along-track coverage that can be used to construct a range image (i.e., topographic map) of the surface. We are developing a pushbroom laser altimeter instrument concept that utilizes a linear array of laser transmitters to provide contiguous across-track and along-track data. The laser technology is based on the emerging monolithic combination of individual, 1-sq cm diode-pumped Nd:YAG laser pulse emitters. Details of the multi-emitter laser transmitter technology, the instrument configuration, and performance calculations for a realistic Discovery-class mission will be presented.
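
    The resolution arithmetic in this paragraph is simple enough to state directly. The sketch below, with illustrative numbers, shows the two governing relations (round-trip timing to range, divergence to footprint):

        C = 299_792_458.0                          # speed of light, m/s

        def range_resolution(timing_sigma_s):
            """One-way range resolution (m) for a round-trip timing spread (s)."""
            return 0.5 * C * timing_sigma_s

        def footprint_diameter(divergence_rad, altitude_m):
            """Laser footprint diameter (m) for a full-angle beam divergence."""
            return divergence_rad * altitude_m

        # Example: a ~1 ns effective timing spread gives sub-meter vertical
        # resolution; 0.1 mrad divergence from 300 km altitude gives a 30 m spot.
        print(range_resolution(1e-9))              # ~0.15 m
        print(footprint_diameter(1e-4, 300e3))     # 30.0 m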

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Animesh; Wang, Han, E-mail: han.wang@fu-berlin.de; Site, Luigi Delle, E-mail: dellesite@fu-berlin.de

    We employ the adaptive resolution approach AdResS, in its recently developed Grand Canonical-like version (GC-AdResS) [H. Wang, C. Hartmann, C. Schütte, and L. Delle Site, Phys. Rev. X 3, 011018 (2013)], to calculate the excess chemical potential, μ^ex, of various liquids and mixtures. We compare our results with those obtained from full atomistic simulations using the technique of thermodynamic integration and show a satisfactory agreement. In GC-AdResS, the procedure to calculate μ^ex corresponds to the process of standard initial equilibration of the system; this implies that, independently of the specific aim of the study, μ^ex, for each molecular species, is automatically calculated every time a GC-AdResS simulation is performed.

  9. Measuring the speed resolution of extensive air showers at the Southern Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Gesterling, Kathleen; Sarazin, Fred

    2009-10-01

    Ultra-high energy cosmic rays induce extensive air showers (EASs) in Earth's atmosphere which are assumed to propagate at the speed of light. The fluorescence detector (FD) at the Southern Pierre Auger Observatory detects the light signal from the EAS and directly measures the energy of the cosmic ray. When two or more FD sites observe an event, the geometry of the shower can be calculated independently of the velocity at which it is traveling. It is then possible to fit the time profile recorded in the FD using the shower speed as a free parameter. The analysis of a collection of stereo events allowed us to determine the speed resolution with which we can measure EASs, with sensitivity to subluminal components. Knowing the speed resolution, we can look for objects propagating significantly below the speed of light.

  10. Remote sensing in support of high-resolution terrestrial carbon monitoring and modeling

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.; Zhao, M.; Dubayah, R.; Huang, C.; Swatantran, A.; ONeil-Dunne, J.; Johnson, K. D.; Birdsey, R.; Fisk, J.; Flanagan, S.; Sahajpal, R.; Huang, W.; Tang, H.; Armstrong, A. H.

    2014-12-01

    As part of its Phase 1 Carbon Monitoring System (CMS) activities, NASA initiated a Local-Scale Biomass Pilot study. The goals of the pilot study were to develop protocols for fusing high-resolution remotely sensed observations with field data, provide accurate validation test areas for the continental-scale biomass product, and demonstrate efficacy for prognostic terrestrial ecosystem modeling. In Phase 2, this effort was expanded to the state scale. Here, we present results of this activity focusing on the use of remote sensing in high-resolution ecosystem modeling. The Ecosystem Demography (ED) model was implemented at 90 m spatial resolution for the entire state of Maryland. We rasterized soil depth and soil texture data from SSURGO. For hourly meteorological data, we spatially interpolated 32-km 3-hourly NARR into 1-km hourly and further corrected them at monthly level using PRISM data. NLCD data were used to mask sand, seashore, and wetland. High-resolution 1 m forest/non-forest mapping was used to define forest fraction of 90 m cells. Three alternative strategies were evaluated for initialization of forest structure using high-resolution lidar, and the model was used to calculate statewide estimates of forest biomass, carbon sequestration potential, time to reach sequestration potential, and sensitivity to future forest growth and disturbance rates, all at 90 m resolution. To our knowledge, no dynamic ecosystem model has been run at such high spatial resolution over such large areas utilizing remote sensing and validated as extensively. There are over 3 million 90 m land cells in Maryland, greater than 43 times the ~73,000 half-degree cells in a state-of-the-art global land model.

  11. Low-light-level image super-resolution reconstruction based on iterative projection photon localization algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Changsheng; Zhao, Peng; Li, Ye

    2018-01-01

    The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by an ICCD suffer from low spatial resolution and contrast, and target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. Dispersion in the double-proximity-focused image intensifier is the main factor that reduces image resolution and contrast. We divide the integration time into subintervals that are short enough to obtain photon images, so that the overlapping and overstacking effects of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under regularity constraints. Accurate position information of the incident photons in the reconstructed SR image is obtained by weighted centroid calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.

  12. Estimate of methane emissions from oil and gas operations in the Uintah Basin using airborne measurements and Lidar wind data

    NASA Astrophysics Data System (ADS)

    Karion, A.; Sweeney, C.; Petron, G.; Frost, G. J.; Trainer, M.; Brewer, A.; Hardesty, R.; Conley, S. A.; Wolter, S.; Newberger, T.; Kofler, J.; Tans, P. P.

    2012-12-01

    During a February 2012 campaign in the Uintah oil and gas basin in northeastern Utah, thirteen research flights were conducted in conjunction with a variety of ground-based measurements. Using aircraft-based high-resolution (0.5 Hz) observations of methane (CH4) and carbon dioxide (CO2), along with High-Resolution Doppler Lidar wind observations from a ground site in the basin, we have calculated the basin-wide CH4 flux on several days. Uncertainty estimates are calculated for each day and are generally large for all but one flight day. On one day, February 3, uncertainty on the estimate from a mass balance approach is better than 30% due to ideal meteorological conditions, including a well-mixed boundary layer and low wind variability both in time and altitude, as determined from the Lidar wind observations. This aircraft-based mass balance approach to flux estimates is a critical and valuable tool for estimating CH4 emissions from oil and gas basins.
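
    The mass-balance calculation behind such flux estimates integrates the wind-driven flow of the methane enhancement through a downwind plane. A minimal sketch with illustrative numbers, not the campaign's data:

        import numpy as np

        def mass_balance_flux(dx_m, z_m, enh_ppb, wind_ms, p_pa=85000.0, t_k=270.0):
            """Emission rate (kg CH4/s) from enhancements above background (ppb)
            on a crosswind transect with segment width dx_m (m) and mixed-layer
            depth z_m (m), assuming a well-mixed boundary layer."""
            n_air = p_pa / (1.380649e-23 * t_k)          # air molecules per m^3
            kg_per_molecule = 16.04e-3 / 6.02214076e23   # CH4 molar mass / Avogadro
            conc = np.asarray(enh_ppb) * 1e-9 * n_air * kg_per_molecule  # kg/m^3
            return np.sum(conc * wind_ms * dx_m * z_m)

        # Example: 2 km wide plume sampled every 200 m, 50 ppb mean enhancement,
        # 4 m/s wind, 800 m mixed layer.
        print(mass_balance_flux(200.0, 800.0, [50.0] * 10, 4.0))  # ~0.19 kg/s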

  13. The effect of flow data resolution on sediment yield estimation and channel design

    NASA Astrophysics Data System (ADS)

    Rosburg, Tyler T.; Nelson, Peter A.; Sholtes, Joel S.; Bledsoe, Brian P.

    2016-07-01

    The decision to use either daily-averaged or sub-daily streamflow records has the potential to impact the calculation of sediment transport metrics and stream channel design. Using bedload and suspended load sediment transport measurements collected at 138 sites across the United States, we calculated the effective discharge, sediment yield, and half-load discharge using sediment rating curves over long time periods (median record length = 24 years) with both daily-averaged and sub-daily streamflow records. A comparison of sediment transport metrics calculated with both daily-averaged and sub-daily stream flow data at each site showed that daily-averaged flow data do not adequately represent the magnitude of high stream flows at hydrologically flashy sites. Daily-averaged stream flow data cause an underestimation of sediment transport and sediment yield (including the half-load discharge) at flashy sites. The degree of underestimation was correlated with the level of flashiness and the exponent of the sediment rating curve. No consistent relationship between the use of either daily-averaged or sub-daily streamflow data and the resultant effective discharge was found. When used in channel design, computed sediment transport metrics may have errors due to flow data resolution, which can propagate into design slope calculations that, if implemented, could lead to unwanted aggradation or degradation in the design channel. This analysis illustrates the importance of using sub-daily flow data in the calculation of sediment yield in urbanizing or otherwise flashy watersheds. Furthermore, it provides practical charts for estimating and correcting these types of underestimation errors commonly incurred in sediment yield calculations.
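
    The underestimation mechanism is easy to demonstrate: with a power-law rating curve Qs = a * Q**b and b > 1, averaging a flashy hydrograph before applying the curve lowers the computed yield (Jensen's inequality). A synthetic sketch with made-up coefficients:

        import numpy as np

        a, b = 0.01, 2.0                                  # rating-curve coefficients (assumed)
        hourly_q = np.tile(np.array([2.0] * 20 + [30.0] * 4), 365)  # flashy hourly flows
        daily_q = hourly_q.reshape(-1, 24).mean(axis=1)   # daily-averaged flows

        yield_subdaily = np.sum(a * hourly_q**b) / 24     # convert hourly sums to per-day units
        yield_daily = np.sum(a * daily_q**b)
        print(yield_subdaily, yield_daily)                # daily-averaged yield is several-fold lower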

  14. The Year Leading to a Supereruption.

    PubMed

    Gualda, Guilherme A R; Sutton, Stephen R

    2016-01-01

    Supereruptions catastrophically eject 100s-1000s of km3 of magma to the surface in a matter of days to a few months. In this study, we use zoning in quartz crystals from the Bishop Tuff (California) to assess the timescales over which a giant magma body transitions from relatively quiescent, pre-eruptive crystallization to rapid decompression and eruption. Quartz crystals in the Bishop Tuff have distinctive rims (<200 μm thick), which are Ti-rich and bright in cathodoluminescence (CL) images, and which can be used to calculate Ti diffusional relaxation times. We use synchrotron-based x-ray microfluorescence to obtain quantitative Ti maps and profiles along rim-interior contacts in quartz at resolutions of 1-5 μm in each linear dimension. We perform CL imaging on a scanning electron microscope (SEM) using a low-energy (5 kV) incident beam to characterize these contacts in high resolution (<1 μm in linear dimensions). Quartz growth times were determined using a 1D model for Ti diffusion, assuming initial step functions. Minimum quartz growth rates were calculated using these calculated growth times and measured rim thicknesses. Maximum rim growth times span from ~1 min to 35 years, with a median of ~4 days. More than 70% of rim growth times are less than 1 year, showing that quartz rims have mostly grown in the days to months prior to eruption. Minimum growth rates show distinct modes between 10-8 and 10-10 m/s (depending on sample), revealing very fast crystal growth rates (100s of nm to 10s of μm per day). Our data show that quartz rims grew well within a year of eruption, with most of the growth happening in the weeks or days preceding eruption. Growth took place under conditions of high supersaturation, suggesting that rim growth marks the onset of decompression and the transition from pre-eruptive to syn-eruptive conditions.
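
    The rim ages come from diffusion chronometry: an initially sharp Ti step relaxes over a time of order t ~ w^2 / (4 D), where w is the observed profile width and D the Arrhenius diffusivity. A minimal sketch; the d0 and ea values are placeholders with the right functional form, not the calibration used in the study, and the result is strongly sensitive to them and to temperature:

        import numpy as np

        R = 8.314                                        # gas constant, J/(mol K)

        def ti_diffusivity(t_k, d0=1e-8, ea=273e3):
            """Arrhenius Ti-in-quartz diffusivity (m^2/s); d0 and ea are assumed."""
            return d0 * np.exp(-ea / (R * t_k))

        def relaxation_time(profile_width_m, t_k):
            """Time (s) for an initial step to smear to the given profile width."""
            return profile_width_m**2 / (4.0 * ti_diffusivity(t_k))

        # Example: a 1 micrometre Ti gradient at ~750 C (1023 K).
        t_s = relaxation_time(1e-6, 1023.0)
        print(t_s / 3.15e7, "years")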

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gualda, Guilherme A. R.; Sutton, Stephen R.

    Supereruptions catastrophically eject 100s-1000s of km3 of magma to the surface in a matter of days to a few months. In this study, we use zoning in quartz crystals from the Bishop Tuff (California) to assess the timescales over which a giant magma body transitions from relatively quiescent, pre-eruptive crystallization to rapid decompression and eruption. Quartz crystals in the Bishop Tuff have distinctive rims (<200 μm thick), which are Ti-rich and bright in cathodoluminescence (CL) images, and which can be used to calculate Ti diffusional relaxation times. We use synchrotron-based x-ray microfluorescence to obtain quantitative Ti maps and profiles along rim-interior contacts in quartz at resolutions of 1-5 μm in each linear dimension. We perform CL imaging on a scanning electron microscope (SEM) using a low-energy (5 kV) incident beam to characterize these contacts in high resolution (<1 μm in linear dimensions). Quartz growth times were determined using a 1D model for Ti diffusion, assuming initial step functions. Minimum quartz growth rates were calculated using these calculated growth times and measured rim thicknesses. Maximum rim growth times span from ~1 min to 35 years, with a median of ~4 days. More than 70% of rim growth times are less than 1 year, showing that quartz rims have mostly grown in the days to months prior to eruption. Minimum growth rates show distinct modes between 10-8 and 10-10 m/s (depending on sample), revealing very fast crystal growth rates (100s of nm to 10s of μm per day). Our data show that quartz rims grew well within a year of eruption, with most of the growth happening in the weeks or days preceding eruption. Growth took place under conditions of high supersaturation, suggesting that rim growth marks the onset of decompression and the transition from pre-eruptive to syn-eruptive conditions.

  16. The Year Leading to a Supereruption

    DOE PAGES

    Gualda, Guilherme A. R.; Sutton, Stephen R.

    2016-07-20

    Supereruptions catastrophically eject 100s-1000s of km3 of magma to the surface in a matter of days to a few months. In this study, we use zoning in quartz crystals from the Bishop Tuff (California) to assess the timescales over which a giant magma body transitions from relatively quiescent, pre-eruptive crystallization to rapid decompression and eruption. Quartz crystals in the Bishop Tuff have distinctive rims (<200 μm thick), which are Ti-rich and bright in cathodoluminescence (CL) images, and which can be used to calculate Ti diffusional relaxation times. We use synchrotron-based x-ray microfluorescence to obtain quantitative Ti maps and profiles along rim-interior contacts in quartz at resolutions of 1-5 μm in each linear dimension. We perform CL imaging on a scanning electron microscope (SEM) using a low-energy (5 kV) incident beam to characterize these contacts in high resolution (<1 μm in linear dimensions). Quartz growth times were determined using a 1D model for Ti diffusion, assuming initial step functions. Minimum quartz growth rates were calculated using these calculated growth times and measured rim thicknesses. Maximum rim growth times span from ~1 min to 35 years, with a median of ~4 days. More than 70% of rim growth times are less than 1 year, showing that quartz rims have mostly grown in the days to months prior to eruption. Minimum growth rates show distinct modes between 10-8 and 10-10 m/s (depending on sample), revealing very fast crystal growth rates (100s of nm to 10s of μm per day). Our data show that quartz rims grew well within a year of eruption, with most of the growth happening in the weeks or days preceding eruption. Growth took place under conditions of high supersaturation, suggesting that rim growth marks the onset of decompression and the transition from pre-eruptive to syn-eruptive conditions.

  17. Damage extraction of buildings in the 2015 Gorkha, Nepal earthquake from high-resolution SAR data

    NASA Astrophysics Data System (ADS)

    Yamazaki, Fumio; Bahri, Rendy; Liu, Wen; Sasagawa, Tadashi

    2016-05-01

    Satellite remote sensing is recognized as one of the effective tools for detecting and monitoring areas affected by natural disasters. Since SAR sensors can capture images not only in daytime but also at nighttime and under cloud cover, they are especially useful during the emergency response period. In this study, multi-temporal high-resolution TerraSAR-X images were used for damage inspection of the Kathmandu area, which was severely affected by the April 25, 2015 Gorkha earthquake. The SAR images obtained before and after the earthquake were used to calculate the difference and correlation coefficient of backscatter. The affected areas were identified by high absolute difference values and low correlation coefficient values. Post-event high-resolution optical satellite images were employed as ground truth data to verify our results. Although it was difficult to estimate damage levels for individual buildings, the high-resolution SAR images demonstrated their capability to detect collapsed buildings during emergency response.
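
    The two change measures named here are straightforward to compute on co-registered pre- and post-event backscatter images. A minimal sketch with an illustrative 5 x 5 window and thresholds, not the paper's processing parameters:

        import numpy as np

        def change_maps(pre_db, post_db, w=5):
            """Return (absolute difference, local correlation coefficient) maps
            over w x w neighbourhoods of two co-registered SAR images (dB)."""
            h = w // 2
            diff = np.abs(post_db - pre_db)
            corr = np.zeros_like(pre_db)
            for i in range(h, pre_db.shape[0] - h):
                for j in range(h, pre_db.shape[1] - h):
                    a = pre_db[i-h:i+h+1, j-h:j+h+1].ravel()
                    b = post_db[i-h:i+h+1, j-h:j+h+1].ravel()
                    corr[i, j] = np.corrcoef(a, b)[0, 1]  # nan for constant windows
            return diff, corr

        # Collapsed structures are then flagged where the difference is high and
        # the correlation low; the thresholds would need tuning:
        #   damaged = (diff > 4.0) & (corr < 0.3)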

  18. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    PubMed

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
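
    The twin-grid principle can be sketched in a few lines: two periodic patterns with slightly different periods give two phases; their difference beats at a much longer period and removes the integer-period ambiguity, while the finer phase supplies the high resolution. A minimal sketch with illustrative periods, not the authors' algorithm or calibration:

        import numpy as np

        P1, P2 = 10.0, 11.0                      # twin grid periods, pixels
        BEAT = P1 * P2 / (P2 - P1)               # unambiguous range: 110 pixels

        def grid_phase(line, period):
            """Phase (rad) of the 1/period spatial-frequency component of a line."""
            k = np.exp(2j * np.pi * np.arange(len(line)) / period)
            return np.angle(np.sum(line * k))

        def displacement(line):
            phi1, phi2 = grid_phase(line, P1), grid_phase(line, P2)
            fine = phi1 / (2 * np.pi) * P1                      # precise but ambiguous
            coarse = ((phi1 - phi2) % (2 * np.pi)) / (2 * np.pi) * BEAT
            m = np.round((coarse - fine) / P1)                  # integer period count
            return m * P1 + fine

        x = np.arange(780)                       # one 780-pixel line, as in the paper
        def pattern(shift):                      # the two summed periodic grids
            return (np.cos(2 * np.pi * (x - shift) / P1)
                    + np.cos(2 * np.pi * (x - shift) / P2))

        print(displacement(pattern(23.37)))      # ~23.37 pixels, well below one period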

  19. Community Sediment Transport Model

    DTIC Science & Technology

    2007-01-01

    ...intended to be used as both a research tool and for practical applications. An accurate and useful model will require coupling sediment-transport with... and time steps range from seconds to minutes. We include higher-resolution sediment-transport calculation modules for research problems but, for...

  20. Development of in situ time-resolved Raman spectroscopy facility for dynamic shock loading in materials

    NASA Astrophysics Data System (ADS)

    Chaurasia, S.; Rastogi, V.; Rao, U.; Sijoy, C. D.; Mishra, V.; Deo, M. N.

    2017-11-01

    The transient state of excitation and relaxation processes in materials under shock compression can be investigated by coupling a laser-driven shock facility with Raman spectroscopy. For this purpose, a time-resolved Raman spectroscopy setup has been developed to monitor physical and chemical changes such as phase transitions, chemical reactions and molecular kinetics under shock compression with nanosecond time resolution. The system consists mainly of three parts: a 2 J/8 ns Nd:YAG laser system used for generation of the pump and probe beams, a Raman spectrometer with temporal and spectral resolutions of 1.2 ns and 3 cm⁻¹ respectively, and a target holder in a confinement-geometry assembly. Detailed simulations for the optimization of confinement-geometry targets were performed. Time-resolved measurements of polytetrafluoroethylene (PTFE) targets at a focused laser intensity of 2.2 GW/cm² were made. The corresponding pressures in the aluminum and the PTFE are 3.6 and 1.7 GPa respectively. At 1.7 GPa in PTFE, a red shift of 5 cm⁻¹ is observed for the CF2 twisting mode (291 cm⁻¹). The shock velocity in PTFE is calculated by measuring the rate of change of the ratio of the Raman intensity scattered from the shocked volume to that from the total sample volume in the laser focal spot along the laser axis. The calculated shock velocity in PTFE is found to be 1.64 ± 0.16 km/s at a shock pressure of 1.7 GPa for the present experimental conditions.

  1. A fully parallel in time and space algorithm for simulating the electrical activity of a neural tissue.

    PubMed

    Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge

    2016-01-15

    The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing electrical propagation in neuronal tissue using the parareal algorithm coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the resolution method to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between the sequential results and those obtained using the GPU was achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, this is the first time such a method has been used in neuroscience. Time parallelization coupled with GPU-based spatial parallelization drastically reduces computational time while retaining a fine resolution of the model describing the propagation of the electrical signal in neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
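
    The parareal iteration at the heart of this approach is compact enough to sketch: a cheap coarse propagator sweeps the whole interval sequentially, while accurate fine solves over each time slice (the part that runs concurrently, on a GPU in the paper) correct it iteratively. A minimal sketch for a scalar ODE with explicit-Euler propagators; the solvers, slice counts and test problem are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def euler(f, y, ta, tb, steps):
    """Explicit Euler propagator for y' = f(t, y) from ta to tb."""
    h = (tb - ta) / steps
    for s in range(steps):
        y = y + h * f(ta + s * h, y)
    return y

def parareal(f, y0, t0, t1, slices, n_coarse, n_fine, iters):
    ts = np.linspace(t0, t1, slices + 1)
    U = [y0]                                  # initial sequential coarse sweep
    for k in range(slices):
        U.append(euler(f, U[k], ts[k], ts[k+1], n_coarse))
    for _ in range(iters):
        # Fine solves over each slice: independent, hence parallelizable
        fine = [euler(f, U[k], ts[k], ts[k+1], n_fine) for k in range(slices)]
        coarse = [euler(f, U[k], ts[k], ts[k+1], n_coarse) for k in range(slices)]
        V = [y0]
        for k in range(slices):               # sequential correction sweep
            c_new = euler(f, V[k], ts[k], ts[k+1], n_coarse)
            V.append(c_new + fine[k] - coarse[k])
        U = V
    return ts, np.array(U)

# Example: exponential decay y' = -y on [0, 5]
ts, U = parareal(lambda t, y: -y, 1.0, 0.0, 5.0,
                 slices=10, n_coarse=1, n_fine=100, iters=3)
```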

  2. microPMT-A New Photodetector for Gamma Spectrometry and Fast Timing?

    NASA Astrophysics Data System (ADS)

    Szczęśniak, T.; Grodzicka, M.; Moszyński, M.; Szawłowski, M.; Baszak, J.

    2014-10-01

    A micro photomultiplier (microPMT or μPMT) works like a classic photomultiplier, but the whole device is made directly in a silicon wafer sandwiched between two glass layers. A microPMT has dimensions of only 13x10x2 mm and its photocathode has a size of 3x1 mm. The aim of this work is to assess the usefulness of a microPMT in gamma spectrometry with scintillators and in fast timing. In the first part of the study, the energy resolution obtained with 3x3x1 mm LSO, BGO and CsI(Tl) scintillators is analyzed. The recorded values for 662 keV are equal to 22.9% and 13.5% for CsI and LSO, respectively. The light pulse shapes of a single photoelectron and of the LSO scintillation signal are also shown. An important part of the study is the measurement of the number of photoelectrons and the estimation of the excess noise factor. Only 2200 phe/MeV were obtained for LSO coupled to the tested microPMT. The calculated excess noise factor is equal to 1.4. In the second part, measurements of the time jitter and of the timing resolution with an LSO crystal for 511 keV annihilation quanta are reported. The timing characteristics of the tested device are poor: its time jitter equals 1.5 ns, and the timing resolution for 22Na is 620 ps. All the results are compared with data obtained with classic PMTs.

  3. Development of a high resolution x-ray spectrometer for the National Ignition Facility (NIF)

    DOE PAGES

    Hill, K. W.; Bitter, M.; Delgado-Aparicio, L.; ...

    2016-09-28

    A high resolution (E/ΔE = 1200-1800) Bragg crystal x-ray spectrometer is being developed to measure plasma parameters in National Ignition Facility experiments. The instrument will be a diagnostic instrument manipulator positioned cassette designed mainly to infer electron density in compressed capsules from Stark broadening of the helium-β (1s²-1s3p) lines of krypton and electron temperature from the relative intensities of dielectronic satellites. Two conically shaped crystals will diffract and focus (1) the Kr Heβ complex and (2) the Heα (1s²-1s2p) and Lyα (1s-2p) complexes onto a streak camera photocathode for time resolved measurement, and a third cylindrical or conical crystal will focus the full Heα to Heβ spectral range onto an image plate to provide a time integrated calibration spectrum. Calculations of source x-ray intensity, spectrometer throughput, and spectral resolution are presented. Furthermore, details of the conical-crystal focusing properties as well as the status of the instrumental design are also presented.

  4. Evidence for anisotropic dielectric properties of monoclinic hafnia using valence electron energy-loss spectroscopy in high-resolution transmission electron microscopy and ab initio time-dependent density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guedj, C.; CEA, LETI, MINATEC Campus, F-38054 Grenoble; Hung, L.

    2014-12-01

    The effect of nanocrystal orientation on the energy loss spectra of monoclinic hafnia (m-HfO₂) is measured by high resolution transmission electron microscopy (HRTEM) and valence energy loss spectroscopy (VEELS) on high quality samples. For the same momentum-transfer directions, the dielectric properties are also calculated ab initio by time-dependent density-functional theory (TDDFT). Experiments and simulations evidence anisotropy in the dielectric properties of m-HfO₂, most notably with the direction-dependent oscillator strength of the main bulk plasmon. The anisotropic nature of m-HfO₂ may contribute to the differences among VEELS spectra reported in literature. The good agreement between the complex dielectric permittivity extracted from VEELS with nanometer spatial resolution, TDDFT modeling, and past literature demonstrates that the present HRTEM-VEELS device-oriented methodology is a possible solution to the difficult nanocharacterization challenges given in the International Technology Roadmap for Semiconductors.

  5. Development of a high resolution x-ray spectrometer for the National Ignition Facility (NIF).

    PubMed

    Hill, K W; Bitter, M; Delgado-Aparicio, L; Efthimion, P C; Ellis, R; Gao, L; Maddox, J; Pablant, N A; Schneider, M B; Chen, H; Ayers, S; Kauffman, R L; MacPhee, A G; Beiersdorfer, P; Bettencourt, R; Ma, T; Nora, R C; Scott, H A; Thorn, D B; Kilkenny, J D; Nelson, D; Shoup, M; Maron, Y

    2016-11-01

    A high resolution (E/ΔE = 1200-1800) Bragg crystal x-ray spectrometer is being developed to measure plasma parameters in National Ignition Facility experiments. The instrument will be a diagnostic instrument manipulator positioned cassette designed mainly to infer electron density in compressed capsules from Stark broadening of the helium-β (1s²-1s3p) lines of krypton and electron temperature from the relative intensities of dielectronic satellites. Two conically shaped crystals will diffract and focus (1) the Kr Heβ complex and (2) the Heα (1s²-1s2p) and Lyα (1s-2p) complexes onto a streak camera photocathode for time resolved measurement, and a third cylindrical or conical crystal will focus the full Heα to Heβ spectral range onto an image plate to provide a time integrated calibration spectrum. Calculations of source x-ray intensity, spectrometer throughput, and spectral resolution are presented. Details of the conical-crystal focusing properties as well as the status of the instrumental design are also presented.

  6. High-resolution mobile optical 3D scanner with color mapping

    NASA Astrophysics Data System (ADS)

    Ramm, Roland; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-07-01

    We introduce a high-resolution mobile handheld scanning device developed at our institute, suitable for 3D data acquisition and analysis in forensic investigations, rapid prototyping, design, quality management, and archaeology, with a measurement volume of approximately 325 mm x 200 mm x 100 mm and a lateral object resolution of 170 µm. The scanner weighs 4.4 kg with an optional color DSLR camera. The PC for measurement control and point calculation is included inside the housing, and power is supplied by rechargeable batteries. The possible operation time is between 30 and 60 minutes. The object distance is between 400 and 500 mm, and the scan time for one 3D shot may vary between 0.1 and 0.5 seconds; the complete 3D result is obtained a few seconds after starting the scan. For higher-quality 3D and color images the scanner can be mounted on a tripod. Objects larger than the measurement volume must be acquired in several partial scans, and the resulting datasets are merged using a suitable software module. The scanner has been successfully used in various applications.

  7. VizieR Online Data Catalog: Line list for seven target PAndAS clusters (Sakari+, 2015)

    NASA Astrophysics Data System (ADS)

    Sakari, C. M.; Venn, K. A.; Mackey, D.; Shetrone, M. D.; Dotter, A.; Ferguson, A. M. N.; Huxor, A.

    2017-11-01

    The targets were observed with the Hobby-Eberly Telescope (HET; Ramsey et al. 1998, Proc. SPIE, 3352, 34; Shetrone et al. 2007PASP..119..556S) at McDonald Observatory in Fort Davis, TX in 2011 and early 2012. The High Resolution Spectrograph (HRS; Tull 1998, Proc. SPIE, 3355, 387) was utilized with the 3-arcsec fibre and a slit width of 1 arcsec, yielding an instrumental spectral resolution of R=30000. With the 600 g/mm cross-disperser set to a central wavelength of 6302.9Å, wavelength coverages of ~5320-6290 and ~6360-7340Å were achieved in the blue and the red, respectively. The 3-arcsec fibre provided coverage of the clusters past their half-light radii; the additional sky fibres (located 10 arcsec from the central object fibre) provided simultaneous observations for sky subtraction. Exposure times were calculated to obtain a total signal-to-noise ratio (S/N)=80 (per resolution element), although not all targets received sufficient time to meet this goal. (2 data files).

  8. A closed-loop time-alignment system for baseband combining

    NASA Technical Reports Server (NTRS)

    Feria, Y.

    1994-01-01

    In baseband combining, the key element is the time alignment of the baseband signals. This article describes a closed-loop time-alignment system that estimates and adjusts the relative delay between two baseband signals received from two different antennas for the signals to be coherently combined. This system automatically determines which signal is advanced and delays it accordingly with a resolution of a sample period. The performance of the loop is analyzed, and the analysis is verified through simulation. The variance of the delay estimates and the signal-to-noise ratio degradation in the simulations agree with the theoretical calculations.
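
    The delay estimate and sample-resolution adjustment described here can be sketched with a cross-correlation: the lag of the correlation peak tells which signal is advanced, and the advanced signal is then delayed accordingly before combining. A minimal open-loop illustration of the same operation (the article's system closes the loop around it); variable names are assumptions.

```python
import numpy as np

def align(x, y):
    """Estimate the relative integer-sample delay between two baseband
    records by cross-correlation, then delay the advanced signal so the
    two can be coherently combined (resolution of one sample period)."""
    n = len(x)
    c = np.correlate(x - x.mean(), y - y.mean(), mode="full")
    lag = int(np.argmax(c)) - (n - 1)   # lag > 0: x lags y (y is advanced)
    if lag > 0:                          # delay y to match x
        y = np.concatenate([np.zeros(lag), y[:n - lag]])
    elif lag < 0:                        # delay x to match y
        x = np.concatenate([np.zeros(-lag), x[:n + lag]])
    return x, y, lag

# Usage: x_a, y_a, d = align(x, y); combined = x_a + y_a
```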

  9. Vibration-rotation alchemy in acetylene (12C2H2), X̃ ¹Σg⁺ at low vibrational excitation: from high resolution spectroscopy to fast intramolecular dynamics

    NASA Astrophysics Data System (ADS)

    Perry, David S.; Miller, Anthony; Amyay, Badr; Fayt, André; Herman, Michel

    2010-04-01

    The link between energy-resolved spectra and time-resolved dynamics is explored quantitatively for acetylene (12C2H2), X̃ ¹Σg⁺ with up to 8600 cm⁻¹ of vibrational energy. This comparison is based on the extensive and reliable knowledge of the vibration-rotation energy levels and on the model Hamiltonian used to fit them to high precision [B. Amyay, S. Robert, M. Herman, A. Fayt, B. Raghavendra, A. Moudens, J. Thiévin, B. Rowe, and R. Georges, J. Chem. Phys. 131, 114301 (2009)]. Simulated intensity-borrowing features in high resolution absorption spectra and predicted survival probabilities in intramolecular vibrational redistribution (IVR) are first investigated for the ν4 + ν5 and ν3 bright states, for J = 2, 30 and 100. The dependence of the results on the rotational quantum number and on the choice of vibrational bright state reflects the interplay of three kinds of off-diagonal resonances: anharmonic, rotational l-type, and Coriolis. The dynamical quantities used to characterize the calculated time-dependent dynamics are the dilution factor φd, the IVR lifetime τIVR, and the recurrence time τrec. For the two bright states ν3 + 2ν4 and 7ν4, the collisionless dynamics for thermally averaged rotational distributions at T = 27, 270 and 500 K were calculated from the available spectroscopic data. For the 7ν4 bright state, an apparent irreversible decay of the survival probability is found. In all cases, the model Hamiltonian allows a detailed calculation of the energy flow among all of the coupled zeroth-order vibration-rotation states.

  10. Theoretical modeling and evaluation of the axial resolution of the adaptive optics scanning laser ophthalmoscope.

    PubMed

    Venkateswaran, Krishnakumar; Roorda, Austin; Romero-Borja, Fernando

    2004-01-01

    We present the axial resolution calculated using a mathematical model of the adaptive optics scanning laser ophthalmoscope (AOSLO). The peak intensity and the width of the axial intensity response are computed with the residual Zernike coefficients after the aberrations are corrected using adaptive optics for eight subjects and compared with the axial resolution of a diffraction-limited eye. The AOSLO currently uses a confocal pinhole that is 80 µm, or 3.48 times the Airy disk radius of the collection optics, and projects to 7.41 µm on the retina. For this pinhole, the axial resolution of a diffraction-limited system is 114 µm, and the computed axial resolution varies between 120 and 146 µm for the human subjects included in this study. The results of this analysis indicate that to improve axial resolution, it is best to reduce the pinhole size; the resulting reduction in detected light may, however, demand a more sophisticated adaptive optics system. The study also shows that imaging systems with large pinholes are relatively insensitive to misalignment in the lateral positioning of the confocal pinhole. However, when small pinholes are used to maximize resolution, alignment becomes critical. © 2004 Society of Photo-Optical Instrumentation Engineers.

  11. Use of a 3-D Dispersion Model for Calculation of Distribution of Horse Allergen and Odor around Horse Facilities

    PubMed Central

    Haeger-Eugensson, Marie; Ferm, Martin; Elfman, Lena

    2014-01-01

    The interest in equestrian sports has increased substantially during the last decades, resulting in an increased number of horse facilities around urban areas. In Sweden, new guidelines for safe distances have been decided based on the size of the horse facility (e.g., number of horses) and local conditions, such as topography and meteorology. There is therefore an increasing need to estimate the dispersion of horse allergens, for example in the planning processes for new residential areas in the vicinity of horse facilities. The aim of this study was to develop a method for calculating short- and long-term emissions and dispersion of horse allergen and odor around horse facilities. First, a method was developed to estimate horse allergen and odor emissions at hourly resolution based on field measurements. Secondly, these emission factors were used to calculate concentrations of horse allergen and odor using 3-D dispersion modeling. Results from these calculations showed that horse allergens spread up to about 200 m, beyond which concentration levels were very low (<2 U/m3). Approximately 10% of a study group detected the smell of manure at 60 m, while the majority (80%-90%) detected the smell at 60 m or shorter distances from the manure heap. Modeling enabled horse allergen exposure concentrations to be determined with good time resolution. PMID:24690946

  12. Automated brain tissue and myelin volumetry based on quantitative MR imaging with various in-plane resolutions.

    PubMed

    Andica, C; Hagiwara, A; Hori, M; Nakazawa, M; Goto, M; Koshino, S; Kamagata, K; Kumamaru, K K; Aoki, S

    2018-05-01

    Segmented brain tissue and myelin volumes can now be automatically calculated using dedicated software (SyMRI), which is based on quantification of the R1 and R2 relaxation rates and proton density. The aim of this study was to determine the validity of SyMRI brain tissue and myelin volumetry at various in-plane resolutions. We scanned 10 healthy subjects on a 1.5T MR scanner with in-plane resolutions of 0.8, 2.0 and 3.0 mm. Two scans were performed for each resolution. The acquisition time was 7 min 24 s for the 0.8 mm, 3 min 9 s for the 2.0 mm and 1 min 56 s for the 3.0 mm resolution. The volumes of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), non-WM/GM/CSF (NoN), brain parenchymal volume (BPV), intracranial volume (ICV) and myelin were compared between in-plane resolutions. Repeatability for each resolution was then analyzed. No significant differences in measured volumes were found between the different in-plane resolutions, except for NoN between 0.8 mm and 2.0 mm and between 2.0 mm and 3.0 mm. The repeatability error values for the WM, GM, CSF, NoN, BPV and myelin volumes relative to ICV were 0.97%, 1.01%, 0.65%, 0.86%, 1.06% and 0.25% at 0.8 mm; 1.22%, 1.36%, 0.73%, 0.37%, 1.18% and 0.35% at 2.0 mm; and 1.18%, 1.02%, 0.96%, 0.45%, 1.36% and 0.28% at 3.0 mm resolution. SyMRI brain tissue and myelin volumetry with low in-plane resolution and short acquisition times is robust and has good repeatability, so it could be useful for follow-up studies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. An Ultra-high Resolution Synthetic Precipitation Data for Ungauged Sites

    NASA Astrophysics Data System (ADS)

    Kim, Hong-Joong; Choi, Kyung-Min; Oh, Jai-Ho

    2018-05-01

    Despite the enormous damage caused by record heavy rainfall, the amount of precipitation in areas without observation points cannot be known precisely. One way to overcome this difficulty is to estimate meteorological data at ungauged sites. In this study, we used observation data over the city of Seoul to calculate high-resolution (250 m) synthetic precipitation over a 10-year (2005-2014) period. Three cases were analyzed by evaluating the rainfall intensity and performing statistical analysis over the 10-year period. For the case in which typhoon "Meari" passed along the west coast during 28-30 June 2011, the Pearson correlation coefficient was 0.93 for seven validation points, which implies a very good temporal correlation between the observed and synthetic precipitation. The synthetic precipitation time series over this period almost completely matches the observed rainfall; on June 28-29, 2011, the estimate of 10 to 30 mm h⁻¹ of continuous strong precipitation was correct. In addition, the synthetic precipitation closely follows the observed precipitation in all three cases. Statistical analysis of the 10 years of data reveals a very high correlation coefficient between synthetic precipitation and observed rainfall (0.86), so the synthetic precipitation data show good agreement with the observations. Therefore, the 250-m resolution synthetic precipitation calculated in this study is useful as basic data in weather applications, such as urban flood detection.

  14. Initial evaluation of the Celesteion large-bore PET/CT scanner in accordance with the NEMA NU2-2012 standard and the Japanese guideline for oncology FDG PET/CT data acquisition protocol version 2.0.

    PubMed

    Kaneta, Tomohiro; Ogawa, Matsuyoshi; Motomura, Nobutoku; Iizuka, Hitoshi; Arisawa, Tetsu; Hino-Shishikura, Ayako; Yoshida, Keisuke; Inoue, Tomio

    2017-10-11

    The goal of this study was to evaluate the performance of the Celesteion positron emission tomography/computed tomography (PET/CT) scanner, which is characterized by a large bore and time-of-flight (TOF) function, in accordance with the NEMA NU-2 2012 standard and version 2.0 of the Japanese guideline for oncology fluorodeoxyglucose PET/CT data acquisition protocol. Spatial resolution, sensitivity, count rate characteristic, scatter fraction, energy resolution, TOF timing resolution, and image quality were evaluated according to the NEMA NU-2 2012 standard. Phantom experiments were performed using ¹⁸F solution and an IEC body phantom of the type described in the NEMA NU-2 2012 standard. The minimum scanning time required for the detection of a 10-mm hot sphere with a 4:1 target-to-background ratio, the phantom noise equivalent count (NECphantom), % background variability (N10mm), % contrast (QH,10mm), and recovery coefficient (RC) were calculated according to the Japanese guideline. The measured spatial resolution ranged from 4.5- to 5-mm full width at half maximum (FWHM). The sensitivity and scatter fraction were 3.8 cps/kBq and 37.3%, respectively. The peak noise-equivalent count rate was 70 kcps in the presence of 29.6 kBq mL⁻¹ in the phantom. The system energy resolution was 12.4% and the TOF timing resolution was 411 ps at FWHM. Minimum scanning times of 2, 7, 6, and 2 min per bed position, respectively, are recommended for the visual score, NECphantom, N10mm, and the QH,10mm to N10mm ratio (QNR) by the Japanese guideline. The RC of a 10-mm-diameter sphere was 0.49, which exceeded the minimum recommended value. The Celesteion large-bore PET/CT system had low sensitivity and NEC, but good spatial and time resolution when compared to other PET/CT scanners. The QNR met the recommended values of the Japanese guideline even at 2 min. The Celesteion is therefore thought to provide acceptable image quality with a 2 min/bed position acquisition, which is the most common scan protocol in Japan.

  15. Controller response to conflict resolution advisory

    DOT National Transportation Integrated Search

    1992-12-01

    Conflict Resolution Advisory (CRA) is an automated software aid for air traffic control specialists at air route traffic control centers (ARTCCs). CRA calculates, validates, and displays to the en route controller a single resolution for predicted se...

  16. Controller Response to Conflict Resolution Advisory

    DOT National Transportation Integrated Search

    1992-12-01

    Conflict Resolution Advisory (CRA) is an automated software aid for air traffic control specialists at air route traffic control centers (ARTCCs). CRA calculates, validates, and displays to the en route controller a single resolution for predicte...

  17. Controller Response to Conflict Resolution Advisory Prototype

    DOT National Transportation Integrated Search

    1991-01-01

    Conflict Resolution Advisory (CRA) is an automated software aid for air traffic control specialists at air route traffic control centers (ARTCCs). CRA calculates, validates, and displays to the en route controller a single resolution for predicte...

  18. New Approaches To Off-Shore Wind Energy Management Exploiting Satellite EO Data

    NASA Astrophysics Data System (ADS)

    Morelli, Marco; Masini, Andrea; Venafra, Sara; Potenza, Marco Alberto Carlo

    2013-12-01

    Wind as an energy resource has been increasingly in focus over the past decades, starting with the global oil crisis in the 1970s. The possibility of expanding wind power production to off-shore locations is attractive, especially at sites where wind levels tend to be higher and more constant. Off-shore high-potential sites for wind energy plants are currently identified by means of wind atlases, which are essentially based on NWP (Numerical Weather Prediction) archive data and which supply information with low spatial resolution and very low accuracy. Moreover, real-time monitoring of active off-shore wind plants is carried out using in-situ anemometers, which are not very reliable (especially over long time periods) and which must be replaced when malfunctions or damage occur. These activities could be greatly supported by archived and near-real-time satellite imagery, which can provide accurate, globally available, high spatial resolution information on both averaged and near-real-time off-shore windiness. In this work we present new methodologies aimed at supporting both the planning and the near-real-time monitoring of off-shore wind energy plants using satellite SAR (Synthetic Aperture Radar) imagery. Such methodologies are currently being developed within SATENERG, a research project funded by ASI (Italian Space Agency). SAR wind data are derived from radar backscattering using empirical geophysical model functions, thus achieving greater accuracy and greater resolution with respect to other wind measurement methods. In detail, we calculate wind speed from X-band and C-band satellite SAR data, such as Cosmo-SkyMed (XMOD2) and ERS and ENVISAT (CMOD4), respectively. Then, using detailed models of each part of the wind plant, we calculate the expected AC power yield, which can be used to support either the design of potential plants (using historical series of satellite images) or the monitoring and performance analysis of active plants (using near-real-time satellite imagery). We have applied these methods in several test cases and obtained successful results in comparison with standard methodologies.

  19. CALCULATION OF GAMMA SPECTRA IN A PLASTIC SCINTILLATOR FOR ENERGY CALIBRATION AND DOSE COMPUTATION.

    PubMed

    Kim, Chankyu; Yoo, Hyunjun; Kim, Yewon; Moon, Myungkook; Kim, Jong Yul; Kang, Dong Uk; Lee, Daehee; Kim, Myung Soo; Cho, Minsik; Lee, Eunjoong; Cho, Gyuseong

    2016-09-01

    Plastic scintillation detectors have practical advantages in the field of dosimetry. Energy calibration of measured gamma spectra is important for dose computation, but it is not simple in plastic scintillators because of their particular characteristics and finite resolution. In this study, the gamma spectra in a polystyrene scintillator were calculated for energy calibration and dose computation. Based on the relationship between the energy resolution and the estimated energy-broadening effect in the calculated spectra, the gamma spectra were calculated simply, without many iterations. The calculated spectra were in agreement with calculations by an existing method and with measurements. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
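
    The energy-broadening step that links an ideal calculated spectrum to what a finite-resolution detector records can be sketched as a Gaussian smearing whose width follows an energy-dependent resolution model. A minimal illustration: the FWHM(E) = a + b·sqrt(E) parameterization and the example values are common assumptions for scintillators, not parameters from this paper.

```python
import numpy as np

def broaden(energies, counts, a, b):
    """Smear an ideal calculated spectrum with a Gaussian whose FWHM
    follows FWHM(E) = a + b*sqrt(E); a and b would come from calibration
    peaks (the values below are assumptions, not the paper's)."""
    out = np.zeros_like(counts, dtype=float)
    for e0, c in zip(energies, counts):
        if c == 0.0:
            continue
        sigma = (a + b * np.sqrt(e0)) / 2.355        # FWHM -> sigma
        g = np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
        out += c * g / g.sum()                       # preserve total counts
    return out

# Example: smear a crude Compton continuum plus a 662 keV photopeak
E = np.arange(1.0, 801.0)                            # keV, 1 keV bins
ideal = np.where(E < 478.0, 5.0, 0.0)                # continuum up to the edge
ideal[661] += 2000.0                                 # photopeak bin at 662 keV
measured = broaden(E, ideal, a=2.0, b=1.5)
```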

  20. Effects of finite spatial resolution on quantitative CBF images from dynamic PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phelps, M.E.; Huang, S.C.; Mahoney, D.K.

    1985-05-01

    The finite spatial resolution of PET causes the time-activity responses of pixels around the boundaries between gray and white matter regions to contain kinetic components from tissues with different CBFs. CBF values estimated from the kinetics of such mixtures are underestimated because of the nonlinear relationship between the time-activity response and the estimated CBF. Computer simulation is used to investigate these effects on phantoms of circular structures and a realistic brain slice in terms of object size and quantitative CBF values. The calculated CBF image is compared to the case of having resolution loss alone. Results show that the size of a high-flow region in the CBF image is decreased while that of a low-flow region is increased. For brain phantoms, the qualitative appearance of CBF images is not seriously affected, but the estimated CBFs are underestimated by 11 to 16 percent in local gray matter regions (of size 1 cm²), with about a 14 percent reduction in global CBF over the whole slice. It is concluded that the combined effect of finite spatial resolution and the nonlinearity in estimating CBF from dynamic PET is quite significant and must be considered in processing and interpreting quantitative CBF images.

  1. Tunable, mixed-resolution modeling using library-based Monte Carlo and graphics processing units

    PubMed Central

    Mamonov, Artem B.; Lettieri, Steven; Ding, Ying; Sarver, Jessica L.; Palli, Rohith; Cunningham, Timothy F.; Saxena, Sunil; Zuckerman, Daniel M.

    2012-01-01

    Building on our recently introduced library-based Monte Carlo (LBMC) approach, we describe a flexible protocol for mixed coarse-grained (CG)/all-atom (AA) simulation of proteins and ligands. In the present implementation of LBMC, protein side chain configurations are pre-calculated and stored in libraries, while bonded interactions along the backbone are treated explicitly. Because the AA side chain coordinates are maintained at minimal run-time cost, arbitrary sites and interaction terms can be turned on to create mixed-resolution models. For example, an AA region of interest such as a binding site can be coupled to a CG model for the rest of the protein. We have additionally developed a hybrid implementation of the generalized Born/surface area (GBSA) implicit solvent model suitable for mixed-resolution models, which in turn was ported to a graphics processing unit (GPU) for faster calculation. The new software was applied to study two systems: (i) the behavior of spin labels on the B1 domain of protein G (GB1) and (ii) docking of randomly initialized estradiol configurations to the ligand binding domain of the estrogen receptor (ERα). The performance of the GPU version of the code was also benchmarked in a number of additional systems. PMID:23162384

  2. Initial test of MITA/DIMM with an operational CBP system

    NASA Astrophysics Data System (ADS)

    Baldwin, Kevin; Hanna, Randall; Brown, Andrea; Brown, David; Moyer, Steven; Hixson, Jonathan G.

    2018-05-01

    The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned approximately 18 months, with the initial project concluding after a test of the MITA system with a fielded CBP system in June 2017. The NVESD contribution to MITA consisted of thermally heated target resolution boards deployed at a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, these tests proved the MITA concept feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure the accuracy and reliability of both the instrument and the imaging system performance predictions.

  3. An objective algorithm for reconstructing the three-dimensional ocean temperature field based on Argo profiles and SST data

    NASA Astrophysics Data System (ADS)

    Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang

    2017-12-01

    While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, with limited coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be evaluated at arbitrary depth. A gridded 3D temperature gradient field is then found by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field is reconstructed below the surface using the gridded temperature gradient. To confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics. Both the accuracy and the superiority of the algorithm are thus validated.
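
    The three computational steps named above (continuous profile fitting, differentiation for the vertical gradient, and horizontal inverse-distance weighting) can be sketched compactly. A minimal illustration under stated assumptions: a polynomial stands in for the paper's optimally fitted depth functions, and the function names are hypothetical.

```python
import numpy as np

def fit_profile(depth, temp, deg=6):
    """Fit one Argo profile with a polynomial in depth so that the
    temperature and its vertical gradient are available at any depth
    (a polynomial stands in for the paper's optimally fitted functions)."""
    f = np.poly1d(np.polyfit(depth, temp, deg))
    return f, f.deriv()                       # T(z) and dT/dz

def idw(xy_obs, values, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of per-profile gradient
    values onto a horizontal grid, one depth level at a time."""
    out = np.empty(len(xy_grid))
    for i, p in enumerate(xy_grid):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if d.min() < 1e-9:                    # grid point on an observation
            out[i] = values[int(np.argmin(d))]
        else:
            w = d ** -power
            out[i] = (w * values).sum() / w.sum()
    return out

# Below-surface reconstruction: integrate the gridded gradient downward
# from the satellite SST, T(z + dz) = T(z) + dT/dz * dz at each grid point.
```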

  4. Satellite-based drought monitoring in Kenya in an operational setting

    NASA Astrophysics Data System (ADS)

    Klisch, A.; Atzberger, C.; Luminari, L.

    2015-04-01

    The University of Natural Resources and Life Sciences (BOKU) in Vienna (Austria), in cooperation with the National Drought Management Authority (NDMA) in Nairobi (Kenya), has set up an operational processing chain for mapping drought occurrence and strength over the territory of Kenya using the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI at 250 m ground resolution from 2000 onwards. The processing chain employs a modified Whittaker smoother providing consistent NDVI "Monday images" in near real-time (NRT) at a 7-day update interval. The approach constrains temporally extrapolated NDVI values based on plausible temporal NDVI paths. In contrast to competing approaches, the processing chain provides a modelled uncertainty range for each pixel and time step. The uncertainties are calculated by a hindcast analysis of the NRT products against an "optimum" filtering. To detect droughts, the vegetation condition index (VCI) is calculated at pixel level and is spatially aggregated to administrative units. Starting from weekly temporal resolution, the indicator is also aggregated over 1- and 3-month intervals, taking the available uncertainty information into account. Analysts at NDMA use the spatially and temporally aggregated VCI and basic image products for their monthly bulletins. Based on the provided bio-physical indicators as well as a number of socio-economic indicators, contingency funds are released by NDMA to support counties in drought conditions. The paper shows the successful application of the products within NDMA by providing a retrospective analysis applied to the droughts of 2006, 2009 and 2011. Some comparisons with alternative products (e.g. FEWS NET, the Famine Early Warning Systems Network) highlight the main differences.
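
    The VCI itself is a simple normalization of the current NDVI against its pixel-wise historical envelope (Kogan's standard formulation), which makes the pixel-level step easy to sketch; the aggregation comment and array names below are illustrative, not the BOKU implementation.

```python
import numpy as np

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index (percent): the position of the current
    NDVI within the pixel's historical min-max envelope for the same
    period of the year. Values near 0 indicate extreme drought stress,
    values near 100 optimal vegetation conditions."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

# Spatial aggregation to an administrative unit (illustrative):
# unit_vci = np.nanmean(vci_map[unit_mask])
```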

  5. Modelled air pollution levels versus EC air quality legislation - results from high resolution simulation.

    PubMed

    Chervenkov, Hristo

    2013-12-01

    An appropriate method for evaluating the air quality of a certain area is to compare the actual air pollution levels with the critical ones prescribed in the legislative standards. The application of numerical simulation models for assessing the real air quality status is allowed by the legislation of the European Community (EC). This approach is preferable especially when the area of interest is relatively big, the network of measurement stations is sparse, and the available observational data are therefore scarce. Such a method is very efficient for assessment studies of this kind due to the continuous spatio-temporal coverage of the obtained results. In this study, the surface-layer concentrations of the harmful substances sulphur dioxide (SO2), nitrogen dioxide (NO2), particulate matter in the coarse (PM10) and fine (PM2.5) fractions, ozone (O3), carbon monoxide (CO) and ammonia (NH3), obtained from modelling simulations with a resolution of 10 km on an hourly basis, are used to calculate the statistical quantities needed for comparison with the corresponding critical levels prescribed in the EC directives. For some of these pollutants (PM2.5, CO and NH3) this is done for the first time at such resolution. The computational grid covers Bulgaria entirely, together with some surrounding territories, and the calculations are made for every year in the period 1991-2000. The results averaged over the whole time slice can be treated as representative of the air quality situation in the last decade of the twentieth century.

  6. Experimental and rendering-based investigation of laser radar cross sections of small unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank

    2017-12-01

    Laser imaging systems are prominent candidates for the detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. The laser reflection characteristics relevant to laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflections in high resolution images. For the first time, LRCSs are determined by a combined experimental and computational approach using high resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated taking into account diffuse and specular reflectance properties based on the Oren-Nayar and the Cook-Torrance reflectance models, respectively.
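
    Both reflectance models named above are standard and compact enough to sketch: Oren-Nayar for the rough-diffuse term and Cook-Torrance (here with a Beckmann microfacet distribution and Schlick's Fresnel approximation) for the specular term. A minimal single-point evaluation under assumed unit vectors and parameters; it is not the surface model fitted in the paper.

```python
import numpy as np

def _normalize(v):
    return v / np.linalg.norm(v)

def oren_nayar(n, l, v, albedo, sigma):
    """Rough-diffuse reflected radiance factor; sigma = roughness (rad)."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    cos_i, cos_r = max(n @ l, 1e-6), max(n @ v, 1e-6)
    ti, tr = np.arccos(cos_i), np.arccos(cos_r)
    alpha, beta = max(ti, tr), min(ti, tr)
    lp, vp = l - cos_i * n, v - cos_r * n          # tangent-plane projections
    norm = np.linalg.norm(lp) * np.linalg.norm(vp)
    cos_phi = max((lp @ vp) / norm, 0.0) if norm > 1e-9 else 0.0
    return (albedo / np.pi) * cos_i * (A + B * cos_phi * np.sin(alpha) * np.tan(beta))

def cook_torrance(n, l, v, m, f0):
    """Specular term: Beckmann distribution D, geometric attenuation G,
    Schlick Fresnel F; m = RMS slope, f0 = normal-incidence reflectance."""
    h = _normalize(l + v)
    nh = n @ h
    nl, nv, vh = max(n @ l, 1e-6), max(n @ v, 1e-6), max(v @ h, 1e-6)
    D = np.exp((nh * nh - 1.0) / (m * m * nh * nh)) / (np.pi * m * m * nh ** 4)
    G = min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    return D * G * F / (4.0 * nl * nv)

# Example: 30-degree illumination, viewing along the surface normal
n = np.array([0.0, 0.0, 1.0])
l = _normalize(np.array([0.5, 0.0, np.sqrt(0.75)]))
v = np.array([0.0, 0.0, 1.0])
brdf = oren_nayar(n, l, v, albedo=0.6, sigma=0.3) + cook_torrance(n, l, v, m=0.25, f0=0.04)
```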

  7. A novel algorithm for monitoring reservoirs under all-weather conditions at a high temporal resolution through passive microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Gao, Huilin

    2016-08-01

    Flood mitigation in developing countries has been hindered by a lack of near real-time reservoir storage information at high temporal resolution. By leveraging satellite passive microwave observations over a reservoir and its vicinity, we present a globally applicable new algorithm to estimate reservoir storage under all-weather conditions at a 4 day time step. A weighted horizontal ratio (WHR) based on the brightness temperatures at 36.5 GHz is introduced, with its coefficients calibrated against an area training data set over each reservoir. Using a predetermined area-elevation (A-H) relationship, these coefficients are then applied to the microwave data to calculate the storage. Validation results over four reservoirs in South Asia indicate that the microwave-based storage estimations (after noise reduction) perform well (with coefficients of determination ranging from 0.41 to 0.74). This is the first time that passive microwave observations are fused with other satellite data for quantifying the storage of individual reservoirs.

  8. Probabilistic Assessment of Hypobaric Decompression Sickness Treatment Success

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Abercromby, Andrew F. J.; Dervay, Joseph P.; Feiveson, Alan H.; Gernhardt, Michael L.; Norcross, Jason R.; Ploutz-Snyder, Robert; Wessel, James H., III

    2014-01-01

    The Hypobaric Decompression Sickness (DCS) Treatment Model links a decrease in computed bubble volume from increased pressure (ΔP), increased oxygen (O2) partial pressure, and passage of time during treatment to the probability of symptom resolution [P(symptom resolution)]. The decrease in offending volume is realized in 2 stages: a) during compression via Boyle's Law and b) during subsequent dissolution of the gas phase via the O2 window. We established an empirical model for P(symptom resolution) while accounting for multiple symptoms within subjects. The data consisted of 154 cases of hypobaric DCS symptoms along with ancillary information from tests on 56 men and 18 women. Our best estimated model is P(symptom resolution) = 1 / (1 + exp(-(ln(ΔP) - 1.510 + 0.795×AMB - 0.00308×Ts) / 0.478)), where ΔP is the pressure difference (psid), AMB = 1 if ambulation took place during part of the altitude exposure (otherwise AMB = 0), and Ts is the elapsed time in minutes from the start of the altitude exposure to recognition of a DCS symptom. To apply this model in future scenarios, values of ΔP as inputs to the model would be calculated from the Tissue Bubble Dynamics Model based on the effective treatment pressure: ΔP = P2 - P1 = P1×V1/V2 - P1, where V1 is the computed volume of a spherical bubble in a unit volume of tissue at low pressure P1 and V2 is the computed volume after a change to a higher pressure P2. If 100% ground level O2 (GLO) were breathed in place of air, then V2 would continue to decrease through time at P2 at a faster rate. This calculated value of ΔP then represents the effective treatment pressure at any point in time. Simulation of a "pain-only" symptom at 203 min into an ambulatory extravehicular activity (EVA) at 4.3 psia on Mars resulted in a P(symptom resolution) of 0.49 (0.36 to 0.62 95% confidence intervals) on immediate return to 8.2 psia in the Multi-Mission Space Exploration Vehicle. The P(symptom resolution) increased to near certainty (0.99) after 2 hrs of GLO at 8.2 psia, or with less certainty on immediate pressurization to 14.7 psia [0.90 (0.83 - 0.95)]. Given the low probability of DCS during EVA and the prompt treatment of a symptom with guidance from the model, it is likely that the symptom and gas phase will resolve with minimum resources and minimal impact on astronaut health, safety, and productivity.
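
    Since the fitted logistic model is quoted in full above, it can be evaluated directly; the sketch below does so. The check at the end uses the raw cabin pressure difference in place of the bubble-model effective ΔP, so it only roughly approximates the quoted 0.49.

```python
import math

def p_resolution(delta_p, amb, ts_min):
    """Probability of DCS symptom resolution from the fitted model above:
    delta_p is the effective treatment pressure (psid), amb = 1 if the
    subject ambulated during the altitude exposure, ts_min the time in
    minutes from start of exposure to symptom recognition."""
    z = (math.log(delta_p) - 1.510 + 0.795 * amb - 0.00308 * ts_min) / 0.478
    return 1.0 / (1.0 + math.exp(-z))

# Rough check against the Mars EVA scenario in the text: the raw repress
# difference (8.2 - 4.3 = 3.9 psid) stands in for the bubble-model
# effective pressure and gives ~0.51, near the quoted 0.49.
print(p_resolution(3.9, amb=1, ts_min=203))
```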

  9. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    The spatial resolution computation of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be given indicatively via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operation. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often a difficult problem for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrated that this simple method is valid for, at least, general linear inverse problems.

  10. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are expected of a modern tracking system. Because the tracked object is not always clearly visible, the tracking result can be imprecise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution ones. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation. The method used for tracking is Camshift, whose advantage is a simple calculation based on the HSV color histogram, which copes with varying object colors under some conditions. The computational complexity and large memory requirements of implementing super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good light conditions.
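
    The tracking half of the pipeline maps directly onto OpenCV's CamShift: build an HSV hue histogram of the target once, backproject it onto each new frame, and let CamShift adapt the search window. A minimal sketch; the video file name, initial window and mask thresholds are placeholder assumptions, and the paper's super-resolution step would be applied to each frame before backprojection.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")              # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 40, 40                    # assumed initial target window
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
# Hue histogram of the target, restricted to reasonably saturated pixels
mask = cv2.inRange(roi, (0, 60, 32), (180, 255, 255))
hist = cv2.calcHist([roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # target likelihood
    rect, window = cv2.CamShift(back, window, term)            # adaptive mean shift
    box = np.int32(cv2.boxPoints(rect))                        # rotated bounding box
    cv2.polylines(frame, [box], True, (255, 255, 255), 2)
cap.release()
```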

  11. Application of multi-grid method on the simulation of incremental forging processes

    NASA Astrophysics Data System (ADS)

    Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel

    2016-10-01

    Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a powerful tool for revealing physical phenomena, but behind the scenes an expensive bill must be paid: computational time. That is why many techniques have been developed to decrease the computational time of numerical simulation. The multi-grid method is a numerical procedure that reduces the computational time of a calculation by solving the system of equations on several meshes of decreasing size, which damps the low-frequency error components of the solution as quickly as the high-frequency ones. In this paper a multi-grid method is applied to the cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
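
    The mesh hierarchy idea is easiest to see on a model problem. The sketch below is a textbook V-cycle for the 1D Poisson equation with weighted-Jacobi smoothing, restriction by injection and linear prolongation; it illustrates the multi-grid principle only and is unrelated to the Forge 3 implementation.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted Jacobi smoothing for -u'' = f on a uniform 1D grid."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h, nu=3):
    """One V-cycle: smooth, restrict the residual to a mesh twice as
    coarse, solve there recursively, prolong the correction back, smooth."""
    u = jacobi(u, f, h, nu)
    if len(u) <= 3:                           # coarsest grid reached
        return u
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2].copy()                        # restriction by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu)
    e = np.zeros_like(u)                      # linear prolongation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, nu)

# Example: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
n = 129                                       # 2^k + 1 grid points
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, x[1] - x[0])
```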

  12. Calculation of Rate Spectra from Noisy Time Series Data

    PubMed Central

    Voelz, Vincent A.; Pande, Vijay S.

    2011-01-01

    As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
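
    The regularized discrete inverse Laplace transform described here can be prototyped in a few lines: expand the decay on a fixed grid of rates and solve a non-negative least-squares problem with Tikhonov rows appended. A minimal sketch of one regularization choice among those the paper compares; the rate grid, noise level and lam value are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def rate_spectrum(t, y, k_grid, lam=0.1):
    """Fit y(t) as a non-negative sum of exponentials exp(-k*t) over a
    fixed grid of rates, with Tikhonov regularization (strength lam) to
    tame the ill-conditioning; returns the amplitude of each rate."""
    K = np.exp(-np.outer(t, k_grid))                 # design matrix
    # Appending lam*I rows makes nnls minimize |K a - y|^2 + lam^2 |a|^2
    A = np.vstack([K, lam * np.eye(len(k_grid))])
    b = np.concatenate([y, np.zeros(len(k_grid))])
    amp, _ = nnls(A, b)
    return amp

# Two-exponential test signal with noise
t = np.linspace(0.0, 10.0, 400)
y = 0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-5.0 * t) + 0.01 * np.random.randn(t.size)
k_grid = np.logspace(-2, 2, 100)                     # rates, log-spaced
spec = rate_spectrum(t, y, k_grid)
```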

  13. Scintillation properties of Gd3Al2Ga3O12:Ce3+ single crystal scintillators

    NASA Astrophysics Data System (ADS)

    Sakthong, Ongsa; Chewpraditkul, Weerapong; Wanarak, Chalerm; Kamada, Kei; Yoshikawa, Akira; Prusa, Petr; Nikl, Martin

    2014-07-01

    The scintillation properties of Gd3Al2Ga3O12:Ce3+ (GAGG:Ce) single crystals grown by the Czochralski method with 1 at% cerium in the melt were investigated, and the results were compared with those previously published in the literature. The light yield (LY) and energy resolution were measured using an XP5200B photomultiplier. Despite an LY about twice as high for GAGG:Ce, its energy resolution is only slightly better than that of LuAG:Ce due to its worse intrinsic resolution and LY non-proportionality. The dependence of the LY on sample thickness and amplifier shaping time was measured. The estimated photofraction in pulse height spectra of 320 and 662 keV γ-rays and the total mass attenuation coefficient at 662 keV were also determined and compared with theoretical values calculated using the WinXCom program.

  14. Performance Calculations for the ITER Core Imaging X-Ray Spectrometer (CIXS)

    NASA Astrophysics Data System (ADS)

    Hill, K. W.; Delgado-Aparicio, L.; Pablant, N.; Johnson, D.; Feder, R.; Klabacha, J.; Stratton, B.; Bitter, M.; Beiersdorfer, P.; Barnsley, R.; Bertschinger, G.; O'Mullane, M.; Lee, S. G.

    2013-10-01

    The US is providing a 1D imaging x-ray crystal spectrometer system as a primary diagnostic for measuring profiles of ion temperature (Ti) and toroidal flow velocity (v) in the ITER plasma core (r/a = 0-0.85). The diagnostic must provide high spectral resolution (E/ΔE > 5,000), spatial resolution of 10 cm, and time resolution of 10-100 ms, and must operate and survive in an environment having high neutron and gamma-ray fluxes. This work presents spectral simulations and tomographic inversions for obtaining local Ti and v, comparisons of the expected count rate profiles to the requirements, the degradation of performance due to the nuclear radiation background, and measurements of the rejection of nuclear background by detector pulse-height discrimination. This work was performed under the auspices of the DOE by PPPL under contract DE-AC02-09CH11466 and by LLNL under contract DE-AC52-07NA27344.

  15. Photoionization Rate of Atomic Oxygen

    NASA Astrophysics Data System (ADS)

    Meier, R. R.; McLaughlin, B. M.; Warren, H. P.; Bishop, J.

    2006-05-01

    Accurate knowledge of the photoionization rate of atomic oxygen is important for the study and understanding of the ionospheres and emission processes of terrestrial, planetary, and cometary atmospheres. Past calculations of the photoionization rate have been carried out at various spectral resolutions, but none were at sufficiently high resolution to accommodate accidental resonances between solar emission lines and highly structured auto-ionization features in the photoionization cross section. A new version of the NRLEUV solar spectral irradiance model (at solar minimum) and a new model of the O photoionization cross section enable calculations at very high spectral resolution. We find unattenuated photoionization rates computed at 0.001 nm resolution are larger than those at moderate resolution (0.1 nm) by amounts approaching 20%. Allowing for attenuation in the terrestrial atmosphere, we find differences in photoionization rates computed at high and moderate resolution to vary with altitude, especially below 200 km where deviations of plus or minus 20% occur between the two cases.
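
    The rate itself is the wavelength integral of the cross section weighted by the solar photon flux, so the resolution effect discussed above amounts to evaluating this integral on grids of different fineness. A minimal sketch assuming both quantities are already sampled on a common wavelength grid; variable names are illustrative.

```python
import numpy as np

def photoionization_rate(wavelength_nm, photon_flux, sigma_cm2):
    """Unattenuated photoionization rate J (s^-1):
    J = integral of sigma(lambda) * F(lambda) d(lambda),
    with F the solar photon flux (photons cm^-2 s^-1 nm^-1) and sigma the
    photoionization cross section (cm^2). A coarse grid averages over the
    auto-ionization resonances; a fine grid (down to ~0.001 nm) resolves
    their accidental overlap with solar emission lines."""
    return np.trapz(sigma_cm2 * photon_flux, wavelength_nm)
```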

  16. Evaluation of a New Backtrack Free Path Planning Algorithm for Manipulators

    NASA Astrophysics Data System (ADS)

    Islam, Md. Nazrul; Tamura, Shinsuke; Murata, Tomonari; Yanase, Tatsuro

    This paper evaluates a newly proposed backtrack-free path planning algorithm (BFA) for manipulators. BFA is an exact algorithm, i.e. it is resolution complete. Unlike existing resolution-complete algorithms, its computation time and memory space are proportional to the number of arms. Therefore paths can be calculated within a practical, predetermined time even for manipulators with many arms, and it becomes possible to plan complicated motions of multi-arm manipulators in fully automated environments. The performance of BFA is evaluated in 2-dimensional environments while changing the number of arms and obstacle placements. Its performance under locus and attitude constraints is also evaluated. Evaluation results show that the computation volume of the algorithm is almost the same as the theoretical one, i.e. it increases linearly with the number of arms even in complicated environments. Moreover, BFA achieves constant performance independent of the environment.

  17. Preliminary design and performance of an advanced gamma ray spectrometer for future orbiter missions. [composition and evolution of planets

    NASA Technical Reports Server (NTRS)

    Metzger, A. E.; Parker, R. H.; Arnold, J. R.; Reedy, R. C.; Trombka, J. I.

    1975-01-01

    A knowledge of the composition of planets, satellites, and asteroids is of primary importance in understanding the formation and evolution of the solar system. Gamma-ray spectroscopy is capable of measuring the composition of meter-depth surface material from orbit around any body possessing little or no atmosphere. Measurement sensitivity is determined by detector efficiency and resolution, counting time, and the background flux while the effective spatial resolution depends upon the field-of-view and counting time together with the regional contrast in composition. The advantages of using germanium as a detector of gamma rays in space are illustrated experimentally and a compact instrument cooled by passive thermal radiation is described. Calculations of the expected sensitivity of this instrument at the Moon and Mars show that at least a dozen elements will be detected, twice the number which have been isolated in the Apollo gamma-ray data.

  18. Developing of the database of meteorological and radiation fields for Moscow region (urban reanalysis) for 1981-2014 period with high spatial and temporal resolution. Strategy and first results.

    NASA Astrophysics Data System (ADS)

    Konstantinov, Pavel; Varentsov, Mikhail; Platonov, Vladimir; Samsonov, Timofey; Zhdanova, Ekaterina; Chubarova, Natalia

    2017-04-01

    The main goal of this investigation is to develop a kind of "urban reanalysis": a database of meteorological and radiation fields for the Moscow megalopolis for the period 1981-2014 with high spatial resolution. The main meteorological fields for the Moscow region are reproduced with the COSMO-CLM regional model (including urban parameters) at a horizontal resolution of 1x1 km; the time resolution of the output fields is 1 hour. For the radiation fields it is useful to calculate the SVF (Sky View Factor) to obtain the losses of UV radiation in complex urban conditions. For raster-based SVF analysis, the shadow-casting algorithm proposed by Richens (1997) is popular (see Ratti and Richens 2004, Gal et al. 2008, for example); the SVF image is obtained by combining shadow images computed from different directions. An alternative is a raster-based SVF calculation similar to the vector approach, using a digital elevation model of the urban relief. The output radiation field includes UV radiation at a horizontal resolution of 1x1 km. This study was financially supported by the Russian Foundation for Basic Research within the framework of scientific projects no. 15-35-21129 mol_a_ved and no. 15-35-70006 mol_a_mos. References: 1. Gal, T., Lindberg, F., and Unger, J., 2008. Computing continuous sky view factors using 3D urban raster and vector databases: comparison and application to urban climate. Theoretical and Applied Climatology, 95 (1-2), 111-123. 2. Richens, P., 1997. Image processing for urban scale environmental modelling. In: J.D. Spitler and J.L.M. Hensen, eds. International IBPSA Conference Building Simulation, Prague. 3. Ratti, C. and Richens, P., 2004. Raster analysis of urban form. Environment and Planning B: Planning and Design, 31 (2), 297-309.
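
    As an illustration of the raster SVF computation discussed here, the sketch below uses the common horizon-angle formulation (SVF as the mean of cos² of the horizon angle over sampled azimuths), which is a simplified relative of the Richens shadow-casting approach rather than a reimplementation of it; the grid spacing, search distance and direction count are assumptions.

```python
import numpy as np

def sky_view_factor(dem, cell, i, j, n_dirs=16, max_dist=500.0):
    """SVF at DEM cell (i, j): scan n_dirs azimuths, find the horizon
    (maximum elevation angle) along each ray out to max_dist meters,
    and average the visible-sky fraction cos^2(horizon) over azimuths."""
    z0 = dem[i, j]
    total = 0.0
    for a in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False):
        h_max = 0.0
        d = cell
        while d <= max_dist:
            ii = int(round(i + d * np.sin(a) / cell))
            jj = int(round(j + d * np.cos(a) / cell))
            if not (0 <= ii < dem.shape[0] and 0 <= jj < dem.shape[1]):
                break
            h = np.arctan2(dem[ii, jj] - z0, d)    # elevation angle
            h_max = max(h_max, h)
            d += cell
        total += np.cos(h_max) ** 2                # sky fraction, this azimuth
    return total / n_dirs
```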

  19. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    PubMed Central

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is taken as the reference image. Subsequently, the spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model; the non-redundant information is processed by difference calculation, and the non-redundant layers and the redundant layer are expanded by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to enhance small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893

  20. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    PubMed

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is taken as the reference image. Subsequently, the spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model; the non-redundant information is processed by difference calculation, and the non-redundant layers and the redundant layer are expanded by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to enhance small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements.
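
    The reference-image selection step described in this record reduces to picking the image with maximum Shannon entropy. A minimal sketch, assuming 8-bit grayscale numpy arrays; the histogram-based entropy definition is the standard one, implied rather than spelled out by the abstract.

        import numpy as np

        def shannon_entropy(img):
            """Entropy in bits of the grayscale intensity histogram."""
            hist, _ = np.histogram(img, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def pick_reference(images):
            """Index of the image with the maximum information entropy."""
            return int(np.argmax([shannon_entropy(im) for im in images]))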

  1. Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations

    NASA Astrophysics Data System (ADS)

    Linders, Viktor; Kupiainen, Marco; Nordström, Jan

    2017-07-01

    We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.

  2. Molecular dynamics simulations of field emission from a prolate spheroidal tip

    NASA Astrophysics Data System (ADS)

    Torfason, Kristinn; Valfells, Agust; Manolescu, Andrei

    2016-12-01

    High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission from a prolate spheroidal tip. The space-charge limited current is several times lower than the current calculated with the Fowler-Nordheim formula. The image charge is taken into account with a spherical approximation, which is good around the top of the tip, i.e., the region where the current is generated.
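
    The Fowler-Nordheim comparison mentioned above uses the standard cold field emission relation J = (A/phi) F^2 exp(-B phi^(3/2) / F). A minimal sketch with the textbook constants; barrier-shape correction factors are omitted, and the example field and work function are illustrative.

        import math

        A_FN = 1.541434e-6   # A eV V^-2
        B_FN = 6.830890e9    # eV^-3/2 V m^-1

        def fowler_nordheim_j(field_v_per_m, phi_ev):
            """Elementary FN current density J [A/m^2]."""
            return (A_FN / phi_ev) * field_v_per_m ** 2 * math.exp(
                -B_FN * phi_ev ** 1.5 / field_v_per_m)

        # Example: F = 5 GV/m on a phi = 4.5 eV surface
        print(f"J = {fowler_nordheim_j(5e9, 4.5):.2e} A/m^2")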

  3. MICROMEGAS calibration for ACTAR TPC

    NASA Astrophysics Data System (ADS)

    Mauss, B.; Roger, T.; Pancin, J.; Damoy, S.; Grinyer, G. F.

    2018-02-01

    Active targets, such as the ACtive TARget and Time Projection Chamber (ACTAR TPC) being developed at GANIL, are detection systems that operate on the basis of a time projection chamber but in which the filling gas also serves as a thick target for nuclear reactions. In nuclear physics experiments, the energy resolution is of primary importance to identify the reaction products and to precisely reconstruct level schemes of nuclei. These measurements are based on the energy deposited on a pixelated pad plane. A MICROMEGAS detector is used in ACTAR TPC for ionization electron collection and amplification, and it is a major contributor to the energy dispersion through, for example, inhomogeneities of the amplification gap. A variation of one percent in the gap can lead to an amplitude variation of more than two percent, which is of the same order as the resolution obtained with an energy deposition of 5 MeV. One way to calibrate the pad plane is with a two-dimensional source scanning table. It is used to calibrate the gain inhomogeneities and, using MAGBOLTZ calculations, to deduce the corresponding gap variations. The inverse of this method would allow the relative gain variations to be calculated for the different gas mixtures and pressures used in experiments with ACTAR TPC.

  4. Near-field transport imaging applied to photovoltaic materials

    DOE PAGES

    Xiao, Chuanxiao; Jiang, Chun -Sheng; Moseley, John; ...

    2017-05-26

    We developed and applied a new analytical technique - near-field transport imaging (NF-TI or simply TI) - to photovoltaic materials. Charge-carrier transport is an important factor in solar cell performance, and TI is an innovative approach that integrates a scanning electron microscope with a near-field scanning optical microscope, providing the possibility to study luminescence associated with recombination and transport with high spatial resolution. In this paper, we describe in detail the technical barriers we had to overcome to develop the technique for routine application and the data-fitting procedure used to calculate minority-carrier diffusion length values. The diffusion length measured by TI agrees well with the results calculated by time-resolved photoluminescence on well-controlled gallium arsenide (GaAs) thin-film samples. We report for the first time on measurements on thin-film cadmium telluride using this technique, including the determination of effective carrier diffusion length, as well as the first near-field imaging of the effect of a single localized defect on carrier transport and recombination in a GaAs heterostructure. Furthermore, by changing the scanning setup, we were able to demonstrate near-field cathodoluminescence (CL), and correlated the results with standard CL measurements. In conclusion, the TI technique shows great potential for mapping transport properties in solar cell materials with high spatial resolution.

  5. Dynamic Granger-Geweke causality modeling with application to interictal spike propagation

    PubMed Central

    Lin, Fa-Hsuan; Hara, Keiko; Solo, Victor; Vangel, Mark; Belliveau, John W.; Stufflebeam, Steven M.; Hamalainen, Matti S.

    2010-01-01

    A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using Structural Equation Modeling (SEM) and, more recently, Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis. We have developed a novel, optimal method for obtaining data-driven directional causality estimates with high temporal resolution in both the time and frequency domains. This is achieved by simultaneously optimizing the length of the analysis window and the AR model order using the SURE criterion. Dynamic Granger-Geweke causality in the time and frequency domains is subsequently calculated within a moving analysis window. We tested our algorithm by calculating the Granger-Geweke causality of epileptic spike propagation from the right frontal lobe to the left frontal lobe. The results quantitatively suggest that the epileptic activity in the left frontal lobe propagated from the right frontal lobe, in agreement with the clinical diagnosis. Our novel computational tool can be used to help elucidate complex directional interactions in the human brain. PMID:19378280
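
    As background for the causality measure used here, the following is a minimal sketch of the time-domain Geweke statistic inside a moving window, using plain least-squares AR fits on numpy arrays; the window length, step, and model order are illustrative placeholders for the SURE-optimized values the abstract describes.

        import numpy as np

        def lagged(x, p):
            """Design matrix of p past values of x, aligned with x[p:]."""
            return np.column_stack(
                [x[p - k - 1:len(x) - k - 1] for k in range(p)])

        def gg_causality(x, y, p=5):
            """Geweke measure F_{y->x} = ln(var_restricted / var_full)."""
            target = x[p:]
            Xr = lagged(x, p)                   # restricted: past of x only
            Xf = np.hstack([Xr, lagged(y, p)])  # full: past of x and y
            res_r = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
            res_f = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
            return float(np.log(res_r.var() / res_f.var()))

        def sliding_gg(x, y, win=200, step=50, p=5):
            """Dynamic causality evaluated within a moving analysis window."""
            return [gg_causality(x[i:i + win], y[i:i + win], p)
                    for i in range(0, len(x) - win + 1, step)]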

  6. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
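
    A minimal sketch of the Nyquist-derived step sizes together with the empirically preferred scalings reported in the conclusions; the system resolution and rotation radius are illustrative inputs, and the half/twice factors follow the abstract rather than a general rule.

        import math

        def helical_steps(fwhm_mm, radius_mm):
            """Axial step [mm] and angular step [deg] for a helical scan."""
            nyquist_mm = fwhm_mm / 2.0        # Nyquist sampling distance
            axial_step = 0.5 * nyquist_mm     # half the Nyquist-derived value
            angular_step = 2.0 * math.degrees(nyquist_mm / radius_mm)  # twice it
            return axial_step, angular_step

        print(helical_steps(fwhm_mm=2.0, radius_mm=30.0))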

  7. Compton scatter tomography in TOF-PET

    NASA Astrophysics Data System (ADS)

    Hemmati, Hamidreza; Kamali-Asl, Alireza; Ay, Mohammadreza; Ghafarian, Pardis

    2017-10-01

    Scatter coincidences contain hidden information about the activity distribution in positron emission tomography (PET) imaging systems. However, in conventional reconstruction, the scattered data cause blurring of the images and are therefore estimated and subtracted from the detected coincidences. The list-mode format makes it possible to use the time-of-flight (TOF) and energy information of each coincidence in the reconstruction process. In this study, a novel approach is proposed to reconstruct the activity distribution using the scattered data in a PET system. For each single-scattering coincidence, the scattering angle can be determined from the recorded energies of the detected photons, and the possible locations of scattering can then be calculated from the scattering angle. Geometry shows that these sites lie on two arcs in 2D mode, or on the surface of a prolate spheroid in 3D mode, passing through the pair of detector elements. The proposed method uses a novel and flexible technique to estimate source origin locations from the possible scattering locations using the TOF information. Evaluations were based on Monte Carlo simulations of uniform and non-uniform phantoms at different time and detector-energy resolutions. The results show that although energy uncertainties deteriorate the image spatial resolution in the proposed method, the time resolution has more impact on image quality than the energy resolution. As TOF systems progress, reconstruction using the scattered data could be used in a complementary manner, or to improve image quality, in the next generation of PET systems.
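
    The energy-to-angle step described above is standard Compton kinematics for a 511 keV annihilation photon. A minimal sketch; energy blurring is ignored and the example deposited energy is illustrative.

        import math

        MEC2_KEV = 511.0  # electron rest energy

        def compton_angle(deposited_kev, e0_kev=511.0):
            """Scattering angle [rad] from the energy left in the first crystal."""
            e_scattered = e0_kev - deposited_kev
            cos_theta = 1.0 - MEC2_KEV * (1.0 / e_scattered - 1.0 / e0_kev)
            return math.acos(max(-1.0, min(1.0, cos_theta)))

        print(math.degrees(compton_angle(170.0)))  # ~60 degrees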

  8. Rippling ultrafast dynamics of suspended 2D monolayers, graphene

    PubMed Central

    Hu, Jianbo; Vanacore, Giovanni M.; Cepellotti, Andrea; Marzari, Nicola; Zewail, Ahmed H.

    2016-01-01

    Here, using ultrafast electron crystallography (UEC), we report the observation of rippling dynamics in suspended monolayer graphene, the prototypical and most-studied 2D material. The high scattering cross-section for electron/matter interaction, the atomic-scale spatial resolution, and the ultrafast temporal resolution of UEC represent the key elements that make this technique a unique tool for the dynamic investigation of 2D materials, and nanostructures in general. We find that, at early times after the ultrafast optical excitation, graphene undergoes a lattice expansion on a time scale of 5 ps, which is due to the excitation of short-wavelength in-plane acoustic phonon modes that stretch the graphene plane. On a longer time scale, a slower thermal contraction with a time constant of 50 ps is observed and associated with the excitation of out-of-plane phonon modes, which drive the lattice toward thermal equilibrium with the well-known negative thermal expansion coefficient of graphene. From our results and first-principles lattice dynamics and out-of-equilibrium relaxation calculations, we quantitatively elucidate the deformation dynamics of the graphene unit cell. PMID:27791028

  9. Rippling ultrafast dynamics of suspended 2D monolayers, graphene.

    PubMed

    Hu, Jianbo; Vanacore, Giovanni M; Cepellotti, Andrea; Marzari, Nicola; Zewail, Ahmed H

    2016-10-25

    Here, using ultrafast electron crystallography (UEC), we report the observation of rippling dynamics in suspended monolayer graphene, the prototypical and most-studied 2D material. The high scattering cross-section for electron/matter interaction, the atomic-scale spatial resolution, and the ultrafast temporal resolution of UEC represent the key elements that make this technique a unique tool for the dynamic investigation of 2D materials, and nanostructures in general. We find that, at early times after the ultrafast optical excitation, graphene undergoes a lattice expansion on a time scale of 5 ps, which is due to the excitation of short-wavelength in-plane acoustic phonon modes that stretch the graphene plane. On a longer time scale, a slower thermal contraction with a time constant of 50 ps is observed and associated with the excitation of out-of-plane phonon modes, which drive the lattice toward thermal equilibrium with the well-known negative thermal expansion coefficient of graphene. From our results and first-principles lattice dynamics and out-of-equilibrium relaxation calculations, we quantitatively elucidate the deformation dynamics of the graphene unit cell.

  10. Modelling the line shape of very low energy peaks of positron beam induced secondary electrons measured using a time of flight spectrometer

    NASA Astrophysics Data System (ADS)

    Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.

    2017-01-01

    In this paper, we present results of numerical modelling of the University of Texas at Arlington's time-of-flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time-of-flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
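
    A minimal sketch of the convolution step, assuming a uniformly binned TOF histogram held in numpy arrays; the 2.3 ns FWHM is the value quoted in the abstract, while the binning is illustrative.

        import numpy as np

        def convolve_timing(tof_ns, counts, fwhm_ns=2.3):
            """Blur a binned TOF spectrum with a Gaussian timing response."""
            sigma = fwhm_ns / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            dt = tof_ns[1] - tof_ns[0]          # uniform bin width assumed
            kt = np.arange(-5.0 * sigma, 5.0 * sigma + dt, dt)
            kernel = np.exp(-0.5 * (kt / sigma) ** 2)
            kernel /= kernel.sum()              # preserve total counts
            return np.convolve(counts, kernel, mode="same")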

  11. Data-resolution matrix and model-resolution matrix for Rayleigh-wave inversion using a damped least-squares method

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted, and we found that there are advantages to incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results, and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
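
    For reference, the damped least-squares resolution matrices have the standard closed forms N = G(G^T G + eps^2 I)^(-1) G^T and R = (G^T G + eps^2 I)^(-1) G^T G. A minimal sketch; the data kernel G and the damping eps are illustrative inputs.

        import numpy as np

        def resolution_matrices(G, eps):
            """Data-resolution N and model-resolution R for damped LSQ."""
            GtG = G.T @ G
            G_dag = np.linalg.solve(GtG + eps ** 2 * np.eye(GtG.shape[0]), G.T)
            N = G @ G_dag   # predicted data: d_pre = N @ d_obs
            R = G_dag @ G   # recovered model: m_est = R @ m_true
            return N, R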

  12. Critical scales to explain urban hydrological response: an application in Cranbrook, London

    NASA Astrophysics Data System (ADS)

    Cristiano, Elena; ten Veldhuis, Marie-Claire; Gaitan, Santiago; Ochoa Rodriguez, Susana; van de Giesen, Nick

    2018-04-01

    Rainfall variability in space and time, in relation to catchment characteristics and model complexity, plays an important role in explaining the sensitivity of hydrological response in urban areas. In this work we present a new approach to classify rainfall variability in space and time and we use this classification to investigate rainfall aggregation effects on urban hydrological response. Nine rainfall events, measured with a dual polarimetric X-Band radar instrument at the CAESAR site (Cabauw Experimental Site for Atmospheric Research, NL), were aggregated in time and space in order to obtain different resolution combinations. The aim of this work was to investigate the influence that rainfall and catchment scales have on hydrological response in urban areas. Three dimensionless scaling factors were introduced to investigate the interactions between rainfall and catchment scale and rainfall input resolution in relation to the performance of the model. Results showed that (1) rainfall classification based on cluster identification well represents the storm core, (2) aggregation effects are stronger for rainfall than flow, (3) model complexity does not have a strong influence compared to catchment and rainfall scales for this case study, and (4) scaling factors allow the adequate rainfall resolution to be selected to obtain a given level of accuracy in the calculation of hydrological response.

  13. Vibration monitoring of a helicopter blade model using the optical fiber distributed strain sensing technique.

    PubMed

    Wada, Daichi; Igawa, Hirotaka; Kasai, Tokio

    2016-09-01

    We demonstrate a dynamic distributed monitoring technique using a long-length fiber Bragg grating (FBG) interrogated by optical frequency domain reflectometry (OFDR) that measures strain at a rate of 150 Hz, with a spatial resolution of 1 mm and a measurement range of 20 m. A 5 m FBG is bonded to a 5.5 m helicopter blade model, and vibration is applied by the step relaxation method. The time-domain responses of the strain distributions are measured, and the blade deflections are calculated from the strain distributions. Frequency response functions are obtained using the time-domain responses of the calculated deflection induced by the preload release, and the modal parameters are retrieved. Experimental results demonstrated the dynamic monitoring performance and the applicability of the OFDR-FBG technique to modal analysis.
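
    A minimal sketch of one common way to go from a surface strain distribution to beam deflection (Euler-Bernoulli assumption: curvature = strain / distance to the neutral axis, then two cumulative trapezoid integrations); this is a generic scheme under those assumptions, not necessarily the authors' exact procedure.

        import numpy as np

        def deflection_from_strain(x_m, strain, c_m):
            """Deflection w(x) from surface strain, with w(0) = w'(0) = 0."""
            kappa = np.asarray(strain) / c_m          # curvature kappa = eps/c
            dx = np.diff(x_m)
            slope = np.concatenate(
                ([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * dx)))
            w = np.concatenate(
                ([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
            return w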

  14. Analysis of the variability of extra-tropical cyclones at the regional scale for the coasts of Northern Germany and investigation of their coastal impacts

    NASA Astrophysics Data System (ADS)

    Schaaf, Benjamin; Feser, Frauke

    2015-04-01

    The evaluation of long-term changes in wind speeds is very important for coastal areas and their protection measures. We therefore analyse wind variability at the regional scale for the coast of Northern Germany. In order to derive changes in storminess, it is essential to analyse long, homogeneous meteorological time series. Wind measurements often suffer from inconsistencies which arise from changes in instrumentation, observation method, or station location. Reanalysis data take such inhomogeneities of the observation data into account and convert the measurements into a consistent, gridded data set with uniform grid spacing and time intervals. This leads to a smooth, homogeneous data set, but with relatively low resolution (about 210 km for the longest reanalysis data set, the NCEP reanalysis starting in 1948). Therefore, a high-resolution regional atmospheric model is used to bring these reanalyses to a higher resolution, using the spectral nudging technique in addition to a dynamical downscaling approach. This method 'nudges' the large spatial scales of the regional climate model towards the reanalysis, while the smaller spatial scales are left unchanged. It has been applied successfully in a number of applications, leading to realistic descriptions of past atmospheric weather. With the regional climate model COSMO-CLM, a very high-resolution data set was calculated for the last 67 years, the period from 1948 until now. The model area is Northern Germany with the coastal area of the North Sea and parts of the Baltic Sea. This is one of the first model simulations on a climate scale with a very high resolution of 2.8 km, so even small-scale effects can be detected. This hindcast simulation offers numerous options for evaluation. One can create wind climatologies for regional areas such as the metropolitan region of Hamburg, or investigate individual storms in a case study. With a filtering and tracking program, the course of individual storms can be tracked and compared with observations. Statistical studies can also be done, calculating percentiles, return periods and other extreme value statistics, as sketched below. Later, with a further nesting simulation, the resolution can be refined to 1 km for individual areas of interest to analyse small islands (such as Foehr or Amrum) and their effects on the atmospheric flow more closely.
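
    As an example of the extreme value statistics mentioned above, a minimal sketch of empirical return periods from annual wind maxima (Weibull plotting position; the input series is illustrative).

        import numpy as np

        def return_periods(annual_maxima):
            """Empirical return period T = (n + 1) / rank, largest value first."""
            x = np.sort(np.asarray(annual_maxima, dtype=float))[::-1]
            ranks = np.arange(1, len(x) + 1)
            return x, (len(x) + 1) / ranks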

  15. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    2017-01-05

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g., Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards, and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction-location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept, we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial-distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction-location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance reduced the radial-distance relative resolution to an average of 11%.

  16. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g., Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards, and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction-location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept, we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial-distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction-location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance reduced the radial-distance relative resolution to an average of 11%.
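
    The distance solve described in this record rests on a simple kinematic constraint: the correlated gamma travels at c while the neutron travels at its slower, measured speed, so the arrival-time difference fixes the path length. A minimal one-dimensional sketch of that constraint; the cone-surface geometry is omitted and the example inputs are illustrative.

        C_M_S = 299_792_458.0  # speed of light

        def source_distance(dt_s, v_neutron_m_s):
            """Distance d such that d/v_n - d/c equals the measured time lag."""
            return dt_s / (1.0 / v_neutron_m_s - 1.0 / C_M_S)

        print(source_distance(14e-9, 2.0e7))  # ~0.3 m for a 14 ns lag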

  17. Time domain para hydrogen induced polarization.

    PubMed

    Ratajczyk, Tomasz; Gutmann, Torsten; Dillenberger, Sonja; Abdulhussaein, Safaa; Frydel, Jaroslaw; Breitzke, Hergen; Bommerich, Ute; Trantzschel, Thomas; Bernarding, Johannes; Magusin, Pieter C M M; Buntkowsky, Gerd

    2012-01-01

    Para hydrogen induced polarization (PHIP) is a powerful hyperpolarization technique, which increases the NMR sensitivity by several orders of magnitude. However, the hyperpolarized signal is created as an anti-phase signal, which necessitates high magnetic field homogeneity and spectral resolution in conventional PHIP schemes. This hampers the application of PHIP enhancement in many fields, for example in food science, materials science or MRI, where low B(0) fields or low B(0) homogeneity decrease spectral resolution, leading to potential signal extinction if in-phase and anti-phase hyperpolarization signals cannot be resolved. Herein, we demonstrate that the echo sequence (45°-τ-180°-τ) enables the acquisition of low-resolution PHIP-enhanced liquid-state NMR signals of phenylpropiolic acid derivatives and phenylacetylene on a low-cost, low-resolution 0.54 T spectrometer. As low-field TD spectrometers are commonly used in industry and biomedicine for the relaxometry of oil-water mixtures, food, nano-particles, and other systems, we compare two variants of para-hydrogen induced polarization with data evaluation in the time domain (TD-PHIP). In both TD-ALTADENA and TD-PASADENA, strong spin echoes could be detected under conditions where usually no anti-phase signals can be measured due to the lack of resolution. The results suggest that time-domain detection of PHIP-enhanced signals opens up new application areas for low-field PHIP hyperpolarization, such as non-invasive compound detection or new contrast agents and biomarkers in low-field Magnetic Resonance Imaging (MRI). Finally, solid-state NMR calculations are presented, which show that the solid-echo (90y-τ-90x-τ) version of the TD-ALTADENA experiment is able to convert up to 10% of the PHIP signal into visible magnetization.

  18. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-07

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37x compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25x and 1.95x faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  19. Using JPSS VIIRS Fire Radiative Power Data to Forecast Biomass Burning Emissions and Smoke Transport by the High Resolution Rapid Refresh Model

    NASA Astrophysics Data System (ADS)

    Ahmadov, R.; Grell, G. A.; James, E.; Alexander, C.; Stewart, J.; Benjamin, S.; McKeen, S. A.; Csiszar, I. A.; Tsidulko, M.; Pierce, R. B.; Pereira, G.; Freitas, S. R.; Goldberg, M.

    2017-12-01

    We present a new real-time smoke modeling system, the High Resolution Rapid Refresh coupled with smoke (HRRR-Smoke), to simulate biomass burning (BB) emissions, plume rise and smoke transport in real time. The HRRR is the NOAA Earth System Research Laboratory's 3 km grid-spacing version of the Weather Research and Forecasting (WRF) model used for weather forecasting. Here we make use of WRF-Chem (the WRF model coupled with chemistry) and simulate fine particulate matter (smoke) emitted by BB. The HRRR-Smoke modeling system ingests fire radiative power (FRP) data from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (S-NPP) satellite to calculate BB emissions. The FRP product is based on processing the 750 m resolution "M" bands. The algorithms for fire detection and FRP retrieval are consistent with those used to generate the MODIS fire detection data. To ingest the VIIRS fire data into the HRRR-Smoke model, text files are generated that provide the location and detection confidence of fire pixels, as well as FRP. The VIIRS FRP data from the text files are processed and remapped over the HRRR-Smoke model domains. We process the FRP data to calculate BB emissions (the smoldering part) and fire size for the model input. In addition, HRRR-Smoke uses the FRP data to simulate the injection height of the flaming emissions using concurrently simulated meteorological fields. Currently, two 3 km resolution domains covering the contiguous US and Alaska are used to simulate smoke in real time. In our presentation, we focus on the CONUS domain. HRRR-Smoke is initialized 4 times per day to forecast smoke concentrations for the next 36 hours. The VIIRS FRP data, as well as near-surface and vertically integrated smoke mass concentrations, are visualized for every forecast hour. These plots are provided to the public via the HRRR-Smoke web page: https://rapidrefresh.noaa.gov/HRRRsmoke/. Model evaluations for a case study are presented, where simulated smoke concentrations are compared with hourly PM2.5 measurements from EPA's Air Quality System network. These comparisons demonstrate the model's ability to simulate high aerosol loadings during major wildfire events in the western US.
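
    A minimal sketch of the linear FRP-to-emission conversion that FRP-based systems typically apply; the coefficient is an illustrative placeholder, not the HRRR-Smoke value.

        def bb_emission_rate(frp_mw, coeff_kg_per_mj=0.02):
            """Smoke emission rate [kg/s] from fire radiative power [MW]."""
            return frp_mw * coeff_kg_per_mj  # MW = MJ/s, so MW * kg/MJ = kg/s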

  20. Atmospheric Sensitivity to Spectral Top-of-Atmosphere Solar Irradiance Perturbations, Using MODTRAN-5 Radiative Transfer Algorithm

    NASA Astrophysics Data System (ADS)

    Anderson, G.; Berk, A.; Harder, G.; Fontenla, J.; Shettle, E.; Pilewski, P.; Kindel, B.; Chetwynd, J.; Gardner, J.; Hoke, M.; Jordan, A.; Lockwood, R.; Felde, G.; Archarya, P.

    2006-12-01

    The opportunity to insert state-of-the-art solar irradiance measurements and calculations, with subtle perturbations, into a narrow-spectral-resolution radiative transfer model has recently been facilitated by the release of MODTRAN-5 (MOD5). The new solar data are from: (1) SORCE satellite measurements of solar variability over the solar rotation cycle, and (2) an ultra-narrow calculation of a new solar source irradiance, extending over the full MOD5 spectral range, from 0.2 um to the far-IR. MODTRAN-5, the MODerate resolution radiance and TRANsmittance code, has been developed collaboratively by the Air Force Research Laboratory and Spectral Sciences, Inc., with a history dating back to LOWTRAN. It includes approximations for all local thermodynamic equilibrium terms associated with molecular, cloud, aerosol and surface components for emission, scattering, and reflectance, including multiple scattering, refraction and a statistical implementation of correlated-k averaging. The band model is based on 0.1 cm-1 (also 1.0, 5.0 and 15.0 cm-1) statistical binning for line centers within the interval, captured through an exact formulation of the full Voigt line shape. Spectroscopic parameters are from HITRAN 2004 with user-defined options for additional gases. Recent validation studies show MOD5 replicates line-by-line brightness temperatures to within ~0.02 K on average and <1.0 K RMS. MOD5 can then serve as a surrogate for a variety of perturbation studies, including the two modes for the solar source function, Io. (1) Data from the Solar Radiation and Climate Experiment (SORCE) satellite mission provide state-of-the-art measurements of UV, visible, near-IR, and total solar radiation on a near real-time basis. These internally consistent estimates of the Sun's output over solar rotation and longer time scales are valuable inputs for studying the effects of the Sun's radiation on Earth's atmosphere and climate. When solar rotation brings bright plage and dark sunspots into view, relative variations are expected to be very small at visible wavelengths, although the absolute power is substantial. SORCE's Spectral Irradiance Monitor measurements are readily included in comparative MOD5 calculations. (2) The embedded solar irradiance within MOD5 must be compatible with the chosen band model resolution binning. By matching resolutions, some issues related to the correlated-k band model parameterizations can be tested. Two high-resolution solar irradiances, the MOD5 default irradiance (Kurucz) and a new compilation associated with the Solar Radiation Physical Modeling project (Fontenla), are compared to address the potential impact of discrepancies between any sets of irradiances. The magnitude of solar variability, as measured and calculated, can lead to subtle changes in heating/cooling rates throughout the atmosphere, as a function of altitude and wavelength. By holding chemical and dynamical responses constant, only controlled distributions of absorbing gases, aerosols and clouds will contribute to the observed 1st-order radiative effects.

  1. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise, hypocenters are perturbed to replicate a starting location away from the "true" location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.

  2. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    PubMed

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.

  3. Effects of microstructure on water imbibition in sandstones using X-ray computed tomography and neutron radiography

    NASA Astrophysics Data System (ADS)

    Zhao, Yixin; Xue, Shanbin; Han, Songbai; Chen, Zhongwei; Liu, Shimin; Elsworth, Derek; He, Linfeng; Cai, Jianchao; Liu, Yuntao; Chen, Dongfeng

    2017-07-01

    Capillary imbibition in variably saturated porous media is important in defining displacement processes and transport in the vadose zone and in low-permeability barriers and reservoirs. Nonintrusive imaging in real time offers the potential to examine critical impacts of heterogeneity and surface properties on imbibition dynamics. Neutron radiography is applied as a powerful imaging tool to observe temporal changes in the spatial distribution of water in porous materials. We analyze water imbibition in both homogeneous and heterogeneous low-permeability sandstones. Dynamic observations of the advance of the imbibition front with time are compared with characterizations of microstructure (via high-resolution X-ray computed tomography (CT)), pore size distribution (mercury intrusion porosimetry), and permeability of the contrasting samples. We use an automated method to detect the progress of the wetting front with time and link this to square-root-of-time behavior. These data are used to estimate the effect of microstructure on water sorptivity from a modified Lucas-Washburn equation. Moreover, a model is established to calculate the maximum capillary diameter by modifying the Hagen-Poiseuille and Young-Laplace equations based on fractal theory. Comparing the calculated maximum capillary diameter with the maximum pore diameter (from high-resolution CT) shows congruence between the two independent methods for the homogeneous silty sandstone, but less so for the heterogeneous sandstone. Finally, we use these data to link the observed response with the physical characteristics of the contrasting media (homogeneous versus heterogeneous) and to demonstrate the sensitivity of sorptivity expressly to tortuosity rather than porosity in low-permeability sandstones.
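
    A minimal sketch of the square-root-of-time fit used to estimate sorptivity from tracked wetting-front positions (least squares through the origin for l(t) = S * sqrt(t); the data arrays are illustrative).

        import numpy as np

        def sorptivity(t_s, front_mm):
            """Least-squares S in l = S*sqrt(t), forced through the origin."""
            rt = np.sqrt(np.asarray(t_s, dtype=float))
            l = np.asarray(front_mm, dtype=float)
            return float((rt @ l) / (rt @ rt))  # units: mm / sqrt(s)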

  4. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The newly developed smartphone application, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies: a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.

  5. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
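
    One way to read the permutation idea is as a rejection-sampled ensemble of age-depth models: draw each dated level within its dating uncertainty and keep only stratigraphically consistent (monotone) draws. A minimal sketch under that assumption; all names and inputs are illustrative, and the authors' actual permutation scheme may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        def age_depth_ensemble(depths_cm, ages_yr, age_sd_yr, n=1000):
            """Draw n candidate age models; keep those without age reversals."""
            draws = rng.normal(ages_yr, age_sd_yr, size=(n, len(ages_yr)))
            ok = np.all(np.diff(draws, axis=1) > 0, axis=1)
            return draws[ok]  # one monotone age model per row

    Rates and calibrations can then be computed per ensemble member, and the spread across members yields the kind of realistic error estimates the abstract calls for.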

  6. Signal and background considerations for the MRSt on the National Ignition Facility (NIF).

    PubMed

    Wink, C W; Frenje, J A; Hilsabeck, T J; Bionta, R; Khater, H Y; Gatu Johnson, M; Kilkenny, J D; Li, C K; Séguin, F H; Petrasso, R D

    2016-11-01

    A Magnetic Recoil Spectrometer (MRSt) has been conceptually designed for time-resolved measurements of the neutron spectrum at the National Ignition Facility. Using the MRSt, the goals are to measure the time evolution of the spectrum with a time resolution of ~20 ps and an absolute accuracy better than 5%. To meet these goals, a detailed understanding and optimization of the signal and background characteristics is required. Through ion-optics, MCNP simulations, and detector-response calculations, it is demonstrated that these goals, and a signal-to-background ratio of >5-10 for the down-scattered neutron measurement, are met if the background of ambient neutrons and gammas at the MRSt is reduced 50-100 times.

  7. Towards real-time medical diagnostics using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Bjorgan, Asgeir; Randeberg, Lise L.

    2015-07-01

    Hyperspectral imaging provides non-contact, high-resolution spectral images with substantial diagnostic potential, for example for the diagnosis and early detection of arthritis in finger joints. Processing speed is currently a limitation for clinical use of the technique. A real-time system for analysis and visualization using GPU processing and threaded CPU processing is presented. Images showing blood oxygenation, blood volume fraction, and vessel-enhanced images are among the data calculated in real time. This study shows the potential of real-time processing in this context. A combination of the processing modules will be used in the detection of arthritic finger joints from hyperspectral reflectance and transmittance data.

  8. Far-IR measurements at Cerro Toco, Chile: FIRST, REFIR, and AERI

    NASA Astrophysics Data System (ADS)

    Cageao, Richard P.; Alford, J. Ashley; Johnson, David G.; Kratz, David P.; Mlynczak, Martin G.

    2010-09-01

    In mid-2009, the Radiative Heating in the Underexplored Bands Campaign II (RHUBC-II) was conducted from Cerro Toco, Chile, a high, dry, remote mountain plateau (23°S, 67.8°W, at 5.4 km) in the Atacama Desert of Northern Chile. From this site, dominant IR water vapor absorption bands and continuum, saturated when viewed from the surface at lower altitudes or in less dry locales, were investigated in detail, elucidating infrared (IR) absorption and emission in the atmosphere. Three Fourier Transform InfraRed (FTIR) instruments were at the site: the Far-Infrared Spectroscopy of the Troposphere (FIRST), the Radiation Explorer in the Far Infrared (REFIR), and the Atmospheric Emitted Radiance Interferometer (AERI). In a side-by-side comparison, these measured atmospheric downwelling radiation, with overlapping spectral coverage from 5 to 100 μm (2000 to 100 cm-1) and instrument spectral resolutions from 0.5 to 0.643 cm-1, unapodized. In addition to the FTIR and other ground-based IR and microwave instrumentation, pressure/temperature/relative humidity sondes, for atmospheric profiles to 18 km, were launched from the site several times a day. The derived water vapor profiles, determined at times matching the FTIR measurement times, were used to model atmospheric radiative transfer. Comparisons of instrument data, all at the same spectral resolution, and model calculations are presented, along with a technique for determining adjustments to the continuum models used in line-by-line calculations. This was a major objective of the campaign.

  9. Far-IR Measurements at Cerro Toco, Chile: FIRST, REFIR, and AERI

    NASA Technical Reports Server (NTRS)

    Cageao, Richard P.; Alford, J. Ashley; Johnson, David G.; Kratz, David P.; Mlynczak, Martin G.

    2010-01-01

    In mid-2009, the Radiative Heating in the Underexplored Bands Campaign II (RHUBC-II) was conducted from Cerro Toco, Chile, a high, dry, remote mountain plateau (23°S, 67.8°W, at 5.4 km) in the Atacama Desert of Northern Chile. From this site, dominant IR water vapor absorption bands and continuum, saturated when viewed from the surface at lower altitudes or in less dry locales, were investigated in detail, elucidating IR absorption and emission in the atmosphere. Three FTIR instruments were at the site: the Far-Infrared Spectroscopy of the Troposphere (FIRST), the Radiation Explorer in the Far Infrared (REFIR), and the Atmospheric Emitted Radiance Interferometer (AERI). In a side-by-side comparison, these measured atmospheric downwelling radiation, with overlapping spectral coverage from 5 to 100 μm (2000 to 100 cm-1) and instrument spectral resolutions from 0.5 to 0.64 cm-1, unapodized. In addition to the FTIR and other ground-based IR and microwave instrumentation, pressure/temperature/relative humidity sondes, for atmospheric profiles to 18 km, were launched from the site several times a day. The derived water vapor profiles, determined at times matching the FTIR measurement times, were used to model atmospheric radiative transfer. Comparisons of instrument data, all at the same spectral resolution, and model calculations are presented, along with a technique for determining adjustments to the continuum models used in line-by-line calculations. This was a major objective of the campaign.

  10. A tool for NDVI time series extraction from wide-swath remotely sensed images

    NASA Astrophysics Data System (ADS)

    Li, Zhishan; Shi, Runhe; Zhou, Cong

    2015-09-01

    The Normalized Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring vegetation coverage on the land surface. The time series features of NDVI are capable of reflecting dynamic changes of various ecosystems. Calculating NDVI from Moderate Resolution Imaging Spectroradiometer (MODIS) and other wide-swath remotely sensed images provides an important way to monitor the spatial and temporal characteristics of large-scale NDVI. However, difficulties still exist for ecologists in extracting such information correctly and efficiently, because of the specialist processing steps required on the original remote sensing images, including radiometric calibration, geometric correction, multiple data composition and curve smoothing. In this study, we developed an efficient and convenient online toolbox with a friendly graphical user interface for non-remote-sensing professionals who want to extract NDVI time series. It is technically based on Java Web and Web GIS. Moreover, the Struts, Spring and Hibernate frameworks (SSH) are integrated in the system for easy maintenance and expansion. Latitude, longitude and time period are the key inputs that users need to provide, and the NDVI time series are calculated automatically.
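
    For completeness, the index itself is the standard ratio NDVI = (NIR - Red) / (NIR + Red). A minimal sketch over numpy reflectance arrays; the band arrays are illustrative inputs, not tied to this toolbox's API.

        import numpy as np

        def ndvi(nir, red):
            """NDVI, returning 0 where the band sum is zero."""
            nir = np.asarray(nir, dtype=float)
            red = np.asarray(red, dtype=float)
            denom = nir + red
            out = np.zeros_like(denom)
            np.divide(nir - red, denom, out=out, where=denom != 0)
            return out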

  11. Investigation of optimal acquisition time of myocardial perfusion scintigraphy using cardiac focusing-collimator

    NASA Astrophysics Data System (ADS)

    Niwa, Arisa; Abe, Shinji; Fujita, Naotoshi; Kono, Hidetaka; Odagawa, Tetsuro; Fujita, Yusuke; Tsuchiya, Saki; Kato, Katsuhiko

    2015-03-01

    Recently, myocardial perfusion SPECT imaging using a cardiac focusing-collimator (CF) has been developed in the field of nuclear cardiology. We have previously investigated the basic characteristics of CF using physical phantoms. This study aimed at determining the acquisition time for CF that yields SPECT images equivalent to those acquired by the conventional method in 201TlCl myocardial perfusion SPECT. A Siemens Symbia T6 was used, with a torso phantom equipped with cardiac, pulmonary, and hepatic components. 201TlCl solution was filled into the left ventricular (LV) myocardium and liver. Each of CF, the low-energy high-resolution collimator (LEHR), and the low-medium-energy general-purpose collimator (LMEGP) was mounted on the SPECT equipment. Data acquisitions were made with the center of the phantom taken as the center of the heart for CF, at various acquisition times. The acquired data were reconstructed, and polar maps were created from the reconstructed images. The coefficient of variation (CV) was calculated from the mean counts and standard deviations determined on the polar maps. When CF was used, CV was lower at longer acquisition times. The CV calculated from the polar maps acquired using CF at 2.83 min of acquisition time was equivalent to the CV calculated from those acquired using LEHR over a 180° acquisition range at 20 min of acquisition time.
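
    The uniformity metric used above is the ordinary coefficient of variation over polar-map counts. A minimal sketch; the segment counts are illustrative.

        import numpy as np

        def coefficient_of_variation(segment_counts):
            """CV = sample standard deviation / mean of polar-map counts."""
            c = np.asarray(segment_counts, dtype=float)
            return float(c.std(ddof=1) / c.mean())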

  12. High-resolution time series of Pseudomonas aeruginosa gene expression and rhamnolipid secretion through growth curve synchronization.

    PubMed

    van Ditmarsch, Dave; Xavier, João B

    2011-06-17

    Online spectrophotometric measurements allow monitoring dynamic biological processes with high time resolution. By contrast, numerous other methods require laborious treatment of samples and can only be carried out offline. Integrating both types of measurement would allow analyzing biological processes more comprehensively. A typical example of this problem is acquiring quantitative data on rhamnolipid secretion by the opportunistic pathogen Pseudomonas aeruginosa. P. aeruginosa cell growth can be measured by optical density (OD600) and gene expression can be measured using reporter fusions with a fluorescent protein, allowing high-time-resolution monitoring. However, measuring the secreted rhamnolipid biosurfactants requires laborious sample processing, which makes this an offline measurement. Here, we propose a method, inspired by a model of exponential cell growth, to integrate growth curve data with endpoint measurements of secreted metabolites. If serially diluting an inoculum gives reproducible time series that are simply shifted in time, then a time series of endpoint measurements can be reconstructed using the calculated time shifts between dilutions. We illustrate the method using measured rhamnolipid secretion by P. aeruginosa as the endpoint measurements and integrate them with high-resolution growth curves measured by OD600 and with expression of rhamnolipid synthesis genes monitored using a reporter fusion. Two-fold serial dilution allowed integrating rhamnolipid measurements at a frequency of ~0.4 h⁻¹ with highly time-resolved data measured at a frequency of 6 h⁻¹. We show how this simple method can be used in combination with mutants lacking specific genes in rhamnolipid synthesis or quorum sensing regulation to acquire rich dynamic data on P. aeruginosa virulence regulation. Additionally, the linear relation between the ratio of inocula and the time shift between curves produces high-precision measurements of maximum specific growth rates, which were determined with a precision of ~5.4%. Growth curve synchronization thus allows integration of rich time-resolved data with endpoint measurements to produce time-resolved quantitative measurements. Such data can be valuable for unveiling the dynamic regulation of virulence in P. aeruginosa. More generally, growth curve synchronization can be applied to many biological systems, helping to overcome a key obstacle in studying dynamic regulation: the scarcity of quantitative time-resolved data.
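
    A minimal sketch of the synchronization idea, assuming ideal exponential growth: a d-fold dilution delays the growth curve by ln(d)/mu, so endpoint samples taken from a dilution series at a single wall-clock time map onto distinct effective culture times. All values below are hypothetical.

      import numpy as np

      mu = 0.7       # assumed maximum specific growth rate (1/h)
      t_end = 12.0   # wall-clock time at which all endpoint samples are taken (h)
      dilution_factors = np.array([1, 2, 4, 8, 16])  # two-fold serial dilutions

      # A d-fold dilution shifts the growth curve later by ln(d)/mu, so its
      # effective culture time at t_end is t_end - ln(d)/mu.
      effective_times = t_end - np.log(dilution_factors) / mu

      rhamnolipid_endpoint = np.array([5.2, 4.1, 3.0, 1.9, 0.8])  # hypothetical (a.u.)
      for t, c in zip(effective_times, rhamnolipid_endpoint):
          print(f"effective time {t:5.2f} h -> rhamnolipid {c:.1f} a.u.")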

  13. High Resolution Geological Site Characterization Utilizing Ground Motion Data

    DTIC Science & Technology

    1992-06-26

    Hayward, 1992). Acquisition: The source characterization array was composed of 28 stations evenly distributed on the circumference of a ... of analog anti-alias filters, no prefiltering was applied during acquisition. Results: We deployed 9 different sources within the source ... calculated using a 1024-point Hamming window applied to the original 1000-point detrended and padded time series. These are then contoured as a ...

  14. Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation

    DTIC Science & Technology

    2009-09-01

    high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non-resolvable space object ... computing power and time. The second way, which we are using here, is to create simpler models of satellite bodies and use albedo-area calculations ... their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by ...

  15. Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector.

    PubMed

    Fan, Peng; Ma, Tianyu; Wei, Qingyang; Yao, Rutao; Liu, Yaqiang; Wang, Shi

    2016-02-07

    The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated 4 crystal surface finishing options as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options on a DER detector. We also tested one linear and one logarithmic DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector improves DOI resolution but degrades energy resolution; (2) a crystal surface finishing with diffuse polishing combined with a specular reflector appears to be a favorable candidate, offering a good balance of DOI and energy resolution; and (3) the linear and logarithmic DOI calculation algorithms show overall comparable DOI error, with the linear algorithm better for photon interactions near the ends of the crystal and the logarithmic algorithm better near the center. These results provide useful guidance for DER DOI detector design in choosing the crystal surface finishing and DOI calculation method.
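
    As a sketch of the two estimator families named above (generic textbook forms with hypothetical parameters, not the authors' fitted calibration): in a dual-ended readout, the interaction depth is estimated from the two end signals A1 and A2, either linearly from their sharing ratio or from the logarithm of their ratio.

      import numpy as np

      L = 20.0  # assumed crystal length (mm)

      def doi_linear(a1, a2):
          """Linear estimator: depth (from the A2 end) proportional to the sharing ratio."""
          return L * a1 / (a1 + a2)

      def doi_log(a1, a2, k=8.0):
          """Logarithmic estimator; k is a hypothetical calibration constant (mm).
          Depth increases toward the A1 end, matching doi_linear's convention."""
          return L / 2.0 + k * np.log(a1 / a2)

      a1, a2 = 450.0, 300.0  # hypothetical photosensor signals
      print(doi_linear(a1, a2), doi_log(a1, a2))  # both place the hit past mid-crystal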

  16. Performance of post-processing algorithms for rainfall intensity using measurements from tipping-bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Stagnaro, Mattia; Colli, Matteo; Lanza, Luca Giovanni; Chan, Pak Wai

    2016-11-01

    Eight rainfall events recorded from May to September 2013 at Hong Kong International Airport (HKIA) were selected to investigate the performance of post-processing algorithms used to calculate the rainfall intensity (RI) from tipping-bucket rain gauges (TBRGs). We assumed a drop-counter catching-type gauge as the working reference and compared its rainfall intensity measurements with two calibrated TBRGs operated at a time resolution of 1 min. The two TBRGs differ in their internal mechanics, one being a traditional single-layer dual-bucket assembly, while the other has two layers of buckets. The drop-counter gauge operates at a time resolution of 10 s, while the time of tipping is recorded for the two TBRGs. The post-processing algorithms employed for the two TBRGs are based on the assumption that the tip volume is uniformly distributed over the inter-tip period. A data series for an ideal TBRG is reconstructed using the virtual times of tipping derived from the drop-counter data. From the comparison between the ideal gauge and the measurements from the two real TBRGs, the performance of different post-processing and correction algorithms is statistically evaluated over the set of recorded rain events. The improvement obtained by adopting the inter-tip time algorithm in the calculation of the RI is confirmed. However, by comparing the performance of the real and ideal TBRGs, the beneficial effect of the inter-tip algorithm is shown to be relevant for the mid-low range (6-50 mm h⁻¹) of rainfall intensity values, where sampling errors prevail, while its role vanishes with increasing RI in the range where mechanical errors prevail.
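
    A minimal sketch of the inter-tip-time idea described above, assuming the nominal tip volume is spread uniformly over the interval between consecutive tips; the tip depth and timestamps below are hypothetical.

      import numpy as np

      tip_depth_mm = 0.2  # assumed rainfall depth per bucket tip (mm)
      tip_times_s = np.array([0.0, 55.0, 98.0, 130.0, 150.0])  # hypothetical tip timestamps

      inter_tip_s = np.diff(tip_times_s)
      # Intensity over each inter-tip period, converted from mm/s to mm/h
      ri_mm_per_h = tip_depth_mm / inter_tip_s * 3600.0
      print(ri_mm_per_h)  # one RI estimate per inter-tip interval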

  17. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC-based systems because of potentially high computational cost and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus, for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement; please see Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214

  18. Quasiclassical treatment of the Auger effect in slow ion-atom collisions

    NASA Astrophysics Data System (ADS)

    Frémont, F.

    2017-09-01

    A quasiclassical model based on the resolution of Hamilton's equations of motion is used to obtain evidence for Auger electron emission following double-electron capture in 150-keV Ne¹⁰⁺ + He collisions. Electron-electron interaction is taken into account during the collision by using a pure Coulombic potential. To ensure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulombic potential. First, single- and double-electron capture are determined and compared with previous experiments and theories. Then, the evolution with integration time is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4×10⁻³ a.u., in very good agreement with the average lifetime deduced from experiments and from a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a pure quantum effect.
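
    For illustration, the lifetime-extraction step can be sketched as a least-squares fit of a saturating exponential to yield-versus-integration-time data. The counts below are synthetic, the model form is an assumption consistent with the description above, and scipy is assumed available; this is not the authors' code.

      import numpy as np
      from scipy.optimize import curve_fit

      def saturating_exp(t, n_inf, tau):
          """Autoionization yield growing as N(t) = n_inf * (1 - exp(-t/tau))."""
          return n_inf * (1.0 - np.exp(-t / tau))

      t = np.linspace(0.0, 2e-2, 40)  # integration times (a.u.), hypothetical
      rng = np.random.default_rng(0)
      counts = saturating_exp(t, 1000.0, 4.4e-3) + rng.normal(0.0, 5.0, t.size)

      (n_inf_fit, tau_fit), _ = curve_fit(saturating_exp, t, counts, p0=(900.0, 3e-3))
      print(f"fitted lifetime: {tau_fit:.2e} a.u.")  # ~4.4e-3 a.u.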

  19. Chromatic Aberration Correction for Atomic Resolution TEM Imaging from 20 to 80 kV.

    PubMed

    Linck, Martin; Hartel, Peter; Uhlemann, Stephan; Kahl, Frank; Müller, Heiko; Zach, Joachim; Haider, Max; Niestadt, Marcel; Bischoff, Maarten; Biskupek, Johannes; Lee, Zhongbo; Lehnert, Tibor; Börrnert, Felix; Rose, Harald; Kaiser, Ute

    2016-08-12

    Atomic resolution in transmission electron microscopy of thin and light-atom materials requires a rigorous reduction of the beam energy to reduce knock-on damage. However, at the same time, the chromatic aberration dramatically deteriorates the resolution of the TEM image. Within the framework of the SALVE project, we introduce a newly developed Cc/Cs corrector that is capable of correcting both the chromatic and the spherical aberration in the range of accelerating voltages from 20 to 80 kV. The corrector allows correcting axial aberrations up to fifth order as well as the dominating off-axial aberrations. Over the entire voltage range, optimum phase-contrast imaging conditions for weak signals from light atoms can be adjusted for an optical aperture of at least 55 mrad. The information transfer within this aperture is no longer limited by chromatic aberrations. We demonstrate the performance of the microscope using the examples of 30 kV phase-contrast TEM images of graphene and molybdenum disulfide, showing unprecedented contrast and resolution that matches image calculations.

  20. A Ground-Based Profiling Differential Absorption LIDAR System for Measuring CO2 in the Planetary Boundary Layer

    NASA Technical Reports Server (NTRS)

    Andrews, Arlyn E.; Burris, John F.; Abshire, James B.; Krainak, Michael A.; Riris, Haris; Sun, Xiao-Li; Collatz, G. James

    2002-01-01

    Ground-based LIDAR observations can potentially provide continuous profiles of CO2 through the planetary boundary layer and into the free troposphere. We will present initial atmospheric measurements from a prototype system that is based on components developed by the telecommunications industry. Preliminary measurements and instrument performance calculations indicate that an optimized differential absorption LIDAR (DIAL) system will be capable of providing continuous hourly averaged profiles with 250 m vertical resolution and better than 1 ppm precision at 1 km. Precision increases (decreases) at lower (higher) altitudes and is directly proportional to altitude resolution and acquisition time. Thus, precision can be improved if temporal or vertical resolution is sacrificed. Our approach measures absorption by CO2 of pulsed laser light at 1.6 microns backscattered from atmospheric aerosols. Aerosol concentrations in the planetary boundary layer are relatively high and are expected to provide adequate signal returns for the desired resolution. The long-term goal of the project is to develop a rugged, autonomous system using only commercially available components that can be replicated inexpensively for deployment in a monitoring network.
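
    A sketch of the standard textbook DIAL retrieval that such a system relies on (not the authors' processing chain): the CO2 number density in a range cell follows from the ratio of on-line and off-line backscatter returns at the cell edges. All input values below are hypothetical.

      import numpy as np

      delta_r = 250.0         # range cell / vertical resolution (m)
      delta_sigma = 5.0e-27   # assumed on-off absorption cross-section difference (m^2)

      def dial_number_density(p_on, p_off, delta_r, delta_sigma):
          """Standard DIAL equation for one range cell [r, r + delta_r].

          p_on, p_off: (near, far) backscatter powers at the cell edges for the
          on-line and off-line wavelengths, respectively.
          """
          (p_on_near, p_on_far), (p_off_near, p_off_far) = p_on, p_off
          return np.log((p_off_far * p_on_near) /
                        (p_off_near * p_on_far)) / (2.0 * delta_r * delta_sigma)

      n = dial_number_density((1.00e-6, 0.80e-6), (1.05e-6, 0.95e-6), delta_r, delta_sigma)
      print(f"CO2 number density: {n:.3e} m^-3")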

  1. Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation

    NASA Astrophysics Data System (ADS)

    Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra

    2017-12-01

    Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.

  2. Systematic study and comparison of photonic nanojets produced by dielectric microparticles in 2D- and 3D-spatial configurations

    NASA Astrophysics Data System (ADS)

    Geints, Yu E.; Zemlyanov, A. A.; Minin, O. V.; Minin, I. V.

    2018-06-01

    We present a systematic study of the key characteristics (field intensity enhancement, spatial extent) of 2D and 3D photonic nanojets (PNJs) produced by geometrically regular micron-sized dielectric particles illuminated by a plane laser wave. By means of finite-difference time-domain calculations, we highlight the differences and similarities between PNJs in these two spatial configurations for curved (sphere, circular cylinder) and rectangular (cube, square bar) scatterers. Our findings can be useful, for example, for the design of particle-based high-resolution imaging, because the spatial resolution of such systems might be further controlled by optimizing the refractive index contrast and the geometrical shape of the particle lens.

  3. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to solving for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency most limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation is the limiting factor in the resolution of source parameters. -from Author

  4. Resolution Enhanced Magnetic Sensing System for Wide Coverage Real Time UXO Detection

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Bregman, Yuri; Salomonski, Nizan; Zafrir, Hovav

    2012-09-01

    In this paper we present a new high-resolution automatic detection algorithm based upon a wavelet transform, and validate it in marine experiments. The proposed approach allows automatic detection at very low signal-to-noise ratios. The computational load is reduced, the magnetic trend is suppressed, and the probability of detection / false-alarm rate can easily be controlled. Moreover, the algorithm makes it possible to distinguish between close targets. In the algorithm we use the physical dependence of the magnetic field of a magnetic dipole to define a wavelet mother function that can later detect magnetic targets modeled as dipoles and embedded in noisy surroundings, at improved resolution. The proposed algorithm was first applied to synthesized targets and then validated in field experiments involving a marine surface-floating system for wide-coverage real-time unexploded ordnance (UXO) detection and mapping. The detection probability achieved in the marine experiment was above 90%. The horizontal radial error of most of the detected targets was only 16 m, and two baseline targets immersed about 20 m from one another could easily be distinguished.

  5. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) multi-resolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where the solution changes rapidly. The application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines, and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil needed for accurate calculation of spatial derivatives. Since the common approach for wavelets and splines uses a finite-difference operator, we developed here a collocation operator that includes both solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  6. Optimization of Collision Detection in Surgical Simulations

    NASA Astrophysics Data System (ADS)

    Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu

    2014-11-01

    Just as flight and spaceship simulators already represent a standard, we expect that soon enough surgical simulators will become a standard in medical applications. A simulation's quality is strongly related to the image quality as well as to the degree of realism of the simulation. Increased quality requires increased resolution and increased rendering speed but, more importantly, a larger number of mathematical operations. Making this possible requires not only more powerful computers but, above all, more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; optimization of collision detection is therefore critical for the speed of a simulator and hence for its quality.
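
    One standard broad-phase optimization, shown here as a generic sketch rather than the authors' implementation, is to test cheap axis-aligned bounding boxes (AABBs) first, so that the expensive narrow-phase contact calculation runs only for object pairs whose boxes overlap.

      from dataclasses import dataclass

      @dataclass
      class AABB:
          """Axis-aligned bounding box given by its min and max corners in 3-D."""
          min_pt: tuple
          max_pt: tuple

          def overlaps(self, other: "AABB") -> bool:
              # Boxes overlap only if their extents overlap on every axis.
              return all(
                  self.min_pt[i] <= other.max_pt[i] and other.min_pt[i] <= self.max_pt[i]
                  for i in range(3)
              )

      scalpel = AABB((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
      tissue = AABB((0.5, 0.5, 0.5), (2.0, 2.0, 2.0))
      print(scalpel.overlaps(tissue))  # True: only now run the costly narrow-phase test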

  7. Watching a signaling protein function in real time via 100-ps time-resolved Laue crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schotte, Friedrich; Cho, Hyun Sun; Kaila, Ville R.I.

    2012-11-06

    To understand how signaling proteins function, it is necessary to know the time-ordered sequence of events that lead to the signaling state. We recently developed on the BioCARS 14-IDB beamline at the Advanced Photon Source the infrastructure required to characterize structural changes in protein crystals with near-atomic spatial resolution and 150-ps time resolution, and have used this capability to track the reversible photocycle of photoactive yellow protein (PYP) following trans-to-cis photoisomerization of its p-coumaric acid (pCA) chromophore over 10 decades of time. The first of four major intermediates characterized in this study is highly contorted, with the pCA carbonyl rotated nearly 90° out of the plane of the phenolate. A hydrogen bond between the pCA carbonyl and the Cys69 backbone constrains the chromophore in this unusual twisted conformation. Density functional theory calculations confirm that this structure is chemically plausible and corresponds to a strained cis intermediate. This unique structure is short-lived (~600 ps), has not been observed in prior cryocrystallography experiments, and is the progenitor of intermediates characterized in previous nanosecond time-resolved Laue crystallography studies. The structural transitions unveiled during the PYP photocycle include trans/cis isomerization, the breaking and making of hydrogen bonds, formation/relaxation of strain, and gated water penetration into the interior of the protein. This mechanistically detailed, near-atomic resolution description of the complete PYP photocycle provides a framework for understanding signal transduction in proteins, and for assessing and validating theoretical/computational approaches in protein biophysics.

  8. Quality of terrestrial data derived from UAV photogrammetry: a case study of the Hetao irrigation district in northern China

    NASA Astrophysics Data System (ADS)

    Zhang, Hongming; Baartman, Jantiene E. M.; Yang, Xiaomei; Gai, Lingtong; Geissen, Violette

    2017-04-01

    Most crops in northern China are irrigated, but the topography affects water use, soil erosion, runoff and yields. Technologies for collecting high-resolution topographic data are essential for adequately assessing these effects. Ground surveys and light detection and ranging techniques have good accuracy, but data acquisition can be time-consuming and expensive for large catchments. Recent rapid technological development has provided new, flexible, high-resolution methods for collecting topographic data, such as photogrammetry using unmanned aerial vehicles (UAVs). The accuracy of UAV photogrammetry for generating high-resolution digital elevation models (DEMs) and for determining the width of irrigation channels, however, has not been assessed. We used a fixed-wing UAV to collect high-resolution (0.15 m) topographic data for the Hetao irrigation district, the third largest irrigation district in China. We surveyed 112 ground checkpoints (GCPs) using a real-time kinematic global positioning system to evaluate the accuracy of the DEMs and channel widths. A comparison of manually measured channel widths with the widths derived from the DEMs indicated vertical and horizontal root mean square errors of 13.0 and 7.9 cm, respectively. UAV photogrammetric data can thus be used for land surveying, digital mapping, calculating channel capacity, monitoring crops, and predicting yields, with the advantages of economy, speed, and ease.

  9. Prospects for detecting oxygen, water, and chlorophyll on an exo-Earth

    PubMed Central

    Brandt, Timothy D.; Spiegel, David S.

    2014-01-01

    The goal of finding and characterizing nearby Earth-like planets is driving many NASA high-contrast flagship mission concepts, the latest of which is known as the Advanced Technology Large-Aperture Space Telescope (ATLAST). In this article, we calculate the optimal spectral resolution R = λ/δλ and minimum signal-to-noise ratio per spectral bin (SNR), two central design requirements for a high-contrast space mission, to detect signatures of water, oxygen, and chlorophyll on an Earth twin. We first develop a minimally parametric model and demonstrate its ability to fit synthetic and observed Earth spectra; this allows us to measure the statistical evidence for each component’s presence. We find that water is the easiest to detect, requiring a resolution R ≳ 20, while the optimal resolution for oxygen is likely to be closer to R = 150, somewhat higher than the canonical value in the literature. At these resolutions, detecting oxygen will require approximately two times the SNR as water. Chlorophyll requires approximately six times the SNR as oxygen for an Earth twin, only falling to oxygen-like levels of detectability for a low cloud cover and/or a large vegetation covering fraction. This suggests designing a mission for sensitivity to oxygen and adopting a multitiered observing strategy, first targeting water, then oxygen on the more favorable planets, and finally chlorophyll on only the most promising worlds. PMID:25197095
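
    For intuition about the resolution requirement R = λ/δλ quoted above, the implied spectral bin width follows directly; the choice of the 760 nm O2 A-band wavelength below is an illustrative assumption.

      wavelength_nm = 760.0  # O2 A-band, chosen for illustration
      for R in (20, 70, 150):
          # Bin width delta-lambda = lambda / R
          print(f"R = {R:3d} -> bin width {wavelength_nm / R:.1f} nm")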

  10. Prospects for detecting oxygen, water, and chlorophyll on an exo-Earth.

    PubMed

    Brandt, Timothy D; Spiegel, David S

    2014-09-16

    The goal of finding and characterizing nearby Earth-like planets is driving many NASA high-contrast flagship mission concepts, the latest of which is known as the Advanced Technology Large-Aperture Space Telescope (ATLAST). In this article, we calculate the optimal spectral resolution R = λ/δλ and minimum signal-to-noise ratio per spectral bin (SNR), two central design requirements for a high-contrast space mission, to detect signatures of water, oxygen, and chlorophyll on an Earth twin. We first develop a minimally parametric model and demonstrate its ability to fit synthetic and observed Earth spectra; this allows us to measure the statistical evidence for each component's presence. We find that water is the easiest to detect, requiring a resolution R ≳ 20, while the optimal resolution for oxygen is likely to be closer to R = 150, somewhat higher than the canonical value in the literature. At these resolutions, detecting oxygen will require approximately two times the SNR as water. Chlorophyll requires approximately six times the SNR as oxygen for an Earth twin, only falling to oxygen-like levels of detectability for a low cloud cover and/or a large vegetation covering fraction. This suggests designing a mission for sensitivity to oxygen and adopting a multitiered observing strategy, first targeting water, then oxygen on the more favorable planets, and finally chlorophyll on only the most promising worlds.

  11. Calculation of the spatial resolution in two-photon absorption spectroscopy applied to plasma diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Lechuga, M.; Laser Processing Group, Instituto de Óptica “Daza de Valdés,” CSIC, 28006-Madrid; Fuentes, L. M.

    2014-10-07

    We report a detailed characterization of the spatial resolution provided by two-photon absorption spectroscopy suited for plasma diagnosis via the 1S-2S transition of atomic hydrogen, for optogalvanic detection and laser-induced fluorescence (LIF). A precise knowledge of the spatial resolution is crucial for a correct interpretation of measurements if the plasma parameters to be analysed undergo strong spatial variations. The present study is based on a novel approach which provides a reliable and realistic determination of the spatial resolution. Measured irradiance distributions of the laser beam waists in the overlap volume, provided by a high-resolution UV camera, are employed to resolve coupled rate equations accounting for two-photon excitation, fluorescence decay and ionization. The resulting three-dimensional yield distributions reveal in detail the spatial resolution for optogalvanic and LIF detection and the related saturation due to depletion. Two-photon absorption profiles broader than the Fourier-transform-limited laser bandwidth are also incorporated in the calculations. The approach allows an accurate analysis of the spatial resolution present in recent and future measurements.

  12. Prototype design of singles processing unit for the small animal PET

    NASA Astrophysics Data System (ADS)

    Deng, P.; Zhao, L.; Lu, J.; Li, B.; Dong, R.; Liu, S.; An, Q.

    2018-05-01

    Positron Emission Tomography (PET) is an advanced clinical diagnostic imaging technique in nuclear medicine. Small animal PET is increasingly used for studying animal models of disease, new drugs and new therapies. A prototype Singles Processing Unit (SPU) for a small animal PET system was designed to obtain the time, energy, and position information. The energy and position are actually calculated through high-precision charge measurement, based on amplification, shaping, A/D conversion and area calculation in the digital signal processing domain. Analysis and simulations were also conducted to optimize the key parameters in the system design. Initial tests indicate that the charge and time precision are better than 3‰ FWHM and 350 ps FWHM respectively, while the position resolution is better than 3.5‰ FWHM. Combined tests of the SPU prototype with the PET detector indicate that the system time precision is better than 2.5 ns, while the flood map and energy spectra agreed well with expectations.
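
    A generic sketch of the area-based charge measurement described above (hypothetical waveform values, not the SPU firmware): after shaping and A/D conversion, the charge is proportional to the baseline-subtracted area of the digitized pulse.

      import numpy as np

      fs = 100e6  # assumed ADC sampling rate (Hz)
      samples = np.array([2.0, 2.1, 1.9, 5.0, 12.0, 9.0, 4.0, 2.5, 2.0, 2.1])  # ADC codes

      baseline = samples[:3].mean()             # baseline from pre-pulse samples
      area = (samples - baseline).sum() / fs    # integral of the shaped pulse
      print(f"pulse area (proportional to charge): {area:.3e}")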

  13. Simulation of Radar-Backscattering from Phobos - A Contribution to the Experiment MARSIS aboard MarsExpress

    NASA Astrophysics Data System (ADS)

    Plettemeier, D.; Hahnel, R.; Hegler, S.; Safaeinili, A.; Orosei, R.; Cicchetti, A.; Plaut, J.; Picardi, G.

    2009-04-01

    MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) on board MarsExpress is the first and so far the only space-borne radar to have observed the Martian moon Phobos. Radar echoes were measured for different flyby trajectories. The primary aim of the low-frequency sounding of Phobos is to prove the feasibility of deep sounding into the crust of Phobos. In this poster we present a numerical method that allows a very precise computation of radar echoes backscattered from the surface of large objects. The software is based on a combination of a physical optics calculation of the surface scattering of the radar target and the Method of Moments to calculate the radiation pattern of the whole space-borne radar system. The calculation of the frequency-dependent radiation pattern takes into account all relevant gain variations and coupling effects aboard the spacecraft. Based on very precise digital elevation models of Phobos, patch models at a resolution of λ/10 were generated. Simulation techniques will be explained and a comparison of simulations and measurements will be shown.

    SURFACE BACKSCATTERING SIMULATOR FOR LARGE OBJECTS: The computation of the surface scattering of the electromagnetic wave incident on Phobos is based on the physical optics method. The scattered field can be expressed through the induced equivalent surface currents on the target.

    The Algorithm: The simulation program is split into three phases. In the first phase, an illumination test checks whether a patch is visible from the position of the spacecraft; if not, the patch is excluded from the simulation. The second phase serves as a preparation stage for the third: amongst other tasks, the dyadic products for the Js and Ms surface currents are calculated. This is a time-memory trade-off: the simulation needs an additional 144 bytes of RAM for every patch that passes phase one, but the calculation of the dyads is expensive, so considerable savings in computation time are achieved by pre-calculating the frequency-independent parts. In the third phase, the main part of the calculation is executed: the backscattered field is calculated for every frequency step, for the selected frequency range, resolution, and source type.

    Requirements for the Simulation of Phobos: The model of Phobos contains more than 104 million patches, occupying about 12 GiB of disk space. The model is saved as an HDF5 container file, allowing easy cross-platform portability. During the calculation, nearly 400 bytes of RAM are needed for every patch that passes the ray-tracing test. In the computational worst case this adds up to about 40 GB of RAM, making the simulation very memory-intensive; this figure already reflects memory-reuse strategies.

    RESULTS: The simulations were performed with a very fine discretization based on a high-resolution digital elevation model. In the simulation results, the signatures in the radargrams are caused by the illuminated surface topography of Phobos, so the precision of the position and orientation of MarsExpress relative to Phobos has a significant influence on the radargrams. Parameter studies have shown that a permittivity change causes only a brightness change in the radargrams, while a radial distance change shifts the signatures of the radargrams along the time axis.
    That means that the small differences detected between simulations and measurements are probably caused by inaccuracies in the trajectory calculations regarding the position and orientation of Phobos. This interpretation is in line with the difference observed in the drop of bright lines in the measured and simulated radargrams during the gap in measurements, e.g. around closest approach for orbit 5851. Some other interesting aspects seen in the measurements can perhaps be explained by simulations.

    CONCLUSIONS: We successfully implemented a radar-backscattering simulator using a hybrid physical optics and Method of Moments approach. The software runs on a large-scale cluster installation and is able to produce precise, high-resolution results in a reasonable amount of time. We used this software to simulate the measurements of the MARSIS instrument aboard MarsExpress during flybys of the Martian moon Phobos, with varying parameters for the antenna orientation and polarization, and compared these results with actual measurements. These comparisons provide explanations for some unexpected effects seen in the measurements.
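
    The memory requirement quoted above can be checked with simple arithmetic from the stated patch count and per-patch cost:

      patches = 104e6          # patches passing the illumination test (worst case)
      bytes_per_patch = 400    # RAM needed per patch in phase three
      total_gb = patches * bytes_per_patch / 1e9
      print(f"{total_gb:.1f} GB")  # ~41.6 GB, consistent with the quoted ~40 GB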

  14. A Double-Blinded Randomized Clinical Study on the Therapeutic Effect of Gastrografin in Prolonged Postoperative Ileus After Elective Colorectal Surgery.

    PubMed

    Biondo, Sebastiano; Miquel, Jordi; Espin-Basany, Eloy; Sanchez, Jose Luis; Golda, Thomas; Ferrer-Artola, Ana Maria; Codina-Cazador, Antonio; Frago, Ricardo; Kreisler, Esther

    2016-01-01

    Postoperative ileus is a common problem with significant clinical and economic consequences. We hypothesized that Gastrografin may have therapeutic utility by accelerating the recovery from postoperative ileus after colorectal surgery. The aim of this trial was to study the impact of oral Gastrografin administration on prolonged postoperative ileus (PPI) after elective colorectal surgery. The main endpoint of this randomized, double-blinded, controlled trial was the time to resolution of PPI. The secondary endpoints were overall hospital length of stay, time to start of oral intake, time to first passage of flatus or stools, duration of need for a nasogastric tube, and need for parenteral nutrition. Inclusion criteria were patients older than 18 years operated on for colonic neoplasia, inflammatory bowel disease, or diverticular disease. There were two treatments: Gastrografin administration and placebo. The sample size was calculated taking into account the average length of postoperative ileus after colorectal resection until tolerance of oral intake; statistical analysis showed that 29 subjects in each group were needed. Twenty-nine patients per group were randomized. Groups were comparable for age, gender, ASA Physical Status Classification System, stoma construction, and surgical technique. No statistical difference was observed in mean time to resolution between the two groups: 9.1 days (95% CI, 6.51-11.68) in the Gastrografin group versus 10.3 days (CI 6.96-10.29) in the placebo group (P = 0.878). Even if not statistically significant, time to resolution of PPI, overall length of stay, duration of need for a nasogastric tube, and time to tolerance of oral intake were all shorter in the Gastrografin group. Gastrografin does not significantly accelerate the recovery from prolonged postoperative ileus after elective colorectal resection when compared with placebo. However, it seems to clinically improve all the analyzed variables.

  15. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General-Purpose-Graphics-Processing-Unit (GPGPU) computation of tsunami run-up was achieved over a wide area (the whole of Toyama Bay, Japan) using a faster computation technique. Toyama Bay has active faults on the seabed, so there is a high possibility of earthquakes and, in the case of a huge earthquake, of tsunami waves; predicting the area of tsunami run-up is therefore important for reducing damage to residents. However, the simulation is a very hard task because of computer resource constraints. High-resolution computation on the order of several meters is required for tsunami run-up simulation because artificial structures on the ground, such as roads, buildings, and houses, are very small, while at the same time a huge simulation area is required. In the Toyama Bay case the area is 42 km × 15 km. When 5 m × 5 m computational cells are used, over 26,000,000 computational cells are generated, and a normal desktop CPU computer took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which would in turn help protect the many residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20X cards, with InfiniBand network connections between computer nodes through the MVAPICH library. As a result, the calculation was 5.16 times faster on six GPUs than in the single-GPU case, corresponding to 86% parallel efficiency relative to linear speedup.
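
    The quoted parallel efficiency follows directly from the measured speedup (a quick check):

      n_gpus = 6
      speedup = 5.16
      efficiency = speedup / n_gpus   # efficiency relative to linear speedup
      print(f"parallel efficiency: {efficiency:.0%}")  # 86%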

  16. Calibration and performance of a real-time gamma-ray spectrometry water monitor using a LaBr3(Ce) detector

    NASA Astrophysics Data System (ADS)

    Prieto, E.; Casanovas, R.; Salvadó, M.

    2018-03-01

    A scintillation gamma-ray spectrometry water monitor with a 2″ × 2″ LaBr3(Ce) detector was characterized in this study. The monitor measures gamma-ray spectra of river water. Energy and resolution calibrations were performed experimentally, whereas the detector efficiency was determined using Monte Carlo simulations with the EGS5 code system. Minimum detectable activity concentrations for 131I and 137Cs were calculated for different integration times. As an example of the monitor's performance after calibration, a radiological increment during a rainfall episode was studied.
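
    A sketch of a standard minimum-detectable-activity estimate of the kind referred to above, using the Currie formula; the efficiency, emission probability, background, and volume values below are hypothetical, not the monitor's calibration.

      import numpy as np

      def mda_bq_per_l(background_counts, efficiency, gamma_yield, t_s, volume_l):
          """Currie minimum detectable activity concentration (Bq/L)."""
          ld = 2.71 + 4.65 * np.sqrt(background_counts)  # detection limit in counts
          return ld / (efficiency * gamma_yield * t_s * volume_l)

      # Hypothetical inputs: 500 background counts in the 137Cs window,
      # 2% full-energy-peak efficiency, 85% emission probability (662 keV),
      # 3600 s integration, 20 L sensitive volume.
      print(f"{mda_bq_per_l(500, 0.02, 0.85, 3600, 20):.3f} Bq/L")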

  17. Micro-computed tomography: Applications for high-resolution skeletal density determinations: An example using annually banded crustose coralline algae

    NASA Astrophysics Data System (ADS)

    Chan, P.; Halfar, J.; Norley, C. J. D.; Pollmann, S. I.; Adey, W.; Holdsworth, D. W.

    2017-09-01

    Warming and acidification of the world's oceans are expected to have widespread consequences for marine biodiversity and ecosystem functioning. However, due to the relatively short record of instrumental observations, one has to rely upon geochemical and physical proxy information stored in biomineralized shells and skeletons of calcareous marine organisms as in situ recorders of past environments. Of particular interest is the response of marine calcifiers to ocean acidification through the examination of structural growth characteristics. Here we demonstrate the application of micro-computed tomography (micro-CT) for three-dimensional visualization and analysis of growth, skeletal density, and calcification in the slow-growing, annually banded crustose coralline alga Clathromorphum nereostratum (increment width ~380 µm). X-ray images and time series of skeletal density were generated at 20 µm resolution and rebinned to 40, 60, 80, and 100 µm for comparison in a sensitivity analysis. Calcification rates were subsequently calculated as the product of density and growth (linear extension). While both skeletal density and calcification rates do not significantly differ at varying spatial resolutions (the latter being strongly influenced by growth rates), clear visualization of micron-scale growth features and the quantification of structural changes on subannual time scales require higher scanning resolutions. In the present study, imaging at 20 µm resolution reveals seasonal cycles in density that correspond to summer/winter variations in skeletal structure observed using scanning electron microscopy (SEM). Micro-CT is a fast, nondestructive, and high-resolution technique for structural and morphometric analyses of temporally banded paleoclimate archives, particularly those that exhibit slow or compressed growth or micron-scale structures.

  18. Validation and Temporal Analysis of Lai and Fapar Products Derived from Medium Resolution Sensor

    NASA Astrophysics Data System (ADS)

    Claverie, M.; Vermote, E. F.; Baret, F.; Weiss, M.; Hagolle, O.; Demarez, V.

    2012-12-01

    Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) have been defined as Essential Climate Variables. Many Earth-surface monitoring applications require global estimation combined with a relatively high observation frequency. Medium spatial resolution sensors (MRS), such as SPOT-VGT, MODIS or MERIS, have been widely used to provide land surface products (mainly LAI and FAPAR) to the scientific community. These products require quality assessment and consistency. However, because of the limited spatial sampling of ground measurements, the medium resolution is not appropriate for direct validation against in situ measurements. It is thus more adequate to use high spatial resolution sensors, which can integrate the spatial variability. The recent availability of Formosat-2 data, combining high spatial (8 m) and temporal (daily) resolution, allows the accuracy and temporal consistency of medium-resolution sensor products to be evaluated. In this study, we validate MRS products over a cropland area and analyze their spatial and temporal consistency. This study belongs to Stage 2 of the validation, as defined by the Land Product Validation sub-group of the Committee on Earth Observation Satellites. Reference maps, derived from the aggregation of Formosat-2 data (acquired during the 2006-2010 period over croplands in the southwest of France), were compared with (i) two existing global biophysical variable products (GEOV1/VGT and MODIS-15 coll. 5) and (ii) a new product (MODdaily) derived from the inversion of the PROSAIL radiative transfer model (EMMAH, INRA Avignon) applied to MODIS BRDF-corrected daily reflectance. Their uncertainty was calculated with 105 LAI and FAPAR reference maps, whose uncertainties (22% for LAI and 12% for FAPAR) were evaluated with in situ measurements performed over maize, sunflower and soybean. Inter-comparison of coarse resolution (0.05°) products showed that LAI and FAPAR have consistent phenology (see the figure caption below). GEOV1/VGT showed the smoothest time series due to its 30-day compositing, while MODdaily noise was satisfactory (<12%). The RMSEs of LAI calculated for the period 2006-2010 were 0.46 for GEOV1/VGT, 0.19 for MODIS-15 and 0.16 for MODdaily. A significant overestimation (bias = 0.43) of the LAI peak was observed for GEOV1/VGT products, while MODIS-15 showed a small underestimation (bias = -0.14) of the highest LAI values. Finally, over a larger area (a quarter of France) covered by cropland, grassland and forest, the products displayed good spatial consistency.

    [Figure: LAI 2006-2010 time series of a coarse-resolution cropland pixel (extent in upper-left corner); products are compared to Formosat-2 reference maps.]

  19. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
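
    A generic sketch of the selection rule described above (not the authors' code): compute the singular values of the Jacobian from the last iteration, pick the index where they first approach zero, and form the model resolution matrix at that truncation level. The matrix G and the cutoff threshold below are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      G = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -8, 20))  # hypothetical Jacobian

      U, s, Vt = np.linalg.svd(G, full_matrices=False)

      # Truncation level k: the first singular value that "approaches zero"
      # relative to the largest one (the 1e-4 threshold is an assumption).
      k = int(np.argmax(s / s[0] < 1e-4)) or len(s)

      V_k = Vt[:k].T
      R = V_k @ V_k.T  # model resolution matrix at truncation level k
      print(f"k = {k}, trace(R) = {np.trace(R):.2f}")  # trace(R) equals the retained rank k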

  20. Results from the NA62 Gigatracker Prototype: A Low-Mass and sub-ns Time Resolution Silicon Pixel Detector

    NASA Astrophysics Data System (ADS)

    Fiorini, M.; Rinella, G. Aglieri; Carassiti, V.; Ceccucci, A.; Gil, E. Cortina; Ramusino, A. Cotta; Dellacasa, G.; Garbolino, S.; Jarron, P.; Kaplon, J.; Kluge, A.; Marchetto, F.; Mapelli, A.; Martin, E.; Mazza, G.; Morel, M.; Noy, M.; Nuessle, G.; Petagna, P.; Petrucci, F.; Perktold, L.; Riedler, P.; Rivetti, A.; Statera, M.; Velghe, B.

    The Gigatracker (GTK) is a hybrid silicon pixel detector developed for NA62, the experiment aimed at studying ultra-rare kaon decays at the CERN SPS. Three GTK stations will provide precise momentum and angular measurements on every track of the high intensity NA62 hadron beam with a time-tagging resolution of 150 ps. Multiple scattering and hadronic interactions of beam particles in the GTK have to be minimized to keep background events at acceptable levels, hence the total material budget is fixed to 0.5% X₀ per station. In addition the calculated fluence for 100 days of running is 2×10¹⁴ 1 MeV neq/cm², comparable to the one expected for the inner trackers of LHC detectors in 10 years of operation. These requirements pose challenges for the development of an efficient and low-mass cooling system, to be operated in vacuum, and on the thinning of read-out chips to 100 μm or less. The most challenging requirement is represented by the time resolution, which can be achieved by carefully compensating for the discriminator time-walk. For this purpose, two complementary read-out architectures have been designed and produced as small-scale prototypes: the first is based on the use of a Time-over-Threshold circuit followed by a TDC shared by a group of pixels, while the other uses a constant-fraction discriminator followed by an on-pixel TDC. The readout pixel ASICs are produced in 130 nm IBM CMOS technology and bump-bonded to 200 μm thick silicon sensors. The Gigatracker detector system is described with particular emphasis on recent experimental results obtained from laboratory and beam tests of prototype bump-bonded assemblies, which show a time resolution of less than 200 ps for single hits.
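
    A generic sketch of Time-over-Threshold time-walk compensation of the kind described for the first architecture (the correction function and its constants are hypothetical, not the ASIC's calibration): smaller pulses cross the discriminator threshold later, so the measured time is corrected using the measured ToT.

      import numpy as np

      def timewalk_correction(t_meas_ps, tot_ns, a=350.0, b=0.8):
          """Correct leading-edge timestamps using Time-over-Threshold.

          Assumed model: walk ~ a / tot**b (ps), with a and b taken from a
          hypothetical calibration; small ToT (small pulses) gives a large correction.
          """
          return t_meas_ps - a / np.power(tot_ns, b)

      tot = np.array([5.0, 10.0, 20.0])          # hypothetical ToT values (ns)
      t_meas = np.array([120.0, 80.0, 50.0])     # hypothetical leading-edge times (ps)
      print(timewalk_correction(t_meas, tot))    # walk-corrected timestamps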

  1. Cis- and trans-perfluorodecalin: Infrared spectra, radiative efficiency and global warming potential

    NASA Astrophysics Data System (ADS)

    Le Bris, Karine; DeZeeuw, Jasmine; Godin, Paul J.; Strong, Kimberly

    2017-12-01

    Perfluorodecalin (PFD) is a molecule used in various medical applications for its capacity to dissolve gases. This potent greenhouse gas was detected for the first time in the atmosphere in 2005. We present infrared absorption cross-section spectra of a pure vapour of cis- and trans-perfluorodecalin at a resolution of 0.1 cm⁻¹. Measurements were performed in the 560-3000 cm⁻¹ spectral range using Fourier transform spectroscopy. The spectra have been compared with previous experimental data and theoretical calculations by density functional theory. The new experimental absorption cross-sections have been used to calculate a lifetime-corrected radiative efficiency at 300 K of 0.62 W m⁻² ppb⁻¹ and 0.57 W m⁻² ppb⁻¹ for the cis and trans isomers respectively. This leads to a 100-year time horizon global warming potential of 8030 for cis-PFD and 7440 for trans-PFD.

  2. The use of earthquake rate changes as a stress meter at Kilauea volcano.

    PubMed

    Dieterich, J; Cayol, V; Okubo, P

    2000-11-23

    Stress changes in the Earth's crust are generally estimated from model calculations that use near-surface deformation as an observational constraint. But the widespread correlation of changes of earthquake activity with stress has led to suggestions that stress changes might be calculated from earthquake occurrence rates obtained from seismicity catalogues. Although this possibility has considerable appeal, because seismicity data are routinely collected and have good spatial and temporal resolution, the method has not yet proven successful, owing to the non-linearity of earthquake rate changes with respect to both stress and time. Here, however, we present two methods for inverting earthquake rate data to infer stress changes, using a formulation for the stress- and time-dependence of earthquake rates. Application of these methods at Kilauea volcano, in Hawaii, yields good agreement with independent estimates, indicating that earthquake rates can provide a practical remote-sensing stress meter.

  3. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  4. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE PAGES

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...

    2012-05-29

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  5. GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales

    USGS Publications Warehouse

    He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.

    2000-01-01

    To reconstruct forest landscapes of the pre-European settlement period, we developed a GIS interpolation approach that converts witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better describe continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million ha landscape in northern Wisconsin, U.S.A. at different scales, and we discuss the implications of the processing results at each scale. Compared with traditional GLO mapping, which has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species (see the sketch below), and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 × 1-section resolution preserved spatial information but derived the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 × 2-section resolution were on the order of three to four times for the least common species, two to three times for moderately common species, and one to two times for the most common or highly contagious (clumped) species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on the species importance values derived for all species. The results provide a unique basis for further study of land cover changes occurring after European settlement.
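
    A minimal sketch of the importance-value bookkeeping referred to above, under the common convention that importance value is the sum of a species' relative density and relative dominance (the witness-tree tallies below are hypothetical, and the original study may weight the components differently):

      # Hypothetical witness-tree tallies per species: (stem count, total basal area)
      tallies = {
          "hemlock":      (120, 55.0),
          "sugar maple":  (200, 40.0),
          "yellow birch": (80, 25.0),
      }

      total_stems = sum(n for n, _ in tallies.values())
      total_ba = sum(ba for _, ba in tallies.values())

      for species, (n, ba) in tallies.items():
          rel_density = 100.0 * n / total_stems       # percent of all stems
          rel_dominance = 100.0 * ba / total_ba       # percent of total basal area
          importance = rel_density + rel_dominance    # importance value (0-200 scale)
          print(f"{species:12s} IV = {importance:5.1f}")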

  6. Anisotropic path modeling to assess pedestrian-evacuation potential from Cascadia-related tsunamis in the US Pacific Northwest

    USGS Publications Warehouse

    Wood, Nathan J.; Schmidtlein, Mathew C.

    2012-01-01

    Recent disasters highlight the threat that tsunamis pose to coastal communities. When developing tsunami-education efforts and vertical-evacuation strategies, emergency managers need to understand how much time it could take for a coastal population to reach higher ground before tsunami waves arrive. To improve efforts to model pedestrian evacuations from tsunamis, we examine the sensitivity of least-cost-distance models to variations in modeling approaches, data resolutions, and travel-rate assumptions. We base our observations on the assumption that an anisotropic approach that uses path-distance algorithms and accounts for variations in land cover and directionality in slope is the most realistic representation of an actual evacuation landscape. We focus our efforts on the Long Beach Peninsula in Washington (USA), where a substantial residential and tourist population is threatened by near-field tsunamis related to a potential Cascadia subduction zone earthquake. Results indicate thousands of people are located in areas where evacuation to higher ground before arrival of the first tsunami wave will be difficult. Deviations from anisotropic modeling assumptions substantially influence the amount of time likely needed to reach higher ground. Across the entire study area, changes in the resolution of elevation data have a greater impact on calculated travel times than changes in land-cover resolution. In particular areas, land-cover resolution had a substantial impact when travel-inhibiting waterways were not reflected in small-scale data. Changes in travel-speed parameters also had a substantial impact, underscoring the importance of public-health campaigns as a tsunami risk-reduction strategy.
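    A minimal sketch of the slope- and land-cover-dependent travel cost that such anisotropic models accumulate along a path is given below. Tobler's hiking function is one standard choice for the directional slope dependence; the study's exact travel-rate parameterization is not reproduced here, and the land-cover speed factor is an assumed convention:

        import math

        def tobler_speed(slope):
            """Walking speed (km/h) from Tobler's hiking function; `slope` is the
            signed gradient dh/dx along the direction of travel."""
            return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

        def cell_travel_time(distance_m, slope, landcover_factor=1.0):
            """Seconds to cross one grid cell; landcover_factor in (0, 1] slows
            travel for, e.g., sand or dense vegetation (assumed convention)."""
            speed_ms = tobler_speed(slope) * 1000.0 / 3600.0 * landcover_factor
            return distance_m / speed_ms

        # e.g., a 10 m cell climbed at a 5% grade through soft sand at half speed:
        # cell_travel_time(10.0, 0.05, 0.5)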

  7. The effects of digital elevation model resolution on the calculation and predictions of topographic wetness indices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drover, Damion Ryan

    2011-12-01

    One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management BMPs. The USDA Forest Service, along with the Oak Ridge National Laboratory, University of Georgia and Oregon State University are researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff of saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools make it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any resolution of DEM desired. Additionally, there exist many maps that are in various resolutions of DEM, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a laser altimetry remote sensing method, obtained from the USDA Forest Service at Savannah River Site. The specific DEM resolutions were chosen because they are common grid cell sizes (10 m, 30 m, and 50 m) used in mapping for management applications and in research. The finer resolutions (2 m and 5 m) were chosen for the purpose of determining how finer resolutions performed compared with coarser resolutions at predicting wetness and related soil attributes. The wetness indices were compared across DEMs and with each other in terms of quantile and distribution differences, then in terms of how well they each correlated with measured soil attributes. Spatial and non-spatial analyses were performed, and predictions using regression and geostatistics were examined for efficacy relative to each DEM resolution. Trends in the raw data and analysis results were also revealed.
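    The topographic wetness index referred to here is conventionally TWI = ln(a / tan(beta)), with a the specific catchment area and beta the local slope, both derived from the DEM at each candidate resolution. A minimal numpy sketch of the standard formula (not the study's code):

        import numpy as np

        def wetness_index(specific_catchment_area, slope_deg, eps=1e-6):
            """TWI = ln(a / tan(beta)); eps guards flat cells where tan(beta) ~ 0.
            Both inputs are 2-D grids derived from a DEM of a chosen resolution."""
            a = np.asarray(specific_catchment_area, dtype=float)
            tan_b = np.tan(np.radians(np.asarray(slope_deg, dtype=float)))
            return np.log(np.maximum(a, eps) / np.maximum(tan_b, eps))

    Recomputing this grid from DEMs resampled to 2, 5, 10, 30, and 50 m is what exposes the resolution sensitivity the study describes.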

  8. Stellar Laboratories: III. New Ba V, Ba VI, and Ba VII Oscillator Strengths and the Barium Abundance in the Hot White Dwarfs G191-B2B and RE 0503-289

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Werner, K.; Quinet, P.; Kruk, Jeffrey Walter

    2014-01-01

    Context. For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. Aims. Reliable Ba V-VII oscillator strengths are used to identify Ba lines in the spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE 0503-289 and to determine their photospheric Ba abundances. Methods. We newly calculated Ba V-VII oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Ba lines exhibited in high-resolution and high-S/N UV observations of G191-B2B and RE 0503-289. Results. For the first time, we identified highly ionized Ba in the spectra of hot white dwarfs. We detected Ba VI and Ba VII lines in the Far Ultraviolet Spectroscopic Explorer (FUSE) spectrum of RE 0503-289. The Ba VI/Ba VII ionization equilibrium is well reproduced with the previously determined effective temperature of 70 000 K and surface gravity of log g = 7.5. The Ba abundance is 3.5 ± 0.5 × 10^-4 (mass fraction, about 23 000 times the solar value). In the FUSE spectrum of G191-B2B, we identified only the strongest Ba VII line (at 993.41 Å), and determined a Ba abundance of 4.0 ± 0.5 × 10^-6 (about 265 times solar). Conclusions. Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Ba VI-VII line profiles in the far-ultraviolet spectra of two white dwarfs (G191-B2B and RE 0503-289) were well reproduced with our newly calculated oscillator strengths. This allowed us to determine the photospheric Ba abundance of these two stars precisely.

  9. Regional-Scale Differential Time Tomography Methods: Development and Application to the Sichuan, China, Dataset

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C.; Wang, W.; Roecker, S. W.

    2008-12-01

    We extended our recent development of double-difference seismic tomography [Zhang and Thurber, BSSA, 2003] to the use of station-pair residual differences in addition to event-pair residual differences. Tomography using station-pair residual differences is somewhat akin to teleseismic tomography, but with the sources contained within the model region. Synthetic tests show that the inversion using both event- and station-pair residual differences has advantages in terms of more accurately recovering higher-resolution structure in both the source and receiver regions. We used the Spherical-Earth Finite-Difference (SEFD) travel time calculation method in the tomographic system. The basic concept is the extension of a standard Cartesian FD travel time algorithm [Vidale, 1990] to the spherical case by developing a mesh in radius, co-latitude, and longitude, expressing the FD derivatives in a form appropriate to the spherical mesh, and constructing a "stencil" to calculate extrapolated travel times. The SEFD travel time calculation method is more advantageous in dealing with the heterogeneity and sphericity of the Earth than the simple Earth-flattening transformation and the "sphere-in-a-box" approach [Flanagan et al., 2007]. We applied this method to the Sichuan, China, data set for the period 2001 to 2004. The Vp, Vs, and Vp/Vs models show that there is a clear contrast across the Longmenshan Fault, where the 2008 M8 Wenchuan earthquake initiated.

  10. Calculation of selective filters of a device for primary analysis of speech signals

    NASA Astrophysics Data System (ADS)

    Chudnovskii, L. S.; Ageev, V. M.

    2014-07-01

    The amplitude-frequency responses of filters for primary analysis of speech signals, which have a low quality factor and a high rolloff factor in the high-frequency range, are calculated using the linear theory of speech production and psychoacoustic measurement data. The frequency resolution of the filter system for a sinusoidal signal is 40-200 Hz. The modulation-frequency resolution of amplitude- and frequency-modulated signals is 3-6 Hz. The aforementioned features of the calculated filters are close to the amplitude-frequency responses of biological auditory systems at the level of the eighth nerve.

  11. Solar Tyrol project: using climate data for energy production estimation. The good practice of Tyrol in conceptualizing climate services.

    NASA Astrophysics Data System (ADS)

    Petitta, Marcello; Wagner, Jochen; Costa, Armin; Monsorno, Roberto; Innerebner, Markus; Moser, David; Zebisch, Marc

    2014-05-01

    The scientific community has in recent years been discussing the concept of "climate services" at length. Several definitions have been used, but it remains a rather open concept. We used climate data from analysis and reanalysis to create a daily and hourly model of atmospheric turbidity in order to account for the effect of the atmosphere on incoming solar radiation, with the final aim of estimating electricity production from photovoltaic (PV) modules in the Alps. Renewable energy production in the Alpine region is dominated by hydroelectricity, but the potential for photovoltaic energy production is gaining momentum. Especially the southern part of the Alps and inner Alpine regions offer good conditions for PV energy production: the combination of high irradiance values and cold air temperatures in mountainous regions is well suited for solar cells. To enable more widespread adoption of PV plants, PV has to become an important part of regional planning. To provide regional authorities and private stakeholders with a high-quality PV energy yield climatology for the provinces of Bolzano/Bozen-South Tyrol (Italy) and Tyrol (Austria), the research project Solar Tyrol was launched in 2012. Several methods exist for calculating very high resolution maps of solar radiation, most of which use climatological values. In this project we reconstructed the last 10 years of atmospheric turbidity using reanalysis and operational data in order to better estimate incoming solar radiation in the Alpine region. Our method is divided into three steps: i) clear-sky radiation: to estimate the atmospheric effect on solar radiation we calculated the Linke turbidity factor using aerosol optical depth (AOD), surface albedo, atmospheric pressure, and total water content from ECMWF and MACC analyses; ii) shadows: we calculated the shadows of mountains and buildings using a 2 m resolution digital elevation model of the area and the GIS module r.sun, modified to fit our specific needs; iii) cloud effects: clear-sky irradiance is modified using the cloud index provided by MeteoSwiss at very high temporal resolution (15 min, between 2004 and 2012). These three steps produce a daily (eventually hourly) dataset of incoming solar radiation at 25 m horizontal resolution for the entire Tyrol region, reaching 2 m horizontal resolution for inhabited areas. The final steps provide the potential electric energy production assuming two PV technologies, cadmium telluride and polycrystalline silicon; here the air temperature data were used to include the temperature-efficiency factor of the PV modules. Results show improved accuracy in the estimated incoming solar radiation compared with standard methods, owing to the cloud and atmospheric turbidity calculations used in our approach. Moreover, we devised a specific method to estimate the shadowing effects of near and far objects: the problem lies in adopting an appropriate horizontal resolution while keeping the calculation time for the entire geographical domain relatively low, and our method allows the correct horizontal resolution for an area to be estimated given the digital elevation model of the region. Finally, a web-based GIS interface has been set up to display the data to the public, and a spatial database has been developed to handle the large amount of data. The current results of our project demonstrate how scientific know-how and climate products can be used to provide relevant and simple-to-use information to stakeholders and political bodies. Moreover, our approach shows how it is possible to have a relevant impact in current political and economic fields associated with local energy production and planning.
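    Steps i) and iii) can be sketched compactly. The clear-sky form below is Kasten's widely cited empirical formula and the cloud scaling is a Heliosat-style simplification; the coefficients and the simple air-mass term are textbook assumptions, not the project's implementation:

        import math

        def clearsky_ghi(sun_elev_deg, linke_turbidity, g_ext=1361.0):
            """Clear-sky global horizontal irradiance (W/m^2), Kasten's empirical
            form: 0.84 * G_ext * sin(h) * exp(-0.027 * AM * TL)."""
            h = math.radians(sun_elev_deg)
            if h <= 0.0:
                return 0.0
            air_mass = 1.0 / math.sin(h)  # simple plane-parallel approximation
            return 0.84 * g_ext * math.sin(h) * math.exp(-0.027 * air_mass * linke_turbidity)

        def all_sky_ghi(ghi_clear, cloud_index):
            """Scale by a clear-sky index; kc ~ 1 - n holds over most of the
            cloud-index range (an assumed simplification of the full relation)."""
            kc = max(0.05, 1.0 - cloud_index)
            return ghi_clear * kc

        # e.g., sun 40 degrees high, Linke turbidity 3, cloud index 0.3:
        # all_sky_ghi(clearsky_ghi(40.0, 3.0), 0.3)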

  12. Calculations of Arctic ozone chemistry using objectively analyzed data in a 3-D CTM

    NASA Technical Reports Server (NTRS)

    Kaminski, J. W.; Mcconnell, J. C.; Sandilands, J. W.

    1994-01-01

    A three-dimensional chemical transport model (CTM) (Kaminski, 1992) has been used to study the evolution of the Arctic ozone during the winter of 1992. The continuity equation has been solved using a spectral method with Rhomboidal 15 (R15) truncation and leap-frog time stepping. Six-hourly meteorological fields from the Canadian Meteorological Center global objective analysis routines run at T79 were degraded to the model resolution. In addition, they were interpolated to the model time grid and were used to drive the model from the surface to 10 mb. In the model, processing of Cl(x) occurred over Arctic latitudes but some of the initial products were still present by mid-January. Also, the large amounts of ClO formed in the model in early January were converted to ClNO3. The results suggest that the model resolution may be insufficient to resolve the details of the Arctic transport during this time period. In particular, the wind field does not move the ClO(x) 'cloud' to the south over Europe as seen in the MLS measurements.

  13. Use of global positioning system measurements to determine geocentric coordinates and variations in Earth orientation

    NASA Technical Reports Server (NTRS)

    Malla, R. P.; Wu, S.-C.; Lichten, S. M.

    1993-01-01

    Geocentric tracking station coordinates and short-period Earth-orientation variations can be measured with Global Positioning System (GPS) measurements. Unless calibrated, geocentric coordinate errors and changes in Earth orientation can lead to significant deep-space tracking errors. Ground-based GPS estimates of daily and subdaily changes in Earth orientation presently show centimeter-level precision. Comparison between GPS-estimated Earth-rotation variations, which are the differences between Universal Time 1 and Coordinated Universal Time (UT1-UTC), and those calculated from ocean tide models suggests that observed subdaily variations in Earth rotation are dominated by oceanic tidal effects. Preliminary GPS estimates for the geocenter location (from a 3-week experiment) agree with independent satellite laser-ranging estimates to better than 10 cm. Covariance analysis predicts that temporal resolution of GPS estimates for Earth orientation and geocenter improves significantly when data collected from low Earth-orbiting satellites as well as from ground sites are combined. The low Earth GPS tracking data enhance the accuracy and resolution for measuring high-frequency global geodynamical signals over time scales of less than 1 day.

  14. Central sleep apnea detection from ECG-derived respiratory signals. Application of multivariate recurrence plot analysis.

    PubMed

    Maier, C; Dickhaus, H

    2010-01-01

    This study examines the suitability of recurrence plot analysis for the problem of central sleep apnea (CSA) detection and delineation from ECG-derived respiratory (EDR) signals. A parameter describing the average length of vertical line structures in recurrence plots is calculated at a time resolution of 1 s as 'instantaneous trapping time'. Threshold comparison of this parameter is used to detect ongoing CSA. In data from 26 patients (duration 208 h) we assessed sensitivity for detection of CSA and mixed apnea (MSA) events by comparing the results obtained from 8-channel Holter ECGs to the annotations (860 CSA, 480 MSA) of simultaneously registered polysomnograms. Multivariate combination of the EDR from different ECG leads improved the detection accuracy significantly. When all eight leads were considered, an average instantaneous vertical line length above 5 correctly identified 1126 of the 1340 events (sensitivity 84%) with a total number of 1881 positive detections. We conclude that recurrence plot analysis is a promising tool for detection and delineation of CSA epochs from EDR signals with high time resolution. Moreover, the approach is likewise applicable to directly measured respiratory signals.
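    A minimal sketch of the 'instantaneous trapping time': threshold a distance matrix of the EDR segment into a recurrence matrix, then average the vertical line lengths in the column for the current time index. The threshold, the absence of embedding, and the minimum line length are illustrative assumptions, not the authors' settings:

        import numpy as np

        def recurrence_matrix(x, eps):
            """Boolean recurrence matrix of a 1-D signal x (no embedding here)."""
            x = np.asarray(x, dtype=float)
            return np.abs(x[:, None] - x[None, :]) < eps

        def trapping_time(R, col):
            """Average vertical line length in column `col` of R, one reading of
            the 'instantaneous trapping time' at that time index."""
            lengths, run = [], 0
            for hit in R[:, col]:
                if hit:
                    run += 1
                elif run:
                    lengths.append(run)
                    run = 0
            if run:
                lengths.append(run)
            lengths = [n for n in lengths if n >= 2]  # proper lines only, as in RQA
            return float(np.mean(lengths)) if lengths else 0.0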

  15. High Resolution Diffusion Tensor Imaging of Cortical-Subcortical White Matter Tracts in TBI

    DTIC Science & Technology

    2009-10-01

    [Record garbled in extraction; DTIC report-form fields (work unit number, performing organization, contact e-mail) have been removed. Recoverable abstract fragments: "...CT perfusion is a change in CT intensity (or Hounsfield Unit, HU) over time following a bolus of iodine-based contrast agent..." and "...estimates of the eigenvalues and decrease the signal-to-noise ratio, a background noise level of 125 (MR units) was applied prior to calculation of..."]

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaut, Arkadiusz

    We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low and high frequency regimes for the time-delay interferometry response. Angular resolution of the detector and the estimation errors of the signal's parameters in the high frequency regimes are calculated as functions of the position in the sky and as functions of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors valid on a wide range of the parameter space.
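    Estimation errors of this kind are conventionally read off the inverse Fisher information matrix, i.e. the Cramér-Rao lower bound. A generic numpy sketch under white Gaussian noise, with the discretized signal derivatives supplied by some hypothetical waveform model (the LISA time-delay-interferometry response itself is not reproduced here):

        import numpy as np

        def crlb_errors(signal_grad, noise_var):
            """1-sigma lower bounds sqrt(diag(F^-1)) on the parameter errors.
            signal_grad: array (n_params, n_samples) holding d s / d theta_k,
            evaluated at the true parameters (hypothetical signal model)."""
            F = signal_grad @ signal_grad.T / noise_var  # Fisher matrix, white noise
            return np.sqrt(np.diag(np.linalg.inv(F)))

    Evaluating such bounds over a grid of sky positions and frequencies yields error maps of the kind described above.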

  17. A network of superconducting gravimeters detects submicrogal coseismic gravity changes.

    PubMed

    Imanishi, Yuichi; Sato, Tadahiro; Higashi, Toshihiro; Sun, Wenke; Okubo, Shuhei

    2004-10-15

    With high-resolution continuous gravity recordings from a regional network of superconducting gravimeters, we have detected permanent changes in gravity acceleration associated with a recent large earthquake. The detected changes in gravity acceleration are smaller than 10^-8 m s^-2 (1 microgal, about 10^-9 times the surface gravity acceleration) and agree with theoretical values calculated from a dislocation model. Superconducting gravimetry can thus contribute to studies of secular gravity changes associated with tectonic processes.

  18. Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.

    PubMed

    Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin

    2018-02-09

    Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy losses of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Via spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be described. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, which helps to reconstruct the hydrological sequence; the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, and the differencing of high-resolution optical and microwave images before and after floods can then be used to calculate flood extent and reflect spatial changes in inundation; the monitoring algorithm for flood depth, combining RS and GIS, is simple and efficient, quickly calculating depth from a known flood extent obtained from remote sensing images in ungauged basins. These results can provide effective help for the disaster relief work performed by government departments.

  19. Cherenkov radiation-based three-dimensional position-sensitive PET detector: A Monte Carlo study.

    PubMed

    Ota, Ryosuke; Yamada, Ryoko; Moriya, Takahiro; Hasegawa, Tomoyuki

    2018-05-01

    Cherenkov radiation has recently received attention due to its prompt emission, which has the potential to improve the timing performance of radiation detectors dedicated to positron emission tomography (PET). In this study, a Cherenkov-based three-dimensional (3D) position-sensitive radiation detector was proposed, composed of a monolithic lead fluoride (PbF2) crystal and a photodetector array whose signals can be read out independently. Monte Carlo simulations were performed to estimate the performance of the proposed detector. The position and time resolutions were evaluated under various practical conditions. The radiator size and various properties of the photodetector, e.g., readout pitch and single-photon timing resolution (SPTR), were parameterized. The single-photon time response of the photodetector was assumed to be a single Gaussian for simplicity, and the photon detection efficiency of the photodetector was idealized as 100% for all wavelengths. Compton scattering was included in the simulations but only partly analyzed. To estimate the position at which a γ-ray interacted in the Cherenkov radiator, the center-of-gravity (COG) method was employed. In addition, to estimate the depth of interaction (DOI), principal component analysis (PCA), a multivariate analysis method used to identify patterns in data, was employed; the time-space distribution of Cherenkov photons was quantified to perform PCA. To evaluate the coincidence time resolution (CTR), the time difference of two independent γ-ray events was calculated, with the detection time defined as the first photon time after the SPTR of the photodetector was taken into account. The position resolution on the photodetector plane could be estimated with high accuracy using a small number of Cherenkov photons. Moreover, PCA showed an ability to estimate the DOI. The position resolution depends heavily on the pitch of the photodetector array and the radiator thickness: for an ideal readout pitch of 0 mm and a practical pitch of 3 mm, position resolutions of 0.348 and 1.92 mm full-width at half-maximum (FWHM), respectively, were achievable with a 10-mm-thick PbF2 crystal. Furthermore, a first-order correlation could be observed between the primary principal component and the true DOI. To obtain a coincidence timing resolution better than 100-ps FWHM with a 20-mm-thick PbF2 crystal, a photodetector with an SPTR better than σ = 30 ps was necessary. From these results, the improvement of SPTR allows us to achieve a CTR better than 100-ps FWHM even when a 20-mm-thick radiator is used. Our proposed detector has the potential to estimate the 3D interaction position of γ-rays in the radiator using only the time and space information of Cherenkov photons. © 2018 American Association of Physicists in Medicine.
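    Two of the estimators above are easy to sketch: the center-of-gravity position on the photodetector array, and the first-photon detection time after Gaussian SPTR smearing. The array pitch, random seed, and sigma below are illustrative, not the simulation's parameters:

        import numpy as np

        rng = np.random.default_rng(0)

        def cog_position(counts, pitch_mm=3.0):
            """Center-of-gravity (x, y) in mm on an N x N photodetector array;
            counts[i, j] = Cherenkov photons detected per pixel (illustrative)."""
            n = counts.shape[0]
            coords = (np.arange(n) - (n - 1) / 2.0) * pitch_mm
            total = counts.sum()
            x = (counts.sum(axis=0) * coords).sum() / total
            y = (counts.sum(axis=1) * coords).sum() / total
            return x, y

        def first_photon_time(arrival_times_ps, sptr_sigma_ps=30.0):
            """Detection time = earliest photon arrival after smearing each photon
            with the photodetector's single-photon time resolution."""
            t = np.asarray(arrival_times_ps, dtype=float)
            return (t + rng.normal(0.0, sptr_sigma_ps, t.size)).min()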

  20. Dynamical downscaling of wind fields for wind power applications

    NASA Astrophysics Data System (ADS)

    Mengelkamp, H.-T.; Huneke, S.; Geyer, J.

    2010-09-01

    Investments in wind power require information on the long-term mean wind potential and its temporal variations on daily to annual and decadal time scales. This information is rarely available at specific wind farm sites: short-term on-site measurements are usually performed over a 12-month period only, and these data have to be set into the long-term perspective through correlation with long-term consistent wind data sets. Preliminary wind information is often requested to select favourable wind sites on regional and country-wide scales. The lack of high-quality wind measurements at weather stations was the motivation to start high-resolution wind field simulations. The simulations are basically a refinement of global-scale reanalysis data by means of high-resolution simulations with an atmospheric mesoscale model using high-resolution terrain and land-use data. The three-dimensional representation of the atmospheric state available every six hours at 2.5 degree resolution over the globe, known as the NCAR/NCEP reanalysis data, forms the boundary conditions for continuous simulations with the non-hydrostatic atmospheric mesoscale model MM5. MM5 is nested in itself down to a horizontal resolution of 5 x 5 km². The simulation is performed for different European countries, covers the period 2000 to present, and is continuously updated. Model variables are stored every 10 minutes for various heights. We have analysed the wind field primarily. The wind data set is consistent in space and time and provides information on the regional distribution of the long-term mean wind potential, the temporal variability of the wind potential, the vertical variation of the wind potential, and the temperature and pressure distribution (air density). In the context of wind power these data are used: as an initial estimate of the wind and energy potential; for the long-term correlation of wind measurements and turbine production data; to provide wind potential maps on regional to country-wide scales; to provide input data sets for simulation models; to determine the spatial correlation of the wind field in portfolio calculations; to calculate the wind turbine energy loss during prescribed downtimes; and to provide information on the temporal variations of wind and wind turbine energy production. The time series of wind speed and wind direction are compared to measurements at offshore and onshore locations.

  1. A trade-off between model resolution and variance with selected Rayleigh-wave data

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥ 2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain the advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel of the inversion system. Second, we obtained an optimal damping vector in the vicinity of an inverted model by singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we use a real-world example to demonstrate that data selected with the data-resolution matrix provide better inversion results, and to explain, with the data-resolution matrix, why incorporating higher-mode data in inversion gives better results. We also calculated model-resolution matrices for these examples to show the potential of increasing model resolution with selected surface-wave data. With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
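    For a linearized inversion d = Gm, both resolution matrices follow directly from the SVD of the kernel. The damped generalized inverse below is a generic illustration of the quantities discussed, not the authors' code:

        import numpy as np

        def resolution_matrices(G, damping):
            """Data- and model-resolution matrices for damped least squares.
            N = G Gg maps observed data to predicted data; R = Gg G maps the
            true model to the estimated model."""
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            f = s / (s**2 + damping**2)   # damped inverse singular values
            Gg = (Vt.T * f) @ U.T         # generalized inverse
            return G @ Gg, Gg @ G

    Rows of the data-resolution matrix that are close to unit vectors flag data the kernel predicts well, which is the selection criterion described above.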

  2. Observations and Modeling of Composition of the Upper Troposphere/Lower Stratosphere (UT/LS): Isentropic Mixing Events and Morphology of HNO3 as Observed by HIRDLS and Comparison with Results from the Global Modeling Initiative

    NASA Technical Reports Server (NTRS)

    Rodriquez, J. M.; Douglass, A.R.; Yoshida, Y.; Strahan, S.; Duncan, B.; Olsen, M.; Gille, J.; Yudin, V.; Nardi, B.

    2008-01-01

    Isentropic exchange of air masses between the tropical upper troposphere and mid-latitude lowermost stratosphere (the so-called "middle world") is an important pathway for stratosphere-troposphere exchange. A seasonal, global view of this process has been difficult to obtain, in part due to the lack of the vertical resolution in satellite observations needed to capture the laminar character of these events. Ozone observations at a resolution of about 1 km from the High Resolution Dynamics Limb Sounder (HIRDLS) on NASA's Aura satellite show instances of these intrusions. Such intrusions should also be observable in HNO3 observations; however, the abundance of nitric acid could be additionally controlled by chemical processes or by incorporation into and removal by ice clouds. We present a systematic examination of the HIRDLS O3 and HNO3 data to determine the seasonal and spatial characteristics of the distribution of isentropic intrusions. At the same time, we compare the observed distributions with those calculated by the Global Modeling Initiative combined troposphere-stratosphere model, which has a vertical resolution of about 1 km. This chemical transport model (CTM) is driven by meteorological fields obtained from the GEOS-4 system of the NASA/Goddard Global Modeling and Assimilation Office (GMAO) for the Aura time period, at a vertical resolution of about 1 km. Such comparison brings out the successes and limitations of the model in representing isentropic stratosphere-troposphere exchange and the different processes controlling HNO3 in the UT/LS.

  3. Mapping Spatial Distributions of Stream Power and Channel Change along a Gravel-Bed River in Northern Yellowstone

    NASA Astrophysics Data System (ADS)

    Lea, D. M.; Legleiter, C. J.

    2014-12-01

    Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing. This study used remotely sensed data and field measurements to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8 km reach. Aerial photographs from 1994-2012 and cross-section surveys were used to assess lateral channel mobility and develop a morphologic sediment budget quantifying net sediment flux for a series of budget cells. A drainage area-to-discharge relationship and a digital elevation model (DEM) developed from LiDAR data were used to obtain the discharge and slope values, respectively, needed to calculate stream power. Local and lagged relationships between the mean stream power gradient at median peak discharge and the volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of channel mobility and sediment transfer. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volume of sediment eroded or deposited during each time increment. Our results indicated a lack of strong correlation between stream power gradients and sediment flux, which we attributed to the geomorphic complexity of the Soda Butte Creek watershed and the inability of our relatively simple statistical approach to link sediment dynamics expressed at a sub-budget-cell scale to larger-scale driving forces such as stream power gradients. Future studies should compare the moderate spatial resolution techniques used in this study with very high resolution data acquired from new fluvial remote sensing technologies to better understand the amount of error associated with stream power, sediment transport, and channel change calculated from historical datasets.
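    The quantity at the core of this analysis is conventionally the total stream power, Omega = rho * g * Q * S (watts per metre of channel), with discharge Q from the drainage area-to-discharge relationship and slope S from the LiDAR DEM. A minimal sketch of the standard formula and its downstream gradient (field names are assumptions):

        import numpy as np

        RHO_W, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

        def stream_power(discharge_m3s, slope):
            """Total stream power per unit channel length, Omega = rho*g*Q*S (W/m)."""
            return RHO_W * G * np.asarray(discharge_m3s) * np.asarray(slope)

        def downstream_gradient(omega, spacing_m):
            """Stream-power gradient between cross sections spaced `spacing_m`
            apart (illustrative budget-cell spacing)."""
            return np.gradient(omega, spacing_m)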

  4. JMA's regional atmospheric transport model calculations for the WMO technical task team on meteorological analyses for Fukushima Daiichi Nuclear Power Plant accident.

    PubMed

    Saito, Kazuo; Shimbori, Toshiki; Draxler, Roland

    2015-01-01

    The World Meteorological Organization (WMO) convened a small technical task team of experts to produce a set of meteorological analyses to drive atmospheric transport, dispersion and deposition models (ATDMs) for the United Nations Scientific Committee on the Effects of Atomic Radiation's assessment of the Fukushima Daiichi Nuclear Power Plant (DNPP) accident. The Japan Meteorological Agency (JMA) collaborated with the WMO task team as the regional specialized meteorological center of the country where the accident occurred, and provided its operational 5-km resolution mesoscale (MESO) analysis and its 1-km resolution radar/rain gauge-analyzed precipitation (RAP) data. The JMA's mesoscale tracer transport model was modified into a regional ATDM for radionuclides (RATM), which included newly implemented algorithms for dry deposition, wet scavenging, and gravitational settling of radionuclide aerosol particles. Preliminary and revised calculations with the JMA-RATM were conducted according to the task team's protocol. Verification against cesium-137 ((137)Cs) deposition measurements and observed air concentration time series showed that the performance of RATM with MESO data was significantly improved by the revisions to the model. The use of RAP data improved the (137)Cs deposition pattern but not the time series of air concentrations at Tokai-mura, compared with calculations using the MESO data alone. Sensitivity tests of some of the more uncertain parameters were conducted to determine their impacts on ATDM calculations and on the dispersion and deposition of radionuclides on 15 March 2011, the period of some of the largest emissions and deposition onto the land areas of Japan. The area of high deposition northwest of the Fukushima DNPP and the hotspot in the central part of Fukushima prefecture were primarily formed by wet scavenging influenced by the orographic effect of the mountainous area in the west of the Fukushima prefecture. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. CO tip functionalization in subatomic resolution atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Chelikowsky, James R.

    2015-10-01

    Noncontact atomic force microscopy (nc-AFM) employing a CO-functionalized tip displays dramatically enhanced resolution wherein covalent bonds of polycyclic aromatic hydrocarbon can be imaged. Employing real-space pseudopotential first-principles calculations, we examine the role of CO in functionalizing the nc-AFM tip. Our calculations allow us to simulate full AFM images and ascertain the enhancement mechanism of the CO molecule. We consider two approaches: one with an explicit inclusion of the CO molecule and one without. By comparing our simulations to existing experimental images, we ascribe the enhanced resolution of the CO functionalized tip to the special orbital characteristics of the CO molecule.

  6. CO tip functionalization in subatomic resolution atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Minjung; Chelikowsky, James R.

    2015-10-19

    Noncontact atomic force microscopy (nc-AFM) employing a CO-functionalized tip displays dramatically enhanced resolution wherein covalent bonds of polycyclic aromatic hydrocarbon can be imaged. Employing real-space pseudopotential first-principles calculations, we examine the role of CO in functionalizing the nc-AFM tip. Our calculations allow us to simulate full AFM images and ascertain the enhancement mechanism of the CO molecule. We consider two approaches: one with an explicit inclusion of the CO molecule and one without. By comparing our simulations to existing experimental images, we ascribe the enhanced resolution of the CO functionalized tip to the special orbital characteristics of the CO molecule.

  7. Learning to love the rain in Bergen (Norway) and other lessons from a Climate Services neophyte

    NASA Astrophysics Data System (ADS)

    Sobolowski, Stefan; Wakker, Joyce

    2014-05-01

    A question that is often asked of regional climate modelers generally, and climate service providers specifically, is: "What is the added value of regional climate simulations and how can I use this information?" The answer is, unsurprisingly, not straightforward and depends greatly on what one needs to know. In particular, it is important for scientists to communicate directly with the users of this information to determine what kind of information they need to do their jobs. This study is part of the ECLISE project (Enabling Climate Information Services for Europe) and involves a user at the water and drainage administration of the municipality of Bergen (Norway) and a provider from Uni Research and the Bjerknes Centre for Climate Research. The water and drainage administration is responsible for communicating potential future changes in extreme precipitation, particularly the short-term high-intensity rainfall that is common in Bergen, and for making recommendations to the engineering department on changes in design criteria. Thus, information that enables better decision-making is crucial. This study therefore has two components relevant to climate services: 1) a scientific exercise to evaluate the performance of high-resolution regional climate simulations and their ability to reproduce high-intensity, short-duration precipitation, and 2) an exercise in communication between a provider community and a user community with different concerns, mandates, methodological approaches, and even vocabularies. A set of Weather Research and Forecasting (WRF) simulations was run at high resolution (8 km) over a large domain covering much of Scandinavia and Northern Europe. One simulation was driven by so-called "perfect" boundary conditions taken from reanalysis data (ERA-Interim, 1989-2010); the second and third simulations used Norway's global climate model (NorESM) as boundary forcing and were run for a historical period (1950-2005) and a 30-year end-of-century time slice under the RCP4.5 "middle of the road" emissions scenario (2071-2100). A unique feature of the WRF modeling system is the ability to write data for selected locations at every time step, thus creating time series of very high temporal resolution which can be compared to observations. This high temporal resolution also allowed us to directly calculate intensity-duration-frequency (IDF) curves for intense precipitation of short to long duration (5 minutes - 1 day) for a number of return periods (2-100 years), without resorting to scaling factors to estimate rainfall intensities at higher temporal resolutions, as is commonly done. We investigated the IDF curves using a number of parametric and non-parametric approaches. Given the relatively short time periods of the modeled data, the standard Gumbel approach is presented here; this also maintains consistency with previous calculations by the water and drainage administration. Curves were also generated from observed time series at two locations in Bergen. Both the historical GCM-driven simulation and the ERA-Interim-driven simulation closely match the observed IDF curves for all return periods down to durations of about 10 minutes, below which WRF fails to reproduce the very short, very high intensity events. IDF curves under future conditions were also generated, and the changes were compared with the current standard approach of applying climate change factors to observed extreme precipitation in order to account for structural errors in global and regional climate models.
Our investigation suggests that high-resolution regional simulations can capture many of the topographic features and dynamical processes necessary to accurately model extreme rainfall, even at highly local scales and over complex terrain such as that of Bergen, Norway. The exercise also produced many lessons for climate service providers and users alike.
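    The Gumbel step is straightforward to sketch: fit annual-maximum intensities for one duration and read off return-period quantiles. scipy's gumbel_r is a standard tool for this; the function and data layout below are illustrative, not the study's code:

        import numpy as np
        from scipy.stats import gumbel_r

        def idf_point(annual_max_intensity, return_periods=(2, 10, 50, 100)):
            """Fit a Gumbel distribution to annual maxima for one duration and
            return the intensity at each return period T (quantile 1 - 1/T)."""
            loc, scale = gumbel_r.fit(np.asarray(annual_max_intensity, dtype=float))
            return {T: gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
                    for T in return_periods}

        # Repeating this for each duration (5 min ... 1 day) traces out the IDF curves.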

  8. Seismic Tomography Of The Caucasus Region

    NASA Astrophysics Data System (ADS)

    Javakhishvili, Z.; Godoladze, T.; Gok, R.; Elashvili, M.

    2007-12-01

    The Caucasus is one of the most active segments of the Alpine-Himalayan collision belt. We used the catalog data of the Georgian Seismic Network to calculate reference 1-D and 3-D P-velocity models of the Caucasus region. The analog recording period in Georgia was quite long, and 17,000 events were reported in the catalog between 1956 and 1990. We carefully eliminated some arrivals due to ambiguities in picking analog-type data and station time corrections. We chose arrivals with comparably low residuals between observed and calculated travel times (<1 sec). We also limited our data to a minimum of 10 P arrivals and a maximum azimuthal gap of 180 degrees. Finally, 475 events were selected with magnitude greater than 1.5, recorded by 84 stations. We obtained good resolution down to 70 km. First, we used a 1-D coupled inversion algorithm (VELEST) to calculate the velocity model and the relocations. The same model convergence is observed for the mid and lower crust; the upper layer (0-10 km) is observed to be sensitive to the starting model. We used vertical seismic profiling data from boreholes in Georgia to fix the upper-layer velocities. We relocated all events in the region using the new reference 1-D velocity model. The 3-D coupled inversion algorithm (SIMULPS14) was then applied using the 1-D reference model as a starting model. We observed very large horizontal shifts (up to 50 km), and clustered events that correlate well with quarry blasts from the Tkibuli mining area. We applied a resolution test to estimate the spatial resolution of the tomographic images. The results of the test indicate that the initial model is well reconstructed for all depth slices, though it is badly reconstructed for the shallowest layer (depth = 5 km). The Moho geometry beneath the Caucasus has been determined reliably by previous geophysical studies; it shows a relatively large depth variation in this region, from 28 to 61 km, and our tomography result for the uppermost mantle (50 km) reflects this depth variation of the Moho discontinuity.

  9. Determination of the design space of the HPLC analysis of water-soluble vitamins.

    PubMed

    Wagdy, Hebatallah A; Hanafi, Rasha S; El-Nashar, Rasha M; Aboul-Enein, Hassan Y

    2013-06-01

    The analysis of water-soluble vitamins has been extensively addressed over the last decades. A multitude of HPLC methods have been reported with a variety of advantages and shortcomings, yet the design space of the HPLC analysis of these vitamins was not defined in any of these reports. As per the Food and Drug Administration (FDA), implementing the Quality by Design approach in the analysis of commercially available mixtures is hypothesized to enhance the pharmaceutical industry by facilitating the process of analytical method development and approval. This work illustrates a multifactorial optimization of three measured plus seven calculated influential HPLC parameters for the analysis of a mixture containing seven common water-soluble vitamins (B1, B2, B6, B12, C, PABA, and PP). The three measured parameters are gradient time, temperature, and ternary eluent composition (B1/B2), and the seven calculated parameters are flow rate, column length, column internal diameter, dwell volume, extracolumn volume, %B (start), and %B (end). The design is based on 12 experiments in which the multifactorial effects of these 3 + 7 parameters on the critical resolution and selectivity were examined by systematic variation of all parameters simultaneously. The 12 basic runs were based on two different gradient times, each at two different temperatures, repeated at three different ternary eluent compositions (methanol or acetonitrile or a mixture of both). Multidimensional robust regions of high critical R(s) were defined and graphically verified. The optimum method was selected based on the best-resolution separation in the shortest run time for a synthetic mixture, followed by application to two pharmaceutical preparations available in the market. The predicted retention times of all peaks were found to be in good agreement with the virtual ones. In conclusion, the presented report offers an accurate determination of the design space for critical resolution in the analysis of water-soluble vitamins by HPLC, which would help the regulatory authorities to judge the validity of presented analytical methods for approval. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Age accuracy and resolution of Quaternary corals used as proxies for sea level

    NASA Astrophysics Data System (ADS)

    Edinger, E. N.; Burr, G. S.; Pandolfi, J. M.; Ortiz, J. C.

    2007-01-01

    The accuracy of global eustatic sea level curves measured from raised Quaternary reefs, using radiometric ages of corals at known heights, may be limited by time-averaging, which affects the variation in coral age at a given height. Time-averaging was assessed in uplifted Holocene reef sequences from the Huon Peninsula, Papua New Guinea, using radiocarbon dating of coral skeletons in both horizontal transects and vertical sequences. Calibrated 2σ age ranges varied from 800 to 1060 years along horizontal transects, but weighted mean ages calculated from 15-18 dates per horizon were accurate to a resolution within 154-214 yr. Approximately 40% of the variability in age estimate resulted from internal variability inherent to 14C estimates, and 60% was due to time-averaging. The accuracy of age estimates of sea level change in studies using single dated corals as proxies for sea level is probably within 1000 yr of actual age, but can be resolved to ≤ 250 yr if supported by dates from analysis of a statistical population of corals at each stratigraphic interval. The range of time-averaging among reef corals was much less than that for shelly benthos. Ecological time-averaging dominated over sedimentological time averaging for reef corals, opposite to patterns reported from shelly benthos in siliciclastic environments.
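    The weighted-mean step is the standard inverse-variance estimator; a minimal sketch (the authors' exact calibration and weighting scheme is not reproduced here):

        import numpy as np

        def weighted_mean_age(ages, sigmas):
            """Inverse-variance weighted mean of calibrated ages (yr) with their
            1-sigma errors; returns (mean, 1-sigma uncertainty of the mean)."""
            ages = np.asarray(ages, dtype=float)
            w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            return (w * ages).sum() / w.sum(), np.sqrt(1.0 / w.sum())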

  11. Stellar Laboratories: New Ge V and Ge VI Oscillator Strengths and Their Validation in the Hot White Dwarf RE 0503-289

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Werner, K.; Biemont, E.; Quinet, P.; Kruk, J. W.

    2013-01-01

    State-of-the-art spectral analysis of hot stars by means of non-LTE model-atmosphere techniques has arrived at a high level of sophistication. The analysis of high-resolution and high-S/N spectra, however, is strongly restricted by the lack of reliable atomic data for highly ionized species, from intermediate-mass metals to trans-iron elements. Data for the latter, especially, have only been sparsely calculated. Many of their lines are identified in spectra of extremely hot, hydrogen-deficient post-AGB stars. A reliable determination of their abundances establishes crucial constraints for AGB nucleosynthesis simulations and, thus, for stellar evolutionary theory. Aims. In a previous analysis of the UV spectrum of RE 0503-289, spectral lines of highly ionized Ga, Ge, As, Se, Kr, Mo, Sn, Te, I, and Xe were identified. Individual abundance determinations are hampered by the lack of reliable oscillator strengths. Most of these identified lines stem from Ge V; in addition, we identified Ge VI lines for the first time. We calculated Ge V and Ge VI oscillator strengths in order to reproduce the observed spectrum. Methods. We newly calculated Ge V and Ge VI oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our non-LTE stellar-atmosphere models for the analysis of the Ge IV-VI spectrum exhibited in high-resolution and high-S/N FUV (FUSE) and UV (ORFEUS/BEFS, IUE) observations of RE 0503-289. Results. In the UV spectrum of RE 0503-289, we identify four Ge IV, 37 Ge V, and seven Ge VI lines. Most of these lines are identified for the first time in any star. We can reproduce almost all Ge IV, Ge V, and Ge VI lines in the observed spectrum of RE 0503-289 (T(sub eff) = 70 kK, log g = 7.5) at log Ge = -3.8 +/- 0.3 (mass fraction, about 650 times solar). The Ge IV/V/VI ionization equilibrium, which is a very sensitive T(sub eff) indicator, is reproduced well. Conclusions. Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Our oscillator-strength calculations have allowed, for the first time, Ge V and Ge VI lines to be successfully reproduced in the spectrum of a white dwarf (RE 0503-289) and its photospheric Ge abundance to be determined.

  12. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of the USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, the drainage network, slope, slope length, and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
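    The per-cell LS evaluation at the end of that chain can be sketched with one common USLE-style formulation (Moore and Burch); constants and exponents differ between variants, so this is an illustration rather than the paper's algorithm. The MPI parallelism itself mainly concerns the flow-routing steps, which this purely local step does not need:

        import numpy as np

        def ls_factor(specific_catchment_area, slope_deg):
            """LS = (a / 22.13)^0.4 * (sin(beta) / 0.0896)^1.3 per grid cell
            (Moore & Burch form; one common choice, not the paper's exact one)."""
            a = np.asarray(specific_catchment_area, dtype=float)
            beta = np.radians(np.asarray(slope_deg, dtype=float))
            return (a / 22.13) ** 0.4 * (np.sin(beta) / 0.0896) ** 1.3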

  13. Silicon Oxysulfide, OSiS: Rotational Spectrum, Quantum-Chemical Calculations, and Equilibrium Structure.

    PubMed

    Thorwirth, Sven; Mück, Leonie Anna; Gauss, Jürgen; Tamassia, Filippo; Lattanzi, Valerio; McCarthy, Michael C

    2011-06-02

    Silicon oxysulfide, OSiS, and seven of its minor isotopic species have been characterized for the first time in the gas phase at high spectral resolution by means of Fourier transform microwave spectroscopy. The equilibrium structure of OSiS has been determined from the experimental data using calculated vibration-rotation interaction constants. The structural parameters (rO-Si = 1.5064 Å and rSi-S = 1.9133 Å) are in very good agreement with values from high-level quantum chemical calculations using coupled-cluster techniques together with sophisticated additivity and extrapolation schemes. The bond distances in OSiS are very short in comparison with those in SiO and SiS. This unexpected finding is explained by the partial charges calculated for OSiS via a natural population analysis. The results suggest that electrostatic effects rather than multiple bonding are the key factors in determining bonding in this triatomic molecule. The data presented provide the spectroscopic information needed for radio astronomical searches for OSiS.

  14. Monitoring the state of vegetation in Hungary using 15 years long MODIS Data

    NASA Astrophysics Data System (ADS)

    Kern, Anikó; Bognár, Péter; Pásztor, Szilárd; Barcza, Zoltán; Timár, Gábor; Lichtenberger, János; Ferencz, Csaba

    2015-04-01

    Monitoring the state and health of vegetation is essential to understand the causes and severity of environmental change and to prepare for the negative effects of climate change on plant growth and productivity. Satellite remote sensing is the fundamental tool for monitoring and studying changes in vegetation activity in general and for understanding its relationship with climate fluctuations. Vegetation indices and other vegetation-related measures calculated from remotely sensed data are widely used to monitor and characterize the state of the terrestrial vegetation. The Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) are among the most popular indices that can be calculated from measurements of the MODerate resolution Imaging Spectroradiometer (MODIS) sensor onboard the NASA EOS-AM1/Terra and EOS-PM1/Aqua satellites (since 1999 and 2002, respectively). Based on the available 15-year MODIS record (2000-2014), the vegetation characteristics of Hungary were investigated in our research, primarily using vegetation indices. The MODIS NDVI and EVI (both part of the so-called MOD13 product of NASA) are freely available at a finest spatial resolution of 250 m and a temporal resolution of 16 days since 2000/2002 (for Terra and Aqua, respectively). The accuracy, spatial resolution, and temporal continuity of the MODIS products make these datasets highly valuable despite their relatively short temporal coverage. NDVI is also calculated routinely from the raw MODIS data collected by the receiving station of Eötvös Loránd University. In order to characterize vegetation activity and its variability within the Carpathian Basin, the area-averaged annual cycles and their interannual variability were determined. The main aim was to find those years that can be considered extreme according to specific indices. Using archive meteorological data, the effects of extreme weather on vegetation activity and growth were investigated with emphasis on drought and heat waves. The relationship between anomalies of vegetation characteristics and crop yield decreases in agricultural regions was characterised as well. The mean NDVI values of Hungary during the 15 years reveal the behaviour of the vegetation in the country, with the main land cover types (forest, agriculture, and grassland) distinguished as well. NDVI anomalies are analyzed separately for the main land cover types, and deviations from the potential maximum vegetation greenness are also calculated for the entire time period.
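    Both indices are fixed band combinations of surface reflectance, so a sketch is short; the EVI coefficients are the standard MODIS values, and the anomaly helper illustrates the climatology comparison described above (data layout assumed):

        import numpy as np

        def ndvi(nir, red):
            """NDVI = (NIR - Red) / (NIR + Red) from surface reflectances."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red)

        def evi(nir, red, blue):
            """EVI with the standard MODIS coefficients (G=2.5, C1=6, C2=7.5, L=1)."""
            nir, red, blue = (np.asarray(a, float) for a in (nir, red, blue))
            return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

        def anomaly(composite, climatology):
            """Deviation of a 16-day composite from its multi-year mean."""
            return np.asarray(composite, float) - np.asarray(climatology, float)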

  15. Signal and background considerations for the MRSt on the National Ignition Facility (NIF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wink, C. W., E-mail: cwink@mit.edu; Frenje, J. A.; Gatu Johnson, M.

    2016-11-15

    A Magnetic Recoil Spectrometer (MRSt) has been conceptually designed for time-resolved measurements of the neutron spectrum at the National Ignition Facility. Using the MRSt, the goals are to measure the time-evolution of the spectrum with a time resolution of ∼20 ps and absolute accuracy better than 5%. To meet these goals, a detailed understanding and optimization of the signal and background characteristics are required. Through ion-optics, MCNP simulations, and detector-response calculations, it is demonstrated that the goals and a signal-to-background >5-10 for the down-scattered neutron measurement are met if the background, consisting of ambient neutrons and gammas, at the MRSt is reduced 50-100 times.

  16. Signal and background considerations for the MRSt on the National Ignition Facility (NIF)

    DOE PAGES

    Wink, C. W.; Frenje, J. A.; Hilsabeck, T. J.; ...

    2016-08-03

    A Magnetic Recoil Spectrometer (MRSt) has been conceptually designed for time-resolved measurements of the neutron spectrum at the National Ignition Facility. Using the MRSt, the goals are to measure the time-evolution of the spectrum with a time resolution of ~20 ps and absolute accuracy better than 5%. To meet these goals, a detailed understanding and optimization of the signal and background characteristics are required. Through ion-optics, MCNP simulations, and detector-response calculations, we demonstrate that the goals and a signal-to-background >5-10 for the down-scattered neutron measurement are met if the background, consisting of ambient neutrons and gammas, at the MRSt is reduced 50-100 times.

  17. Simulation of Atmospheric Dispersion of Elevated Releases from Point Sources in Mississippi Gulf Coast with Different Meteorological Data

    PubMed Central

    Yerramilli, Anjaneyulu; Srinivas, Challa Venkata; Dasari, Hari Prasad; Tuluri, Francis; White, Loren D.; Baham, Julius M.; Young, John H.; Hughes, Robert; Patrick, Chuck; Hardy, Mark G.; Swanier, Shelton J.

    2009-01-01

    Atmospheric dispersion calculations are made using the HYSPLIT particle dispersion model to study the transport and dispersion of airborne releases from elevated point sources in the Mississippi Gulf coastal region. Simulations are performed separately with three meteorological data sets having different spatial and temporal resolution for a typical summer period (1–3 June 2006) representing a weak synoptic condition. The first two data sets are the NCEP global and regional analyses (FNL, EDAS), while the third is a mesoscale simulation generated using the Weather Research and Forecasting model with nested domains at a fine resolution of 4 km. The mesoscale model results show significant temporal and spatial variations in the meteorological fields as a result of the combined influences of the land-sea breeze circulation, the large-scale flow field and diurnal alteration in the mixing depth across the coast. The model-predicted SO2 concentrations showed that the trajectory and the concentration distribution varied in the three cases of input data. While calculations with FNL data show an overall higher correlation, there is a significant positive bias during daytime and a negative bias during nighttime. Calculations with EDAS fields are significantly below the observations during both daytime and nighttime, though the plume behavior follows the coastal circulation. The diurnal plume behavior and its distribution are better simulated using the mesoscale WRF meteorological fields in the coastal environment, suggesting its suitability for pollution dispersion impact assessment at the local scale. Results of the different simulation cases, comparison with observations, and the correlation and bias in each case are presented. PMID:19440433

  18. Estimation of Sea Ice Thickness Distributions through the Combination of Snow Depth and Satellite Laser Altimetry Data

    NASA Technical Reports Server (NTRS)

    Kurtz, Nathan T.; Markus, Thorsten; Cavalieri, Donald J.; Sparling, Lynn C.; Krabill, William B.; Gasiewski, Albin J.; Sonntag, John G.

    2009-01-01

    Combinations of sea ice freeboard and snow depth measurements from satellite data have the potential to provide a means to derive global sea ice thickness values. However, large differences in spatial coverage and resolution between the measurements lead to uncertainties when combining the data. High resolution airborne laser altimeter retrievals of snow-ice freeboard and passive microwave retrievals of snow depth taken in March 2006 provide insight into the spatial variability of these quantities as well as optimal methods for combining high resolution satellite altimeter measurements with low resolution snow depth data. The aircraft measurements show a relationship between freeboard and snow depth for thin ice allowing the development of a method for estimating sea ice thickness from satellite laser altimetry data at their full spatial resolution. This method is used to estimate snow and ice thicknesses for the Arctic basin through the combination of freeboard data from ICESat, snow depth data over first-year ice from AMSR-E, and snow depth over multiyear ice from climatological data. Due to the non-linear dependence of heat flux on ice thickness, the impact on heat flux calculations when maintaining the full resolution of the ICESat data for ice thickness estimates is explored for typical winter conditions. Calculations of the basin-wide mean heat flux and ice growth rate using snow and ice thickness values at the 70 m spatial resolution of ICESat are found to be approximately one-third higher than those calculated from 25 km mean ice thickness values.
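
    The abstract does not spell out the freeboard-to-thickness conversion; a commonly used hydrostatic-equilibrium relation is sketched below, with nominal water, ice and snow densities that are assumptions rather than the values used by the authors.

```python
def ice_thickness_from_freeboard(F, h_s, rho_w=1024.0, rho_i=915.0, rho_s=320.0):
    """Sea ice thickness (m) from total (snow) freeboard F and snow depth h_s,
    assuming hydrostatic equilibrium; densities (kg/m^3) are nominal values."""
    return (rho_w * F - (rho_w - rho_s) * h_s) / (rho_w - rho_i)

# e.g. 0.40 m total freeboard with 0.20 m of snow on top
print(f"{ice_thickness_from_freeboard(0.40, 0.20):.2f} m")  # ~2.47 m
```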

  19. Stellar Laboratories II. New Zn iv and Zn v Oscillator Strengths and Their Validation in the Hot White Dwarfs G191-B2B and RE 0503-289

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Werner, K.; Quinet, P.; Kruk, J. W.

    2014-01-01

    Context. For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. In a recent analysis of the ultraviolet (UV) spectrum of the DA-type white dwarf G191-B2B, 21 Zn iv lines were newly identified. Because of the lack of Zn iv data, transition probabilities of the isoelectronic Ge vi were adapted for a first, coarse determination of the photospheric Zn abundance. Aims. Reliable Zn iv and Zn v oscillator strengths are used to improve the Zn abundance determination and to identify more Zn lines in the spectra of G191-B2B and the DO-type white dwarf RE 0503-289. Methods. We performed new calculations of Zn iv and Zn v oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of the Zn iv - v spectrum exhibited in high-resolution and high-S/N UV observations of G191-B2B and RE 0503-289. Results. In the UV spectrum of G191-B2B, we identify 31 Zn iv and 16 Zn v lines. Most of these are identified for the first time in any star. We can reproduce almost all of them well at log Zn = -5.52 ± 0.2 (mass fraction, about 1.7 times solar). In particular, the Zn iv / Zn v ionization equilibrium, which is a very sensitive Teff indicator, is well reproduced with the previously determined Teff = 60 000 ± 2000 K and log g = 7.60 ± 0.05. In the spectrum of RE 0503-289, we identified 128 Zn v lines for the first time and determined log Zn = -3.57 ± 0.2 (about 155 times solar). Conclusions. Reliable measurements and calculations of atomic data are a pre-requisite for stellar-atmosphere modeling. Observed Zn iv and Zn v line profiles in the ultraviolet spectra of the two white dwarfs (G191-B2B and RE 0503-289) were well reproduced with our newly calculated oscillator strengths. This allowed us to determine the photospheric Zn abundance of these two stars precisely.

  20. A High Resolution Liquid Xenon Imaging Telescope for 0.3-10 MeV Gamma Ray Astrophysics: Construction and Initial Balloon Flights

    NASA Technical Reports Server (NTRS)

    Aprile, Elena

    1993-01-01

    The results achieved with a 3.5 liter liquid xenon time projection chamber (LXe-TPC) prototype during the first year include: the efficiency of detecting the primary scintillation light for event triggering has been measured to be higher than 85%; the charge response has been measured to be stable to within 0.1% for a period of time of about 30 hours; the electron lifetime has been measured to be in excess of 1.3 ms; the energy resolution has been measured to be consistent with previous results obtained with small volume chambers; X-Y gamma ray imaging has been demonstrated with a nondestructive orthogonal wires readout; Monte Carlo simulation results on detection efficiency, expected background count rate at balloon altitude, background reduction algorithms, telescope response to point-like and diffuse sources, and polarization sensitivity calculations; and work on a 10 liter LXe-TPC prototype and gas purification/recovery system.

  1. Land Cover Monitoring for Water Resources Management in Angola

    NASA Astrophysics Data System (ADS)

    Miguel, Irina; Navarro, Ana; Rolim, Joao; Catalao, Joao; Silva, Joel; Painho, Marco; Vekerdy, Zoltan

    2016-08-01

    The aim of this paper is to assess the impact of improved temporal resolution and multi-source satellite data (SAR and optical) on land cover mapping and monitoring for efficient water resources management. For that purpose, we developed an integrated approach based on image classification and on NDVI and SAR backscattering (VV and VH) time series for land cover mapping and for computing crop irrigation requirements. We analysed 28 SPOT-5 Take-5 images with a high temporal revisit (5 days), 9 Sentinel-1 dual-polarization GRD images and in-situ data acquired during the crop growing season. Results show that the combination of images from different sources provides the best information to map agricultural areas. The increased temporal resolution of the images improves the estimation of the crop parameters and, in turn, the calculation of the crop irrigation requirements, as sketched below. However, this aspect was not fully exploited due to the lack of EO data for the complete growing season.
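
    As a rough sketch of how a vegetation-index time series can feed an irrigation-requirement estimate (the paper does not give its formulas; this follows a generic FAO-56-style approach with an assumed linear Kc-NDVI relation, so all coefficients are placeholders):

```python
def crop_irrigation_requirement(ndvi, et0_mm, rain_eff_mm, a=1.25, b=0.1):
    """Crop coefficient Kc from NDVI via an assumed linear relation
    Kc = a*NDVI + b, then crop evapotranspiration ETc = Kc * ET0 and the
    net irrigation requirement max(ETc - effective rainfall, 0), in mm.
    Coefficients a, b are site-specific assumptions, not values from the paper."""
    kc = a * ndvi + b
    etc = kc * et0_mm
    return max(etc - rain_eff_mm, 0.0)

# hypothetical 10-day period: NDVI 0.6, ET0 45 mm, effective rainfall 12 mm
print(f"{crop_irrigation_requirement(0.6, 45.0, 12.0):.1f} mm")  # ~26.2 mm
```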

  2. Diffraction Efficiency of Thin Film Holographic Beam Steering Devices

    NASA Technical Reports Server (NTRS)

    Titus, Charles M.; Pouch, John; Nguyen, Hung; Miranda, Felix; Bos, Philip J.

    2003-01-01

    Dynamic holography has been demonstrated as a method for correcting aberrations in space-deployable optics, and can also be used to achieve high-resolution beam steering in the same environment. In this paper, we consider some of the factors affecting the efficiency of these devices. Specifically, for a blazed thin-film beam steering grating, we consider how the number of discrete phase steps per period affects the efficiency for a highly collimated beam. The effect of the number of discrete phase steps per period on steering resolution is also considered. We also present some results of Finite-Difference Time-Domain (FDTD) calculations of light propagating through liquid crystal "blazed" gratings. Liquid crystal gratings are shown to spatially modulate both the phase and amplitude of the propagating light.
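
    For an ideal blazed phase grating quantized into N discrete phase steps per period, the first-order diffraction efficiency follows the standard [sin(π/N)/(π/N)]² result; real devices will deviate from this ideal, so the sketch below is only the textbook baseline.

```python
import math

def quantized_blaze_efficiency(N):
    """First-order diffraction efficiency of an ideal blazed phase grating
    quantized into N phase steps per period: [sin(pi/N)/(pi/N)]^2."""
    x = math.pi / N
    return (math.sin(x) / x) ** 2

for N in (2, 4, 8, 16):
    print(N, f"{quantized_blaze_efficiency(N):.3f}")
# 2 -> 0.405, 4 -> 0.811, 8 -> 0.950, 16 -> 0.987
```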

  3. Identifying Ca2+-Binding Sites in Proteins by Liquid Chromatography-Mass Spectrometry Using Ca2+-Directed Dissociations

    PubMed Central

    Jamalian, Azadeh; Sneekes, Evert-Jan; Wienk, Hans; Dekker, Lennard J. M.; Ruttink, Paul J. A.; Ursem, Mario; Luider, Theo M.; Burgers, Peter C.

    2014-01-01

    Here we describe a new method to identify calcium-binding sites in proteins using high-resolution liquid chromatography-mass spectrometry in concert with calcium-directed collision-induced dissociations. Our method does not require any modifications to the liquid chromatography-mass spectrometry apparatus, uses standard digestion protocols, and can be applied to existing high-resolution MS data files. In contrast to NMR, our method is applicable to very small amounts of complex protein mixtures (femtomole level). Calcium-bound peptides can be identified using three criteria: (1) the calculated exact mass of the calcium containing peptide; (2) specific dissociations of the calcium-containing peptide from threonine and serine residues; and (3) the very similar retention times of the calcium-containing peptide and the free peptide. PMID:25023127
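
    Criterion (1) amounts to searching for a fixed monoisotopic mass shift between the free and calcium-bound peptide. Assuming Ca2+ binds with the loss of two protons, the expected shift is m(40Ca) − 2·m(H) ≈ +37.947 Da; the sketch below (a toy check, not the authors' pipeline) tests a candidate mass against this assumed shift.

```python
M_CA40 = 39.9625909    # monoisotopic mass of 40Ca, Da
M_H = 1.0078250319     # monoisotopic mass of a hydrogen atom, Da
CA_SHIFT = M_CA40 - 2 * M_H  # Ca2+ bound with loss of two protons: ~ +37.947 Da

def is_calcium_bound(free_mass, candidate_mass, tol_ppm=5.0):
    """Check whether candidate_mass matches free_mass + CA_SHIFT within tol_ppm."""
    expected = free_mass + CA_SHIFT
    return abs(candidate_mass - expected) / expected * 1e6 <= tol_ppm

print(f"expected shift: {CA_SHIFT:.4f} Da")
print(is_calcium_bound(1500.7000, 1538.6460))  # True within 5 ppm
```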

  4. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography.

    PubMed

    Li, Qiong; Gluch, Jürgen; Krüger, Peter; Gall, Martin; Neinhuis, Christoph; Zschech, Ehrenfried

    2016-10-14

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface- and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity.

  5. A High Resolution Phase Shifting Interferometer.

    NASA Astrophysics Data System (ADS)

    Bayda, Michael; Bartscher, Christoph; Wilkinson, Allen

    1997-03-01

    Configuration, operation, and performance details of a high resolution phase shifting Twyman-Green interferometer are presented. The instrument was used for density relaxation experiments on very compressible liquid-vapor critical fluids. (A companion talk in the Nonequilibrium Phenomena session under Complex Fluids presents density equilibration work.) A sample assembly contained the cell, beam splitter, phase shifter, and mirrors inside a 6 cm diameter by 6 cm long aluminum cylinder. This sample assembly was contained inside a thermostat stable to 50 μK RMS deviation. A thin phase-retarding Liquid Crystal Cell (LCC) was placed in the reference arm of the interferometer. The LCC provided four cumulative 90 degree phase shifts to produce the four images used in computing each phase map. The Carré technique was used to calculate a phase value for each pixel from the four intensities of that pixel. Four images for one phase map could be acquired in less than two seconds. The spatial resolution was 25 μm. The phase resolution of the interferometer over a six second period was better than λ/400. The phase stability of the interferometer during 25 hours was better than λ/70. Factors affecting timing and resolution, as well as other phase shifting devices, will be discussed.
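
    The Carré algorithm recovers the wrapped phase from four equally stepped intensity frames without requiring the exact step size to be known. A minimal NumPy sketch is given below; the sign handling for quadrant recovery follows the usual convention, and the synthetic test values are illustrative only.

```python
import numpy as np

def carre_phase(I1, I2, I3, I4):
    """Wrapped phase per pixel from four equally phase-stepped interferograms
    (Carré algorithm; the step size itself need not be known exactly)."""
    d23 = I2 - I3
    d14 = I1 - I4
    num = np.sqrt(np.abs((3.0 * d23 - d14) * (d23 + d14)))
    den = (I2 + I3) - (I1 + I4)
    # the sign of (I2 - I3) carries the sign of sin(phase) for quadrant recovery
    return np.arctan2(np.sign(d23) * num, den)

# synthetic test: 90-degree steps, arbitrary bias and modulation;
# phase is recovered relative to the midpoint of the four steps
phi = np.linspace(-np.pi, np.pi, 5)[:-1]
frames = [2.0 + 1.5 * np.cos(phi + k * np.pi / 2) for k in (0, 1, 2, 3)]
print(np.round(carre_phase(*frames), 3))
```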

  6. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  7. Accurate simulation of backscattering spectra in the presence of sharp resonances

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Alves, E.; Jeynes, C.; Tosaki, M.

    2006-06-01

    In elastic backscattering spectrometry, the shape of the observed spectrum due to resonances in the nuclear scattering cross-section is influenced by many factors. If the energy spread of the beam before interaction is larger than the resonance width, then a simple convolution with the energy spread on exit and with the detection system resolution will lead to a calculated spectrum with a resonance much sharper than the observed signal. Also, the yield from a thin layer will not be calculated accurately. We have developed an algorithm for the accurate simulation of backscattering spectra in the presence of sharp resonances. Albeit approximate, the algorithm leads to dramatic improvements in the quality and accuracy of the simulations. It is simple to implement and leads to only small increases of the calculation time, being thus suitable for routine data analysis. We show different experimental examples, including samples with roughness and porosity.
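
    The broadening effect the authors describe can be illustrated by convolving a resonance line shape with the beam energy spread at the depth of interaction. The sketch below uses a hypothetical Lorentzian resonance and Gaussian spread; all widths and energies are invented for illustration.

```python
import numpy as np

def gaussian(E, sigma):
    """Unit-area Gaussian kernel."""
    return np.exp(-0.5 * (E / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# hypothetical resonance: Lorentzian, half-width 5 keV, centered at 3000 keV
E = np.linspace(2900.0, 3100.0, 2001)   # energy grid, keV
gamma = 5.0
sigma_res = 1.0 / (1.0 + ((E - 3000.0) / gamma) ** 2)

# beam energy spread at the interaction depth (std. dev. 15 keV),
# deliberately much larger than the resonance width
dE = E[1] - E[0]
kernel = gaussian(E - 3000.0, 15.0)
effective = np.convolve(sigma_res, kernel, mode="same") * dE

print(f"peak drops from {sigma_res.max():.2f} to {effective.max():.2f}")
```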

  8. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo

    2015-10-14

    We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.

  9. Stellar laboratories. VI. New Mo iv-vii oscillator strengths and the molybdenum abundance in the hot white dwarfs G191-B2B and RE 0503-289

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Quinet, P.; Hoyer, D.; Werner, K.; Demleitner, M.; Kruk, J. W.

    2016-03-01

    Context. For the spectral analysis of high-resolution and high signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. Aims: To identify molybdenum lines in the ultraviolet (UV) spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE 0503-289 and to determine their photospheric Mo abundances, reliable Mo iv-vii oscillator strengths are used. Methods: We newly calculated Mo iv-vii oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Mo lines exhibited in high-resolution and high-S/N UV observations of RE 0503-289. Results: We identified 12 Mo v and 9 Mo vi lines in the UV spectrum of RE 0503-289 and measured a photospheric Mo abundance of 1.2-3.0 × 10^-4 (mass fraction, 22 500-56 400 times the solar abundance). In addition, from the As v and Sn iv resonance lines, we measured mass fractions of arsenic (0.5-1.3 × 10^-5, about 300-1200 times solar) and tin (1.3-3.2 × 10^-4, about 14 300-35 200 times solar). For G191-B2B, upper limits were determined for the abundances of Mo (5.3 × 10^-7, 100 times solar) and, in addition, for Kr (1.1 × 10^-6, 10 times solar) and Xe (1.7 × 10^-7, 10 times solar). The arsenic abundance was determined (2.3-5.9 × 10^-7, about 21-53 times solar). A new, registered German Astrophysical Virtual Observatory (GAVO) service, TOSS, has been constructed to provide weighted oscillator strengths and transition probabilities. Conclusions: Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Mo v-vi line profiles in the UV spectrum of the white dwarf RE 0503-289 were well reproduced with our newly calculated oscillator strengths. For the first time, this allowed the photospheric Mo abundance in a white dwarf to be determined. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26666. Based on observations made with the NASA-CNES-CSA Far Ultraviolet Spectroscopic Explorer. Tables A.10-A.13 are only available via the German Astrophysical Virtual Observatory (GAVO) service TOSS (http://dc.g-vo.org/TOSS).

  10. Model-based review of Doppler global velocimetry techniques with laser frequency modulation

    NASA Astrophysics Data System (ADS)

    Fischer, Andreas

    2017-06-01

    Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flow. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With N_photons as the number of scattered photons, the minimal standard deviation of the flow velocity is about 106 m/s/√(N_photons), which was calculated for a perpendicular arrangement of the illumination and observation directions and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte-Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because this means acquiring more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution when operating at the maximal frame rate of the camera. DGV without laser frequency modulation then provides the highest temporal resolutions and is not sensitive to temporal variations of the scattered light intensity, but it is sensitive to spatial variations. In contrast to this, all DGV variants suffer from velocity variations during the measurement. In summary, the experimental conditions and the measurement task finally decide the ideal choice among the reviewed DGV methods.

  11. SU-C-BRC-07: Parametrized GPU Accelerated Electron Monte Carlo Second Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: I am presenting a parameterized 3D GPU accelerated electron Monte Carlo second check program. Method: I wrote the 3D grid dose calculation algorithm in CUDA and utilized an NVIDIA GeForce GTX 780 Ti to run all of the calculations. The electron path beyond the distal end of the cone is governed by four parameters: the amplitude of scattering (AMP), the mean and width of a Gaussian energy distribution (E and α), and the percentage of photons. In my code, I adjusted all parameters until the calculated PDD and profile fit the measured 10×10 open beam data within 1%/1mm. I then wrote a user interface for reading the DICOM treatment plan and images in Python. In order to verify the algorithm, I calculated 3D dose distributions on a variety of phantoms and geometries, and compared them with the Eclipse eMC calculations. I also calculated several patient specific dose distributions, including a nose and an ear. Finally, I compared my algorithm's computation times to Eclipse's. Results: The calculated MU for all of the investigated geometries agree with the TPS within the TG-114 action level of 5%. The MU for the nose was < 0.5 % different while the MU for the ear at 105 SSD was ∼2 %. Calculation times for a 12MeV 10×10 open beam ranged from 1 second for a 2.5 mm grid resolution with ∼15 million particles to 33 seconds on a 1 mm grid with ∼460 million particles. Eclipse calculation runtimes distributed over 10 FAS workers were 9 seconds to 15 minutes respectively. Conclusion: The GPU accelerated second check allows quick MU verification while accounting for patient specific geometry and heterogeneity.

  12. Medical imaging feasibility in body fluids using Markov chains

    NASA Astrophysics Data System (ADS)

    Kavehrad, M.; Armstrong, A. D.

    2017-02-01

    A relatively wide field-of-view and high resolution imaging is necessary for navigating the scope within the body, inspecting tissue, diagnosing disease, and guiding surgical interventions. As the large number of modes available in multimode fibers (MMF) provides higher resolution, MMFs could replace the millimeters-thick bundles of fibers and lenses currently used in endoscopes. However, attributes of body fluids and obscurants, such as blood, impose perennial limitations on the resolution and reliability of optical imaging inside the human body. To design and evaluate optimum imaging techniques that operate under realistic body fluid conditions, a good understanding of the channel (medium) behavior is necessary. In most prior works, the Monte-Carlo Ray Tracing (MCRT) algorithm has been used to analyze the channel behavior. This task is quite numerically intensive. The focus of this paper is on investigating the possibility of simplifying this task by a direct extraction of the state transition matrices associated with standard Markov modeling from the MCRT computer simulation programs. We show that by tracing a photon's trajectory in the body fluids via a Markov chain model, the angular distribution can be calculated by simple matrix multiplications. We also demonstrate that the new approach produces results that are close to those obtained by MCRT and other known methods. Furthermore, considering the fact that angular, spatial, and temporal distributions of energy are inter-related, the mixing time of the Monte-Carlo Markov Chain (MCMC) for different types of liquid concentrations is calculated based on eigen-analysis of the state transition matrix, and the possibility of imaging in scattering media is investigated. To this end, we have started to characterize the body fluids that reduce the resolution of imaging [1].
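
    The core idea, propagating a photon-state distribution by repeated multiplication with a transition matrix and reading a mixing-time estimate off the spectral gap, can be sketched in a few lines. The 3-state matrix below is a toy stand-in, not one extracted from MCRT.

```python
import numpy as np

# hypothetical 3-state scattering model (states could be angular bins);
# rows of P sum to 1: P[i, j] = probability of moving from state i to state j
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

# distribution after n scattering steps: simple matrix multiplications
p = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    p = p @ P
print("distribution after 50 steps:", np.round(p, 4))

# mixing-time estimate from the second-largest eigenvalue modulus
moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
gap = 1.0 - moduli[1]
print(f"spectral gap: {gap:.3f}, ~mixing steps: {1.0 / gap:.1f}")
```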

  13. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
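
    A minimal sketch of the pre-beam/beam-on split described above, assuming the 4D-MRI DVFs are flattened into rows of a data matrix and that the 2D cine slice corresponds to a known subset of DVF entries (all data here are synthetic placeholders, not the authors' implementation):

```python
import numpy as np

# hypothetical data: 10 respiratory phases of a 4D-MRI, each with a DVF of
# 1000 voxels x 3 components, flattened into rows of X (10 x 3000)
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3000))

# pre-beam phase: PCA of the respiratory-correlated DVFs
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
pcs = Vt[:2]                      # keep the first two principal components

# during treatment: fit PC weights to the motion observed on a fast 2D slice,
# i.e. only a subset of the DVF entries (here: the first 300 as a stand-in)
idx = np.arange(300)
observed = X[3, idx]              # stand-in for motion from 2D cine-MR
A = pcs[:, idx].T                 # (300 x 2) reduced model
w, *_ = np.linalg.lstsq(A, observed - mean[idx], rcond=None)

# reconstruct the full-field-of-view 3D DVF from the fitted weights
dvf_full = mean + w @ pcs
print("fitted PC weights:", np.round(w, 3))
```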

  14. Theory and Development of Position-Sensitive Quantum Calorimeters. Degree awarded by Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali; White, Nicholas E. (Technical Monitor)

    2001-01-01

    Quantum calorimeters are being developed as imaging spectrometers for future X-ray astrophysics observatories. Much of the science to be done by these instruments could benefit greatly from larger focal-plane coverage of the detector (without increasing pixel size). An order of magnitude more area will greatly increase the science throughput of these future instruments. One of the main deterrents to achieving this goal is the complexity of the readout schemes involved. We have devised a way to increase the number of pixels from the current baseline designs by an order of magnitude without increasing the number of channels required for readout. The instrument is a high energy resolution, distributed-readout imaging spectrometer called a Position-Sensitive Transition-Edge Sensor (POST). A POST is a quantum calorimeter consisting of two Transition-Edge Sensors (TESs) on the ends of a long absorber, capable of one-dimensional imaging spectroscopy. Comparing rise time and energy information from the two TESs, the position of the event in the POST is determined. The energy of the event is inferred from the sum of the two pulses. We have developed a generalized theoretical formalism for distributed-readout calorimeters and apply it to our devices. We derive the noise theory and calculate the theoretical energy resolution of a POST. Our calculations show that a 7-pixel POST with 6 keV saturation energy can achieve 2.3 eV resolution, making this a competitive design for future quantum calorimeter instruments. For this thesis we fabricated 7- and 15-pixel POSTs using Mo/Au TESs and gold absorbers, and moved from concept drawings on scraps of napkins to a 32 eV energy resolution at 1.5 keV with a 7-pixel POST calorimeter.

  15. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  16. Assessment of proximal pulmonary arterial stiffness using magnetic resonance imaging: effects of technique, age and exercise

    PubMed Central

    Kamalasanan, Anu; Cassidy, Deidre B; Struthers, Allan D; Lipworth, Brian J; Houston, J Graeme

    2016-01-01

    Introduction: To compare the reproducibility of pulmonary pulse wave velocity (PWV) techniques, and the effects of age and exercise on these. Methods: 10 young healthy volunteers (YHV) and 20 older healthy volunteers (OHV) with no cardiac or lung condition were recruited. High temporal resolution phase contrast sequences were performed through the main pulmonary arteries (MPAs), right pulmonary arteries (RPAs) and left pulmonary arteries (LPAs), while high spatial resolution sequences were obtained through the MPA. YHV underwent 2 MRIs 6 months apart with the sequences repeated during exercise. OHV underwent an MRI scan with on-table repetition. PWV was calculated using the transit time (TT) and flow-area (QA) techniques. 3 methods for calculating QA PWV were compared. Results: PWV did not differ between the two age groups (YHV 2.4±0.3 m/s, OHV 2.9±0.2 m/s, p=0.1). Using a high temporal resolution sequence through the RPA with the QA technique accounting for wave reflections yielded consistently better within-scan, interscan, intraobserver and interobserver reproducibility. Exercise did not result in a change in either TT PWV (mean (95% CI) of the differences: −0.42 (−1.2 to 0.4), p=0.24) or QA PWV (mean (95% CI) of the differences: 0.10 (−0.5 to 0.9), p=0.49) despite a significant rise in heart rate (65±2 to 87±3, p<0.0001), blood pressure (113/68 to 130/84, p<0.0001) and cardiac output (5.4±0.4 to 6.7±0.6 L/min, p=0.004). Conclusions: QA PWV performed through the RPA using a high temporal resolution sequence accounting for wave reflections yields the most reproducible measurements of pulmonary PWV. PMID:27843548
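
    The two PWV techniques compared above reduce to simple computations: transit-time PWV divides path length by the delay between the feet of two flow waveforms, and QA PWV takes the slope of flow versus lumen area during the reflection-free early-systolic period. A sketch with synthetic data (not the study's measurements):

```python
import numpy as np

def pwv_transit_time(path_length_m, t_foot_proximal_s, t_foot_distal_s):
    """Transit-time PWV: centreline distance divided by the delay between
    the feet of the proximal and distal flow waveforms."""
    return path_length_m / (t_foot_distal_s - t_foot_proximal_s)

def pwv_flow_area(Q, A, early_systole):
    """Flow-area (QA) PWV: slope of flow Q versus lumen area A during the
    reflection-free early-systolic period (boolean mask early_systole).
    Gives m/s when Q is in m^3/s and A in m^2."""
    slope, _ = np.polyfit(A[early_systole], Q[early_systole], 1)
    return slope

# hypothetical beat: area and flow sampled over 40 frames
t = np.linspace(0, 0.8, 40)
A = 6e-4 + 1e-4 * np.sin(2 * np.pi * t / 0.8) ** 2
Q = 2.5 * (A - 6e-4) + 1e-6 * np.random.default_rng(1).standard_normal(40)
print(f"QA PWV ~ {pwv_flow_area(Q, A, t < 0.2):.2f} m/s")
print(f"TT PWV ~ {pwv_transit_time(0.05, 0.012, 0.030):.2f} m/s")
```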

  17. A PET detector prototype based on digital SiPMs and GAGG scintillators.

    PubMed

    Schneider, Florian R; Shimazoe, Kenji; Somlai-Schweiger, Ian; Ziegler, Sibylle I

    2015-02-21

    Silicon Photomultipliers (SiPM) are interesting light sensors for Positron Emission Tomography (PET). The detector signal of analog SiPMs is the total charge of all fired cells. Energy and time information have to be determined with dedicated readout electronics. Philips Digital Photon Counting has developed a SiPM with added electronics on cell level delivering a digital value of the time stamp and number of fired cells. These so called Digital Photon Counters (DPC) are fully digital devices. In this study, the feasibility of using DPCs in combination with LYSO (Lutetium Yttrium Oxyorthosilicate) and GAGG (Gadolinium Aluminum Gallium Garnet) scintillators for PET is tested. Each DPC module has 64 channels with 3.2 × 3.8775 mm2, comprising 3200 cells each. GAGG is a recently developed scintillator (Zeff = 54, 6.63 g cm-3, 520 nm peak emission, 46 000 photons MeV-1, 88 ns (92%) and 230 ns (8%) decay times, non-hygroscopic, chemically and mechanically stable). Individual crystals of 2 × 2 × 6 mm3 were coupled onto each DPC pixel. LYSO coupled to the DPC results in a coincidence time resolution (CTR) of 171 ps FWHM and an energy resolution of 12.6% FWHM at 511 keV. Using GAGG, coincidence timing is 310 ps FWHM and energy resolution is 8.5% FWHM. A PET detector prototype with 2 DPCs equipped with a GAGG array matching the pixel size (3.2 × 3.8775 × 8 mm3) was assembled. To emulate a ring of 10 modules, objects are rotated in the field of view. CTR of the PET is 619 ps and energy resolution is 9.2% FWHM. The iterative MLEM reconstruction is based on system matrices calculated with an analytical detector response function model. A phantom with rods of different diameters filled with 18F was used for tomographic tests.

  18. A PET detector prototype based on digital SiPMs and GAGG scintillators

    NASA Astrophysics Data System (ADS)

    Schneider, Florian R.; Shimazoe, Kenji; Somlai-Schweiger, Ian; Ziegler, Sibylle I.

    2015-02-01

    Silicon Photomultipliers (SiPM) are interesting light sensors for Positron Emission Tomography (PET). The detector signal of analog SiPMs is the total charge of all fired cells. Energy and time information have to be determined with dedicated readout electronics. Philips Digital Photon Counting has developed a SiPM with added electronics on cell level delivering a digital value of the time stamp and number of fired cells. These so called Digital Photon Counters (DPC) are fully digital devices. In this study, the feasibility of using DPCs in combination with LYSO (Lutetium Yttrium Oxyorthosilicate) and GAGG (Gadolinium Aluminum Gallium Garnet) scintillators for PET is tested. Each DPC module has 64 channels with 3.2 × 3.8775 mm2, comprising 3200 cells each. GAGG is a recently developed scintillator (Zeff = 54, 6.63 g cm-3, 520 nm peak emission, 46 000 photons MeV-1, 88 ns (92%) and 230 ns (8%) decay times, non-hygroscopic, chemically and mechanically stable). Individual crystals of 2 × 2 × 6 mm3 were coupled onto each DPC pixel. LYSO coupled to the DPC results in a coincidence time resolution (CTR) of 171 ps FWHM and an energy resolution of 12.6% FWHM at 511 keV. Using GAGG, coincidence timing is 310 ps FWHM and energy resolution is 8.5% FWHM. A PET detector prototype with 2 DPCs equipped with a GAGG array matching the pixel size (3.2 × 3.8775 × 8 mm3) was assembled. To emulate a ring of 10 modules, objects are rotated in the field of view. CTR of the PET is 619 ps and energy resolution is 9.2% FWHM. The iterative MLEM reconstruction is based on system matrices calculated with an analytical detector response function model. A phantom with rods of different diameters filled with 18F was used for tomographic tests.

  19. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE.

    PubMed

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek

    2015-02-01

    To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piper, M; Lundquist, J K

    Some recent investigations have begun to quantify turbulence and dissipation in frontal zones to address the question of what physical mechanism counteracts the intensification of temperature and velocity gradients across a developing front. Frank (1994) examines the turbulence structure of two fronts that passed a 200 m instrumented tower near Karlsruhe, Germany. In addition to showing the mean vertical structure of the fronts as they pass the tower, Frank demonstrates that there is an order of magnitude or more increase in turbulent kinetic energy across the frontal zone. Blumen and Piper (1999) reported turbulence statistics, including dissipation rate measurements, from the MICROFRONTS field experiment, where high-frequency turbulence data were collected from tower-mounted hotwire and sonic anemometers in a cold front and in a density current. Chapman and Browning (2001) measured dissipation rate in a precipitating frontal zone with high-resolution Doppler radar. Their measurements were conducted above the surface layer, to heights of 5 km. The dissipation rate values they found are comparable to those measured by Kennedy and Shapiro (1975) in an upper-level front. Here, we expand on these recent studies by depicting the behavior of the fine scales of turbulence near the surface in a frontal zone. The primary objective of this study is to quantify the levels of turbulence and dissipation occurring in a frontal zone through the calculation of kinetic energy spectra and dissipation rates. The high-resolution turbulence data used in this study are taken during the cold front that passed the MICROFRONTS site in the early evening hours of 20 March 1995. These new measurements can be used as a basis for parameterizing the effects of surface-layer turbulence in numerical models of frontogenesis. We present three techniques for calculating the dissipation rate: the direct dissipation technique, the inertial dissipation technique and Kolmogorov's four-fifths law. Dissipation rate calculations using these techniques are employed using data from both the sonic and hotwire anemometers, when possible. Unfortunately, direct calculations of ε were not possible during a part of the frontal passage because the high wind speeds concurrent with the frontal passage demand very high frequency resolution, beyond that possible with the hotwire anemometer, for direct ε calculations. The calculations resulting from these three techniques are presented for the cold front as a time series. Quantitative comparisons of the direct and indirect calculation techniques are also given. More detail, as well as a discussion of energy spectra, can be found in Piper & Lundquist (2004).
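
    Of the three techniques, the inertial dissipation method is the most compact to sketch: fit the inertial subrange of the velocity spectrum to E11(k) = C·ε^(2/3)·k^(-5/3) and invert for ε. The implementation below rests on several assumptions (Taylor's frozen-turbulence hypothesis, an assumed one-dimensional Kolmogorov constant C ≈ 0.52, and a hand-picked inertial-subrange frequency band); it is a sketch, not the authors' processing chain.

```python
import numpy as np

def dissipation_inertial(u, fs, U_mean, C_k=0.52, band=(1.0, 10.0)):
    """Inertial-dissipation estimate of the TKE dissipation rate (m^2/s^3)
    from a velocity time series u (m/s) sampled at fs (Hz), using Taylor's
    hypothesis with mean wind U_mean (m/s) and an assumed 1-D Kolmogorov
    constant C_k. band is the assumed inertial-subrange frequency band (Hz)."""
    n = len(u)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # one-sided periodogram PSD in frequency space
    spec = (np.abs(np.fft.rfft(u - np.mean(u))) ** 2) * 2.0 / (fs * n)
    k = 2 * np.pi * freqs / U_mean          # frequency -> wavenumber (rad/m)
    E11 = spec * U_mean / (2 * np.pi)       # spectral density in k-space
    sel = (freqs > band[0]) & (freqs < band[1])
    # E11 = C_k * eps^(2/3) * k^(-5/3)  =>  eps = (E11 * k^(5/3) / C_k)^(3/2)
    return np.mean(E11[sel] * k[sel] ** (5.0 / 3.0) / C_k) ** 1.5

# usage: eps = dissipation_inertial(u_series, fs=10.0, U_mean=8.0)
```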

  1. A simple algorithm for sequentially incorporating gravity observations in seismic traveltime tomography

    USGS Publications Warehouse

    Parsons, T.; Blakely, R.J.; Brocher, T.M.

    2001-01-01

    The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
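
    The velocity-to-density step uses Gardner's empirical rule, ρ = 0.31·Vp^0.25 with Vp in m/s and ρ in g/cm³. A minimal sketch of that conversion (the gravity forward calculation and gradient adjustment are omitted):

```python
import numpy as np

def gardner_density(v_p):
    """Gardner's rule: bulk density (g/cm^3) from P-wave velocity (m/s),
    rho = 0.31 * Vp**0.25 (empirical coefficient for velocity in m/s)."""
    return 0.31 * np.asarray(v_p) ** 0.25

# example: a 1-D velocity profile from a tomography model
vp = np.array([1800.0, 3200.0, 5500.0, 6400.0])
print(np.round(gardner_density(vp), 2))  # ~ [2.02 2.33 2.67 2.77]
```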

  2. Disk hologram made from a computer-generated hologram.

    PubMed

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2009-12-01

    We have been investigating disk holograms made from a computer-generated hologram (CGH). Since a general flat-format hologram has a limited viewable area, we usually cannot see the other side of the reconstructed object. Therefore, we propose a computer-generated cylindrical hologram (CGCH) to obtain a hologram with a 360 deg viewable area. The CGCH has a special shape that is difficult to construct, and calculation of such a hologram takes too much time. In contrast, a disk-type hologram is well known as a 360 deg viewable hologram. Since a regular disk hologram is a flat reflective type, the reconstruction setup is easy. However, there are just a few reports about creating a disk hologram by use of a CGH. Because the output device lacks spatial resolution, the hologram cannot provide a large diffraction angle. In addition, the viewing zone depends on the hologram size; the maximum size of the fringe pattern is decided on the basis of the spatial frequency of the output device. The calculation amount of the proposed hologram is approximately a quarter of that of a CGCH. In a previous study, a disk hologram made from a CGH was achieved. However, since the relation between the vertical viewing zone and the reconstructed image size is a trade-off, the size of the reconstructed image and the viewing zone were not sufficient for practical use. To improve both parameters, we modified a fringe printer to output a high-resolution fringe pattern for a disk hologram. In addition, we propose a new method for fast calculation of the fringe pattern.
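
    The link between output-device resolution and diffraction angle follows from the grating equation: with a finest recordable fringe period of two pixels (the Nyquist limit), sin θ = λ/(2p) for pixel pitch p. A small illustration with an assumed wavelength and pitches, not the specifications of the printer in the paper:

```python
import math

def max_steering_angle_deg(wavelength_nm, pixel_pitch_um):
    """Maximum first-order diffraction angle for a device whose finest fringe
    period is two pixels (Nyquist): sin(theta) = lambda / (2 * pitch)."""
    s = (wavelength_nm * 1e-9) / (2.0 * pixel_pitch_um * 1e-6)
    return math.degrees(math.asin(s))

# hypothetical: 532 nm light, 1 um vs 10 um device pitch
print(f"{max_steering_angle_deg(532, 1.0):.1f} deg")   # ~15.4 deg
print(f"{max_steering_angle_deg(532, 10.0):.2f} deg")  # ~1.52 deg
```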

  3. Ground-based imaging spectrometry of canopy phenology and chemistry in a deciduous forest

    NASA Astrophysics Data System (ADS)

    Toomey, M. P.; Friedl, M. A.; Frolking, S. E.; Hilker, T.; O'Keefe, J.; Richardson, A. D.

    2013-12-01

    Phenology, the annual life cycles of plants and animals, is a dynamic ecosystem attribute and an important feedback to climate change. Vegetation phenology is commonly monitored at canopy to continental scales using ground-based digital repeat photography and satellite remote sensing, respectively. Existing systems which provide sufficient temporal resolution for phenological monitoring, however, lack the spectral resolution necessary to investigate the coupling of phenology with canopy chemistry (e.g. chlorophyll, nitrogen, lignin-cellulose content). Some researchers have used narrowband (<10 nm resolution) spectrometers at phenology monitoring sites, yielding new insights into seasonal changes in leaf biochemistry. Such instruments integrate the spectral characteristics of the entire canopy, however, masking considerable variability between species and plant functional types. There is an opportunity, then, for exploring the potential of imaging spectrometers to investigate the coupling of canopy phenology and the leaf biochemistry of individual trees. During the growing season of April-October 2013 we deployed an imaging spectrometer with a spectral range of 371-1042 nm and resolution of ~5 nm (Surface Optics Corporation 710; San Diego, CA) on a 35 m tall tower at the Harvard Forest, Massachusetts. The image resolution was ~0.25 megapixels and the field of view encompassed approximately 20 individual tree crowns at a distance of 20-40 m. The instrument was focused on a mixed hardwood canopy composed of 4 deciduous tree species and one coniferous tree species. Scanning was performed daily with an acquisition frequency of 30 minutes during daylight hours. The derived imagery was used to calculate a suite of published spectral indices used to estimate foliar content of key pigments: chlorophyll, carotenoids and anthocyanins. Additionally, we calculated the photochemical reflectance index (PRI) as well as the position and slope of the red edge as indicators of mid- to late-summer plant stress. Changes in the spectral shape and indices throughout the growing season revealed coupling of leaf biochemistry and phenology, as visually observed in situ. Further, the spectrally rich imagery provided well-calibrated reflectance data to simulate vegetation index time series of common spaceborne remote sensing platforms such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat. Comparisons between the simulated time series and in situ phenology observations yielded an enhanced interpretation of vegetation indices for determining phenological transition dates. This study demonstrates an advance in our ability to relate canopy phenology to leaf-level dynamics and demonstrates the role that ground-based imaging spectrometry can play in advancing spaceborne remote sensing of vegetation phenology.
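
    Two of the quantities mentioned, PRI and the red-edge position, have compact standard definitions: PRI = (R531 − R570)/(R531 + R570), and the red-edge inflection can be estimated with the four-point linear interpolation of Guyot and Baret. The sketch below uses hypothetical canopy reflectances, not values from this deployment.

```python
def pri(r531, r570):
    """Photochemical Reflectance Index from reflectances at 531 and 570 nm."""
    return (r531 - r570) / (r531 + r570)

def red_edge_position(r670, r700, r740, r780):
    """Red-edge inflection position (nm) by the four-point linear
    interpolation method of Guyot and Baret."""
    r_re = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_re - r700) / (r740 - r700)

# hypothetical canopy reflectances
print(f"PRI = {pri(0.048, 0.052):.3f}")                           # -0.040
print(f"REP = {red_edge_position(0.04, 0.08, 0.32, 0.46):.1f} nm")  # ~728.3 nm
```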

  4. Effects of Stencil Width on Surface Ocean Geostrophic Velocity and Vorticity Estimation from Gridded Satellite Altimeter Data

    DTIC Science & Technology

    2012-03-17

    Texas at Austin, Austin, Texas, USA; Departement de Physique and LPO, Universite de Bretagne Occidentale, Brest ... grid points are used in the calculation, so that the grid spacing is 8 times larger than on the original grid. The 3-point stencil differences are ... the difference between narrow and wide stencil estimates increases over that found on the original higher-resolution grid. Interpolation of the
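
    The stencil-width effect at issue can be reproduced in a few lines: estimate the geostrophic velocity u = −(g/f)·∂η/∂y from a gridded sea-surface-height field with a 3-point centred difference and with a wide stencil whose effective spacing is 8 times larger. The SSH field below is an idealized Gaussian anomaly, not altimeter data.

```python
import numpy as np

g, f = 9.81, 1.0e-4          # gravity (m/s^2) and a Coriolis parameter (1/s)
dy = 25.0e3                  # grid spacing of the gridded product (m)
y = np.arange(0, 2.0e6, dy)
eta = 0.3 * np.exp(-((y - 1.0e6) / 1.5e5) ** 2)   # idealized SSH anomaly (m)

def geostrophic_u(eta, dy, half_width):
    """Zonal geostrophic velocity u = -(g/f) d(eta)/dy with a centred
    difference spanning 2*half_width grid points."""
    n = half_width
    u = np.full_like(eta, np.nan)
    u[n:-n] = -(g / f) * (eta[2 * n:] - eta[:-2 * n]) / (2 * n * dy)
    return u

u_narrow = geostrophic_u(eta, dy, 1)  # 3-point stencil
u_wide = geostrophic_u(eta, dy, 4)    # wide stencil: effective spacing 8x larger
print(f"max |u| 3-point: {np.nanmax(np.abs(u_narrow)):.3f} m/s, "
      f"wide: {np.nanmax(np.abs(u_wide)):.3f} m/s")  # wide stencil smooths peaks
```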

  5. Identification of overlapping communities and their hierarchy by locally calculating community-changing resolution levels

    NASA Astrophysics Data System (ADS)

    Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen

    2011-01-01

    We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
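
    A minimal sketch of greedy community expansion under a local fitness function, using an LFM-style fitness k_in/(k_in + k_out)^α on a toy graph (the paper's algorithm additionally derives the community-changing resolution levels analytically, which is not reproduced here):

```python
import itertools

# toy undirected graph as adjacency sets (hypothetical): two triangles
# {0,1,2} and {3,4,5} joined by the edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

def fitness(community, alpha=1.0):
    """LFM-style local fitness: k_in / (k_in + k_out)^alpha, where k_in counts
    edge endpoints inside the community and k_out edges leaving it."""
    k_in = sum(len(adj[v] & community) for v in community)
    k_out = sum(len(adj[v] - community) for v in community)
    return k_in / (k_in + k_out) ** alpha

def grow(seed, alpha=1.0):
    """Greedily expand a seed node while some neighbour raises the fitness."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        frontier = set(itertools.chain.from_iterable(
            adj[v] for v in community)) - community
        best = max(frontier, key=lambda v: fitness(community | {v}, alpha),
                   default=None)
        if best is not None and fitness(community | {best}, alpha) > fitness(community, alpha):
            community.add(best)
            improved = True
    return community

print(grow(0))  # expected: the triangle {0, 1, 2}
```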

  6. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.

  7. Computation of Surface Laplacian for tri-polar ring electrodes on high-density realistic geometry head model.

    PubMed

    Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding

    2017-07-01

    Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers to diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCREs based on a realistic geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts were made to match the small structural components of the TCRE. Other areas were left without dense meshing to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and evaluated possible numerical errors as compared with a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for TCREs for the first time in a realistic geometry head model and demonstrate that it has better spatial resolution than the SL computed from classic EEG recordings.

  8. Nonlinear ultrasonic imaging with X wave

    NASA Astrophysics Data System (ADS)

    Du, Hongwei; Lu, Wei; Feng, Huanqing

    2009-10-01

    X wave has a large depth of field and may have important application in ultrasonic imaging to provide high frame rate (HFR). However, the HFR system suffers from lower spatial resolution. In this paper, a study of nonlinear imaging with X wave is presented to improve the resolution. A theoretical description of realizable nonlinear X wave is reported. The nonlinear field is simulated by solving the KZK nonlinear wave equation with a time-domain difference method. The results show that the second harmonic field of X wave has narrower mainlobe and lower sidelobes than the fundamental field. In order to evaluate the imaging effect with X wave, an imaging model involving numerical calculation of the KZK equation, Rayleigh-Sommerfeld integral, band-pass filtering and envelope detection is constructed to obtain 2D fundamental and second harmonic images of scatters in tissue-like medium. The results indicate that if X wave is used, the harmonic image has higher spatial resolution throughout the entire imaging region than the fundamental image, but higher sidelobes occur as compared to conventional focus imaging. A HFR imaging method with higher spatial resolution is thus feasible provided an apodization method is used to suppress sidelobes.

  9. Design, Fabrication and Characterization of A Bi-Frequency Co-Linear Array

    PubMed Central

    Wang, Zhuochen; Li, Sibo; Czernuszewicz, Tomasz J; Gallippi, Caterina M.; Liu, Ruibin; Geng, Xuecang

    2016-01-01

    Ultrasound imaging with high resolution and large penetration depth has been increasingly adopted in medical diagnosis, surgery guidance, and treatment assessment. Conventional ultrasound works at a particular frequency, with a −6 dB fractional bandwidth of ~70%, limiting the imaging resolution or depth of field. In this paper, a bi-frequency co-linear array with resonant frequencies of 8 MHz and 20 MHz was investigated to meet the requirements of resolution and penetration depth for a broad range of ultrasound imaging applications. Specifically, a 32-element bi-frequency co-linear array was designed and fabricated, followed by element characterization and real-time sectorial scan (S-scan) phantom imaging using a Verasonics system. The bi-frequency co-linear array was tested in four different modes by switching between low and high frequencies on transmit and receive. The four modes included the following: (1) transmit low, receive low, (2) transmit low, receive high, (3) transmit high, receive low, (4) transmit high, receive high. After testing, the axial and lateral resolutions of all modes were calculated and compared. The results of this study suggest that bi-frequency co-linear arrays are potential aids for wideband fundamental imaging and harmonic/sub-harmonic imaging. PMID:26661069

  10. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    PubMed

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
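
    The gamma evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch of a global gamma index follows (a simplification of the 3D GPU version; the 3%/3 mm criteria are common defaults assumed here, not values from the paper):

      import numpy as np

      def gamma_1d(dose_eval, dose_ref, x, dd=0.03, dta=0.3):
          # Global gamma: for each reference point, search all evaluated
          # points for the minimum combined dose/distance discrepancy;
          # dose differences are normalized to the maximum reference dose.
          d_norm = dd * dose_ref.max()
          gamma = np.empty_like(dose_ref)
          for i, (xi, di) in enumerate(zip(x, dose_ref)):
              g = np.sqrt(((x - xi)/dta)**2 + ((dose_eval - di)/d_norm)**2)
              gamma[i] = g.min()
          return gamma   # pass criterion: gamma <= 1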

  11. Problems of sampling and radiation balances: Their problematics

    NASA Technical Reports Server (NTRS)

    Crommelynck, D.

    1980-01-01

    Problems associated with the measurement of the Earth radiation balances are addressed. It is demonstrated that knowledge of the different radiation budgets and their components depends largely on the space-time sampling of the radiation field of the Earth-atmosphere system. Whichever instrumental approach is adopted (wide-angle view or high resolution), it affects the space-time integration of the fluxes measured directly or calculated. In this case the necessary knowledge of the reflection pattern depends in addition on the angular sampling of the radiances. A series of questions is considered, the answers to which are a prerequisite to the organization of a global observation system.

  12. Swept optical SSB-SC modulation technique for high-resolution large-dynamic-range static strain measurement using FBG-FP sensors.

    PubMed

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2015-04-01

    This Letter presents a static strain demodulation technique for FBG-FP sensors using a suppressed-carrier LiNbO₃ (LN) optical single sideband (SSB-SC) modulator. A narrow-linewidth tunable laser source is generated by driving the modulator with a linear chirp signal. This tunable single-frequency laser is then used to interrogate the FBG-FP sensors with the Pound-Drever-Hall (PDH) technique, which helps eliminate the influence of the modulator's light intensity fluctuations at different tuning frequencies. The static strain is demodulated by calculating the wavelength difference of the PDH signals between the sensing FBG-FP sensor and the reference FBG-FP sensor. In experiments with the modulator, the linearity (R²) of the time-frequency response increases from 0.989 to 0.997, and the frequency-swept range (dynamic range) increases from hundreds of MHz to several GHz compared with commercial PZT-tunable lasers. The high-linearity time-wavelength relationship of the modulator is beneficial for improving the strain measurement resolution, as it effectively solves the problem of frequency-sweep nonlinearity. In the laboratory test, a 0.67 nanostrain static strain resolution, with a 6 GHz dynamic range, is demonstrated.
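
    To connect a measured wavelength difference to strain, the usual FBG relation Δλ/λ = (1 − p_e)·ε can be applied; the sketch below assumes an effective photo-elastic coefficient p_e ≈ 0.22 (typical for silica fibre, not a value quoted in the abstract):

      def strain_from_shift(delta_lambda, lam0, p_e=0.22):
          # FBG strain response: delta_lambda / lam0 = (1 - p_e) * strain.
          return delta_lambda / (lam0 * (1.0 - p_e))

      # At 1550 nm, the reported 0.67 nanostrain resolution corresponds
      # to a Bragg-wavelength shift of under a femtometre:
      print(strain_from_shift(0.8e-15, 1550e-9) * 1e9)  # ~0.66 nanostrain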

  13. Extended reactance domain algorithms for DoA estimation with an ESPAR antenna

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

    Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources with an electronically steerable parasitic array radiator (ESPAR) antenna. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation requiring only a few linear operations. The DoA estimation algorithms based on this new approach show good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be reached, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramér-Rao bound (CRB). The conducted simulations testify to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.

  14. Aerosol chemical composition in cloud events by high resolution time-of-flight aerosol mass spectrometry.

    PubMed

    Hao, Liqing; Romakkaniemi, Sami; Kortelainen, Aki; Jaatinen, Antti; Portin, Harri; Miettinen, Pasi; Komppula, Mika; Leskinen, Ari; Virtanen, Annele; Smith, James N; Sueper, Donna; Worsnop, Douglas R; Lehtinen, Kari E J; Laaksonen, Ari

    2013-03-19

    This study presents results of direct observations of aerosol chemical composition in clouds. A high-resolution time-of-flight aerosol mass spectrometer was used to measure cloud interstitial particles (INT) and mixed cloud interstitial and droplet residual particles (TOT); the difference between these two is the cloud droplet residuals (RES). Positive matrix factorization analysis of high-resolution mass spectral data sets and theoretical calculations were performed to yield distributions of chemical composition of the INT and RES particles. We observed that less oxidized hydrocarbon-like organic aerosols (HOA) were mainly distributed into the INT particles, whereas more oxidized low-volatility oxygenated OA (LV-OOA) went mainly into the RES particles. Nitrates existed as organic nitrate and in the chemical form of NH₄NO₃. Organic nitrates accounted for 45% of total nitrates in the INT particles, in clear contrast to 26% in the RES particles. Meanwhile, sulfates coexisted in the forms of acidic NH₄HSO₄ and neutralized (NH₄)₂SO₄. Acidic sulfate made up 64.8% of total sulfates in the INT particles, much higher than the 10.7% in the RES particles. The results indicate a possible joint effect of the activation ability of aerosol particles, cloud processing, and particle size on cloud formation.

  15. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that, while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousand atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well. The robustness of the algorithm with respect to the only free parameter it involves is ascertained. The method is then applied to perform tensile tests on gold nanopillars at strain rates as low as 100/s, bringing out the perils of high strain-rate molecular dynamics calculations. We also calculate the temperature and stress dependence of the activation free energy for surface nucleation of dislocations in pristine gold nanopillars under realistic loads. While maintaining fully atomistic resolution, we reach the fraction-of-a-second time scale regime. It is found that the activation free energy depends significantly and nonlinearly on the driving force (stress or strain) and temperature, leading to very high activation entropies for surface dislocation nucleation.

  16. TH-CD-201-09: High Spatial Resolution Absorbed Dose to Water Measurements Using Optical Calorimetry in Megavoltage External Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores-Martinez, E; DeWerd, L; Radtke, J

    2016-06-15

    Purpose: To develop and implement a high spatial resolution calorimeter methodology to measure absorbed dose to water (ADW) using phase shifts (PSs) of light passing through a water phantom, and to compare measurements with theoretical calculations. Methods: Radiation-induced temperature changes were measured using the PSs of a He-Ne laser beam passing through a (10×10×10) cm³ water phantom. PSs were measured using a Michelson interferometer and recording the time-dependent fringe patterns on a CCD camera. The phantom was positioned at the center of the radiation field. A Varian 21EX was used to deliver 500 MU from a 9 MeV beam using a (6×6) cm² cone. A 127 cm SSD was used, and the PSs were measured at depths ranging from 1.90 cm to 2.10 cm in steps of 0.05 cm by taking profiles at the corresponding rows across the image. PSs were computed by taking the difference between pre- and post-irradiation image frames and then measuring the amplitude of the resulting image profiles. An amplitude-to-PS calibration curve was generated using a piezoelectric transducer to mechanically induce PSs between 0.05 and 1.50 radians in steps of 0.05 radians. The temperature dependence of the refractive index of water at 632.8 nm was used to convert PSs to ADW. Measured results were compared with ADW values calculated using the linac output calibration and commissioning data. Results: Milli-radian resolution in PS measurement was achieved using the described methodology. Measured radiation-induced PSs ranged from 0.10 ± 0.01 to 0.12 ± 0.01 radians at the investigated depths. After converting PSs to ADW, measured and calculated ADW values agreed within the measurement uncertainty. Conclusion: This work shows that interferometer-based calorimetry measurements are capable of achieving sub-millimeter resolution when measuring 2D temperature/dose distributions, which is particularly useful for characterizing beams from modalities such as SRS, proton therapy, or microbeams.
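
    The conversion from phase shift to dose follows from Δφ = (2π/λ)·L·(dn/dT)·ΔT together with the specific heat of water. A rough sketch with approximate material constants (|dn/dT| ≈ 1e-4 K⁻¹ for water at 632.8 nm; the uniform-heating assumption is a simplification, not the paper's full analysis):

      import numpy as np

      WAVELENGTH = 632.8e-9   # m, He-Ne laser
      PATH = 0.10             # m, 10 cm water phantom
      DN_DT = 1.0e-4          # 1/K, |dn/dT| of water (approximate)
      C_WATER = 4186.0        # J/(kg K), specific heat of water

      def phase_to_dose(delta_phi):
          # delta_phi = (2 pi / lambda) * L * (dn/dT) * delta_T, and since
          # 1 Gy = 1 J/kg, dose = c_water * delta_T.
          delta_t = delta_phi * WAVELENGTH / (2.0*np.pi * PATH * DN_DT)
          return C_WATER * delta_t

      print(phase_to_dose(0.10))  # ~4 Gy for a 0.10 rad shift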

  17. A high-resolution atlas of composite Sloan Digital Sky Survey galaxy spectra

    NASA Astrophysics Data System (ADS)

    Dobos, László; Csabai, István; Yip, Ching-Wa; Budavári, Tamás; Wild, Vivienne; Szalay, Alexander S.

    2012-02-01

    In this work we present an atlas of composite spectra of galaxies based on the data of the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). Galaxies are classified by colour, nuclear activity and star formation activity to calculate average spectra of high signal-to-noise ratio (S/N) at Δλ = 1 Å resolution, using an algorithm that is robust against outliers. Besides composite spectra, we also compute the first five principal components of the distributions in each galaxy class to characterize the nature of variations of individual spectra around the averages. The continua of the composite spectra are fitted with BC03 stellar population synthesis models to extend the wavelength coverage beyond that of the SDSS spectrographs. Commonly used derived parameters of the composites are also calculated: integrated colours in the most popular filter systems, line-strength measurements and continuum absorption indices (including Lick indices). These derived parameters are compared with the distributions of parameters of individual galaxies, and it is shown through many examples that the composites of the atlas cover much of the parameter space spanned by SDSS galaxies. By co-adding thousands of spectra, a total integration time of several months can be reached, which results in extremely low-noise composites. The variations in redshift not only allow for extending the spectral coverage bluewards beyond the original wavelength limit of the SDSS spectrographs, but also make higher spectral resolution achievable. The composite spectrum atlas is available online.

  18. Pressure spectra from single-snapshot tomographic PIV

    NASA Astrophysics Data System (ADS)

    Schneiders, Jan F. G.; Avallone, Francesco; Pröbsting, Stefan; Ragni, Daniele; Scarano, Fulvio

    2018-03-01

    The power spectral density and coherence of temporal pressure fluctuations are obtained from low-repetition-rate tomographic PIV measurements. This is achieved by extending recent single-snapshot pressure evaluation techniques based upon Taylor's hypothesis (TH) of frozen turbulence and vortex-in-cell (VIC) simulation. Finite time marching of the measured instantaneous velocity fields is performed using TH and VIC, and pressure is calculated from the resulting velocity time series. Because of theoretical limitations, the finite time marching can be performed only until the measured flow structures are convected out of the measurement volume; this sets the lower limit of the resolvable frequency range, while the upper limit is set by the spatial resolution of the measurements. The finite time-marching approaches are applied to low-repetition-rate tomographic PIV data of the flow past a straight trailing edge at 10 m/s. Reference results for the power spectral density and coherence are obtained from surface pressure transducers. In addition, the results are compared to state-of-the-art experimental data obtained from time-resolved tomographic PIV performed at 10 kHz. The time-resolved approach suffers from low spatial resolution and a limited maximum acquisition frequency because of hardware limitations, and it depends strongly upon the time-kernel length chosen for pressure evaluation. The finite time-marching approaches, on the other hand, make use of low-repetition-rate tomographic PIV measurements that offer higher spatial resolution. Consequently, increased accuracy of the power spectral density and coherence of pressure fluctuations is obtained in the high-frequency range, in comparison to the time-resolved measurements. The approaches based on TH and VIC are found to perform similarly in the high-frequency range. At lower frequencies, TH underestimates the coherence and intensity of the pressure fluctuations in comparison to time-resolved PIV and the microphone reference data, whereas the VIC-based approach returns results on the order of the reference.

  19. Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale.

    PubMed

    Mauree, Dasaraden; Coccolo, Silvia; Kaempf, Jérôme; Scartezzini, Jean-Louis

    2017-01-01

    A new methodology is proposed to couple a meteorological model with a building energy use model. The aim of such a coupling is to improve the boundary conditions of both models with no significant increase in computational time. In the present case, the Canopy Interface Model (CIM) is coupled with CitySim. CitySim provides the geometrical characteristics to CIM, which then calculates a high-resolution profile of the meteorological variables. These are in turn used by CitySim to calculate the energy flows in an urban district. We have conducted a series of experiments on the EPFL campus in Lausanne, Switzerland, to show the effectiveness of the coupling strategy. First, measured data from the campus for the year 2015 are used to force CIM and to evaluate its aptitude to reproduce high-resolution vertical profiles. Second, we compare the use of local climatic data with data from a meteorological station located outside the urban area in an evaluation of energy use. In both experiments, we demonstrate the importance of using, in building energy software, meteorological variables that account for the urban microclimate. Furthermore, we also show that some building and urban forms are more sensitive to the local environment.

  20. Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale

    PubMed Central

    Coccolo, Silvia; Kaempf, Jérôme; Scartezzini, Jean-Louis

    2017-01-01

    A new methodology is proposed to couple a meteorological model with a building energy use model. The aim of such a coupling is to improve the boundary conditions of both models with no significant increase in computational time. In the present case, the Canopy Interface Model (CIM) is coupled with CitySim. CitySim provides the geometrical characteristics to CIM, which then calculates a high-resolution profile of the meteorological variables. These are in turn used by CitySim to calculate the energy flows in an urban district. We have conducted a series of experiments on the EPFL campus in Lausanne, Switzerland, to show the effectiveness of the coupling strategy. First, measured data from the campus for the year 2015 are used to force CIM and to evaluate its aptitude to reproduce high-resolution vertical profiles. Second, we compare the use of local climatic data with data from a meteorological station located outside the urban area in an evaluation of energy use. In both experiments, we demonstrate the importance of using, in building energy software, meteorological variables that account for the urban microclimate. Furthermore, we also show that some building and urban forms are more sensitive to the local environment. PMID:28880883

  1. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions: aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine whether features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing the spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these deficiencies by finding uniform, high-contrast edges in urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed in the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set using edges found within each image, in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data set, enabling the appropriate use of those images in a number of applications.
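
    The edge-based estimators the SRVT computes follow a standard chain: an edge spread function (ESF) sampled across a high-contrast edge is differentiated to a line spread function (LSF), whose normalized Fourier magnitude gives the MTF. A minimal sketch (assuming a pre-extracted, averaged ESF; this is not the SRVT's MATLAB code):

      import numpy as np

      def mtf_from_edge(esf, dx):
          # ESF -> LSF by differentiation; taper to limit noise leakage;
          # normalized Fourier magnitude is the MTF.
          lsf = np.gradient(esf, dx) * np.hanning(esf.size)
          mtf = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(esf.size, dx)   # cycles per unit length
          return freqs, mtf / mtf[0]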

  2. Development of digitally reconstructed radiography software at the new treatment facility for carbon-ion beam scanning of the National Institute of Radiological Sciences.

    PubMed

    Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji

    2012-06-01

    To increase the accuracy of carbon-ion beam scanning therapy, we have developed a graphical user interface-based digitally reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios at the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. The DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Because the computation is implemented on a graphics processing unit, the DRR images are calculated fast enough for the clinical practice requirements. Because high-spatial-resolution flat panel detector (FPD) images must be registered to the reference DRR images during patient setup in any scenario, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation on DRR spatial resolution imposed by the CT voxel size, we applied image processing to improve the calculated DRR spatial resolution. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
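
    The core DRR operation, summation of CT voxel values along each projection ray, can be sketched in a few lines (trilinear sampling along source-to-detector rays in voxel coordinates; names and geometry are illustrative, and the GPU implementation and resolution enhancement described above are not reproduced):

      import numpy as np
      from scipy.ndimage import map_coordinates

      def drr(ct, src, det_points, n_steps=256):
          # For each detector point, sample the volume at n_steps positions
          # along the source->detector ray and sum the interpolated values.
          t = np.linspace(0.0, 1.0, n_steps)
          out = np.empty(len(det_points))
          for i, det in enumerate(det_points):
              pts = src[None, :] + t[:, None] * (det - src)[None, :]
              out[i] = map_coordinates(ct, pts.T, order=1).sum()
          return out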

  3. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data extending previous work by the author on a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to support implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing detector resolution, and the applicability of the ASTM E 2737 resolution requirements to the model is discussed. The paper also describes a model for simulating detector resolution. A computer calculator application, discussed here, performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters, such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution, are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack, and show the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability. The method is applicable to film radiography, computed radiography, and digital radiography.

  4. [Uncertainty evaluation of the determination of toxic equivalent quantity of polychlorinated dibenzo-p-dioxins and dibenzofurans in soil by isotope dilution high resolution gas chromatography and high resolution mass spectrometry].

    PubMed

    Du, Bing; Liu, Aimin; Huang, Yeru

    2014-09-01

    Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in soil samples were analyzed by the isotope dilution method with high resolution gas chromatography and high resolution mass spectrometry (ID-HRGC/HRMS), and the toxic equivalent quantity (TEQ) was calculated. The major sources of measurement uncertainty are discussed, and the combined relative standard uncertainties were calculated for each 2,3,7,8-substituted congener. Furthermore, the concentration, combined uncertainty and expanded uncertainty for the TEQ of PCDD/Fs in a soil sample under the I-TEF, WHO-1998-TEF and WHO-2005-TEF schemes are provided as an example. I-TEF, WHO-1998-TEF and WHO-2005-TEF are toxic equivalency factor (TEF) evaluation schemes, all currently used to describe the relative potencies of the 2,3,7,8-substituted congeners.
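
    The TEQ itself is a TEF-weighted sum of congener concentrations, and under an independence assumption the combined relative standard uncertainty propagates in quadrature. A sketch with illustrative numbers (the congener values below are assumptions, not data from the paper):

      import numpy as np

      def teq_with_uncertainty(conc, tef, u_rel):
          # TEQ = sum_i conc_i * TEF_i; combined relative standard
          # uncertainty assuming independent congener results.
          contrib = conc * tef
          teq = contrib.sum()
          u_comb = np.sqrt(((contrib * u_rel)**2).sum()) / teq
          return teq, u_comb

      conc = np.array([1.2, 0.4])     # pg/g, two congeners (illustrative)
      tef = np.array([1.0, 0.1])      # WHO-2005 TEFs (TCDD, TCDF)
      u_rel = np.array([0.10, 0.15])  # relative standard uncertainties
      print(teq_with_uncertainty(conc, tef, u_rel))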

  5. Microsecond time-scale kinetics of transient biochemical reactions

    PubMed Central

    Mitić, Sandra; Strampraad, Marc J. F.; de Vries, Simon

    2017-01-01

    To afford mechanistic studies in enzyme kinetics and protein folding in the microsecond time domain we have developed a continuous-flow microsecond time-scale mixing instrument with an unprecedented dead-time of 3.8 ± 0.3 μs. The instrument employs a micro-mixer with a mixing time of 2.7 μs integrated with a 30 mm long flow-cell of 109 μm optical path length constructed from two parallel sheets of silver foil; it produces ultraviolet-visible spectra that are linear in absorbance up to 3.5 with a spectral resolution of 0.4 nm. Each spectrum corresponds to a different reaction time determined by the distance from the mixer outlet, and by the fluid flow rate. The reaction progress is monitored in steps of 0.35 μs for a total duration of ~600 μs. As a proof of principle the instrument was used to study spontaneous protein refolding of pH-denatured cytochrome c. Three folding intermediates were determined: after a novel, extremely rapid initial phase with τ = 4.7 μs, presumably reflecting histidine re-binding to the iron, refolding proceeds with time constants of 83 μs and 345 μs to a coordinatively saturated low-spin iron form in quasi steady state. The time-resolution specifications of our spectrometer for the first time open up the general possibility for comparison of real data and molecular dynamics calculations of biomacromolecules on overlapping time scales. PMID:28973014

  6. Relativistic algorithm for time transfer in Mars missions under IAU Resolutions: an analytic approach

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Yang; Xie, Yi

    2015-02-01

    With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, extending previous work by including the effects of the propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model is suited to an onboard computer, whose capability to perform calculations is limited. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the relativistic corrections in the model.

  7. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed for submission to the Journal of Geophysical Research. In preparing this manuscript, all data and analysis programs were updated to the highest temporal resolution possible, 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also been improved. In another significant development, a technique has been developed to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.

  8. Calculation of characteristics of compressed gaseous xenon gamma-ray detectors

    NASA Astrophysics Data System (ADS)

    Komarov, V. B.; Dmitrenko, V. V.; Ulin, S. E.; Uteshev, Z. M.

    1992-12-01

    Energy resolution and pulse distribution of a compressed gaseous xenon cylindrical detector were calculated. The analytical calculation took into account gamma-ray energy, fluctuations in the number of electron-ion pairs, electron distribution, recombination, and H excess. The calculation was performed for a xenon density less than 0.6 g/cm³ and an H excess less than 2 percent.

  9. Rupture models with dynamically determined breakdown displacement

    USGS Publications Warehouse

    Andrews, D.J.

    2004-01-01

    The critical breakdown displacement, Dc, in which friction drops to its sliding value, can be made dependent on event size by specifying friction to be a function of variables other than slip. Two such friction laws are examined here. The first is designed to achieve accuracy and smoothness in discrete numerical calculations. Consistent resolution throughout an evolving rupture is achieved by specifying friction as a function of elapsed time after peak stress is reached. Such a time-weakening model produces Dc and fracture energy proportional to the square root of distance rupture has propagated in the case of uniform stress drop. The second friction law is more physically motivated. Energy loss in a damage zone outside the slip zone has the effect of increasing Dc and limiting peak slip velocity (Andrews, 1976). This article demonstrates a converse effect, that artificially limiting slip velocity on a fault in an elastic medium has a toughening effect, increasing fracture energy and Dc proportionally to rupture propagation distance in the case of uniform stress drop. Both the time-weakening and the velocity-toughening models can be used in calculations with heterogeneous stress drop.

  10. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Li, Renfu

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements with a high-resolution digital camera involves processing large volumes of data and is often time-consuming. To speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique correlates projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to a factor of 28.8 compared to direct calculation, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
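
    One plausible reading of the projection idea is sketched below: replace the 2D window correlation with correlations of its row and column sums, reducing the per-shift work from O(N²) to O(N). The combination rule here is an assumption for illustration, not the paper's exact procedure:

      import numpy as np

      def zncc_1d(a, b):
          # Zero-normalized cross-correlation of two 1D signals.
          a = a - a.mean(); b = b - b.mean()
          return float(a @ b / (np.linalg.norm(a)*np.linalg.norm(b) + 1e-12))

      def projection_zncc(win_a, win_b):
          # Correlate horizontal and vertical projections (sums) of the
          # interrogation windows instead of their full 2D intensity fields.
          sx = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
          sy = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
          return 0.5*(sx + sy)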

  11. A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Cerjan, C. J.; Marinak, M. M.

    A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.

  12. High resolution in situ ultrasonic corrosion monitor

    DOEpatents

    Grossman, R.J.

    1984-01-10

    An ultrasonic corrosion monitor is provided which produces an in situ measurement of the amount of corrosion of a monitoring zone or zones of an elongate probe placed in the corrosive environment. A monitoring zone is preferably formed between the end of the probe and the junction of the zone with a lead-in portion of the probe. Ultrasonic pulses are applied to the probe and a determination made of the time interval between pulses reflected from the end of the probe and the junction referred to, both when the probe is uncorroded and while it is corroding. Corresponding electrical signals are produced and a value for the normalized transit time delay derived from these time interval measurements is used to calculate the amount of corrosion.
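
    The arithmetic behind the normalized transit time delay can be sketched as follows (an illustration of the principle, not the patent's circuitry; it assumes the sound speed in the probe is unchanged by the environment):

      def corrosion_loss(t_corroding, t_uncorroded, zone_length):
          # The round-trip transit time over the monitoring zone scales with
          # its remaining length (t = 2 L / v), so the normalized delay
          # 1 - t/t0 gives the fraction of the zone consumed by corrosion.
          return zone_length * (1.0 - t_corroding / t_uncorroded)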

  13. High resolution in situ ultrasonic corrosion monitor

    DOEpatents

    Grossman, Robert J.

    1985-01-01

    An ultrasonic corrosion monitor is provided which produces an in situ measurement of the amount of corrosion of a monitoring zone or zones of an elongate probe placed in the corrosive environment. A monitoring zone is preferably formed between the end of the probe and the junction of the zone with a lead-in portion of the probe. Ultrasonic pulses are applied to the probe and a determination made of the time interval between pulses reflected from the end of the probe and the junction referred to, both when the probe is uncorroded and while it is corroding. Corresponding electrical signals are produced and a value for the normalized transit time delay derived from these time interval measurements is used to calculate the amount of corrosion.

  14. A global poverty map derived from satellite data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elvidge, Christopher D.; Sutton, Paul S.; Ghosh, Tilottama

    A global poverty map has been produced at 30 arcsec resolution using a poverty index calculated by dividing population count (LandScan 2004) by the brightness of satellite-observed lighting (DMSP nighttime lights). Inputs to the LandScan product include satellite-derived land cover and topography, plus human settlement outlines derived from high-resolution imagery. The poverty estimates have been calibrated using national-level poverty data from the World Development Indicators (WDI) 2006 edition. The total estimate of the number of individuals living in poverty is 2.2 billion, slightly under the WDI estimate of 2.6 billion. We have demonstrated a new class of poverty map that should improve over time through the inclusion of new reference data for calibration of poverty estimates and as improvements are made in the satellite observation of human activities related to economic activity and technology access.
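
    The index itself is a simple per-cell ratio; a minimal sketch (with an assumed brightness floor to keep unlit cells finite, a detail not described in the abstract):

      import numpy as np

      def poverty_index(population, lights, light_floor=1.0):
          # Grid-cell poverty index: population count divided by nighttime
          # brightness; the floor avoids division by zero in unlit cells.
          return population / np.maximum(lights, light_floor)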

  15. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing the potential of wind energy sites, but the accuracy of these projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at the heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the model's performance in producing accurate wind forecasts for the industry. HRDL data were collected from January 23 to March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of wind field images and on time-series and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Time-height cross-sections of the HRDL and WRF-Chem wind speeds and directions were then plotted to select case studies, and cross-sections of the differences between the observed and forecast wind speed and direction were plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles to forecast winds at the heights of modern turbine rotors, so that the model can be improved for the industry.

  16. Delineation of Rupture Propagation of Large Earthquakes Using Source-Scanning Algorithm: A Control Study

    NASA Astrophysics Data System (ADS)

    Kao, H.; Shan, S.

    2004-12-01

    Determination of the rupture propagation of large earthquakes is important and of wide interest to the seismological research community. The conventional inversion method determines the distribution of slip on a grid of subfaults whose orientations are predefined; as a result, different choices of fault geometry and dimensions often result in different solutions. In this study, we try to reconstruct the rupture history of an earthquake using the newly developed Source-Scanning Algorithm (SSA) without imposing any a priori constraints on the fault's orientation and dimension. The SSA identifies the distribution of seismic sources in two steps. First, it calculates the theoretical arrival times from all grid points inside the model space to all seismic stations by assuming an origin time. Then, the absolute amplitudes of the observed waveforms at the predicted arrival times are added to give the "brightness" of each time-space pair, and the brightest spots mark the locations of sources. The propagation of the rupture is depicted by the migration of the brightest spots throughout a prescribed time window. A series of experiments was conducted to test the resolution of the SSA inversion. Contrary to the conventional wisdom that seismometers should be placed as close as possible to the fault trace to give the best resolution in delineating rupture details, we found that the best results are obtained if the seismograms are recorded at a distance of about half the total rupture length away from the fault trace. This is especially true when the rupture duration is longer than ~10 s. A possible explanation is that the geometric spreading effects for waveforms from different segments of the rupture are about the same if the stations are sufficiently far from the fault trace, giving a uniform resolution over the entire rupture history.
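
    The brightness function at the heart of the SSA is easy to state in code: for a trial origin time and grid point, stack the absolute waveform amplitudes at the predicted arrival times (a bare-bones sketch; the array layout and names are assumptions):

      import numpy as np

      def brightness(waveforms, dt, travel_times, origin_time):
          # waveforms:    (n_sta, n_samples) array, traces starting at t = 0
          # travel_times: (n_sta,) predicted travel times to this grid point
          idx = np.round((origin_time + travel_times) / dt).astype(int)
          idx = np.clip(idx, 0, waveforms.shape[1] - 1)
          return np.abs(waveforms[np.arange(len(idx)), idx]).sum()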

  17. Evaluation of the Actuator Line Model with coarse resolutions

    NASA Astrophysics Data System (ADS)

    Draper, M.; Usera, G.

    2015-06-01

    The aim of the present paper is to evaluate the Actuator Line Model (ALM) at spatial resolutions coarser than generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open source code caffa3d.MBRi and validated against experimental measurements from two wind tunnel campaigns (a stand-alone wind turbine and two wind turbines in line, cases A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed to gain insight into the influence of the smearing factor (of the 3D Gaussian force distribution) and the time step size on power and thrust, as well as on the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that larger smearing factors or smaller time steps increase the computed power, while the velocity deficit is much less affected. From this analysis, a smearing factor was obtained that precisely reproduces the power coefficient for that TSR without applying a TLCF. Results with this approach were compared with another simulation using a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these two alternatives were tested in case B, confirming that conclusion.

  18. ERP-Variations on Time Scales Between Hours and Months Derived From GNSS Observations

    NASA Astrophysics Data System (ADS)

    Weber, R.; Englich, S.; Mendes Cerveira, P.

    2007-05-01

    Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Rotation Parameters (ERPs - polar motion, UT1/LOD) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to ERP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of less than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established an hourly-resolution ERP time series from GPS observation data of the IGS network covering the year 2005. The calculations were carried out with the Bernese GPS Software V5.0, considering observations from a subset of 113 fairly stable stations out of the IGS05 reference frame sites. From these ERP time series the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations in atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function, we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed.

  19. Horizontal Residual Mean Circulation: Evaluation of Spatial Correlations in Coarse Resolution Ocean Models

    NASA Astrophysics Data System (ADS)

    Li, Y.; McDougall, T. J.

    2016-02-01

    Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale, and it has been shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated directly from model data. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this HRM approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid accumulated over the corresponding number of grid boxes. The preliminary results show that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.

  20. Two-dimensional directional synthetic aperture focusing technique using acoustic-resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Jeon, Seungwan; Park, Jihoon; Kim, Chulhong

    2018-02-01

    Photoacoustic microscopy (PAM) is a hybrid imaging technology using optical illumination and acoustic detection. PAM is divided into two types: optical-resolution PAM (OR-PAM) and acoustic-resolution PAM (AR-PAM). AR-PAM has a great advantage in penetration depth compared to OR-PAM because AR-PAM relies on the acoustic focus, which is much less scattered in biological tissue than the optical focus. However, because an acoustic focus is not as tight as an optical focus of the same numerical aperture (NA), AR-PAM requires an acoustic NA higher than the optical NA. The high NA of the acoustic focus produces good image quality in the focal zone, but significantly degrades spatial resolution and signal-to-noise ratio (SNR) in the out-of-focus zone. To overcome this problem, the synthetic aperture focusing technique (SAFT) has been introduced. SAFT improves the degraded image quality, in terms of both SNR and spatial resolution, in the out-of-focus zone by calculating the time delays of the corresponding signals and combining them. To extend the dimension of the correction effect, several 2D SAFTs have been introduced, but conventional 2D SAFTs cannot improve the degraded SNR and resolution as well as 1D SAFT can. In this study, we propose a new 2D SAFT that can compensate for the distorted signals in the x and y directions while maintaining the correction performance of the 1D SAFT.
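
    SAFT corrections of this kind rest on a virtual-source delay: treating the acoustic focus as a virtual point source, the extra round-trip path from a neighbouring scan line to a pixel fixes the delay used when summing A-lines. A sketch of that geometry (an illustrative delay law, not the proposed 2D directional weighting):

      import numpy as np

      def saft_delay(dx, dy, z, z_focus, c):
          # Focus-to-pixel distance for a pixel at lateral offset (dx, dy)
          # and depth z; the round-trip delay relative to the on-axis path
          # is 2*(r - |z - z_focus|)/c.
          r = np.sqrt(dx**2 + dy**2 + (z - z_focus)**2)
          return 2.0 * (r - abs(z - z_focus)) / c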

  1. Dictionary learning based noisy image super-resolution via distance penalty weight model

    PubMed Central

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. In applications, the low resolution (LR) image obtained is often noisy, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution that achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and for different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed as a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which simultaneously performs a second selection among similar atoms. Moreover, mean-removed LR example patches, rather than just their gradient features, are also used to learn the dictionary. Based on this, we reconstruct an initial estimated HR image and a denoised LR image; combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method has better noise robustness. PMID:28759633

  2. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  3. Continued Development of a Global Heat Transfer Measurement System at AEDC Hypervelocity Wind Tunnel 9

    NASA Technical Reports Server (NTRS)

    Kurits, Inna; Lewis, M. J.; Hamner, M. P.; Norris, Joseph D.

    2007-01-01

    Heat transfer rates are an extremely important consideration in the design of hypersonic vehicles such as atmospheric reentry vehicles. This paper describes the development of a data reduction methodology to evaluate global heat transfer rates using surface temperature-time histories measured with the temperature sensitive paint (TSP) system at AEDC Hypervelocity Wind Tunnel 9. As part of this development effort, a scale model of the NASA Crew Exploration Vehicle (CEV) was painted with TSP and multiple sequences of high resolution images were acquired during a five-run test program. Heat transfer calculation from TSP data in Tunnel 9 is challenging due to the relatively long run times, the high-Reynolds-number environment, and the desire to use the typical stainless steel wind tunnel models employed for force and moment testing. An approach to reduce TSP data into convective heat flux was developed taking these conditions into consideration. Surface temperatures from high quality quantitative global temperature maps acquired with the TSP system were then used as input to the algorithm. A preliminary comparison of the heat flux calculated using the TSP surface temperature data with the value calculated using the standard thermocouple data is reported.

  4. Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector

    NASA Astrophysics Data System (ADS)

    Eleon, C.; Perot, B.; Carasco, C.

    2010-07-01

    The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a non-destructive explosive detection system based on the associated particle technique, with a view to improving the security of coastal areas and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify the elements constituting common military explosives, such as C, N and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies of the UNCOSS collaboration. Detection efficiencies and the time and energy resolutions of the candidate gamma-ray detectors are compared, showing that NaI(Tl) or LaBr₃(Ce) scintillators will be suitable for this application. The effects of neutron attenuation and scattering in the seawater, which influence the counting statistics and signal-to-noise ratio, are also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.

  5. An investigation of the impact of variations of DVH calculation algorithms on DVH dependent radiation therapy plan evaluation metrics

    NASA Astrophysics Data System (ADS)

    Kennedy, A. M.; Lane, J.; Ebert, M. A.

    2014-03-01

    Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar; one notable point of variation between implementations is the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP) and min, mean and max dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution, despite the extreme conditions. Differences became noticeable, however, when the resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared to dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting the sampling resolution, it may be important to confirm that a similar resolution was used during calculation.

  6. Chemical Shifts of the Carbohydrate Binding Domain of Galectin-3 from Magic Angle Spinning NMR and Hybrid Quantum Mechanics/Molecular Mechanics Calculations.

    PubMed

    Kraus, Jodi; Gupta, Rupal; Yehl, Jenna; Lu, Manman; Case, David A; Gronenborn, Angela M; Akke, Mikael; Polenova, Tatyana

    2018-03-22

    Magic angle spinning NMR spectroscopy is uniquely suited to probe the structure and dynamics of insoluble proteins and protein assemblies at atomic resolution, with NMR chemical shifts containing rich information about biomolecular structure. Access to this information, however, is problematic, since accurate quantum mechanical calculation of chemical shifts in proteins remains challenging, particularly for amide ¹⁵N. Here we report on isotropic chemical shift predictions for the carbohydrate recognition domain of microcrystalline galectin-3, obtained using hybrid quantum mechanics/molecular mechanics (QM/MM) calculations, implemented with an automated fragmentation approach, and using very high resolution (0.86 Å lactose-bound and 1.25 Å apo form) X-ray crystal structures. The resolution of the X-ray crystal structure used as input to the AF-NMR program did not affect the accuracy of the chemical shift calculations to any significant extent. Excellent agreement between experimental and computed shifts is obtained for ¹³Cα, while larger scatter is observed for amide ¹⁵N chemical shifts, which are influenced to a greater extent by electrostatic interactions, hydrogen bonding, and solvation.

  7. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands

    USGS Publications Warehouse

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela L.; Jarchow, Christopher J.; Roberts, Dar A.

    2017-01-01

    This research addresses the question of whether the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculating a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure.
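
    The two averaging orders compared in the study reduce to a few lines of numpy. This sketch assumes co-registered red and near-infrared reflectance arrays and a boolean feature mask; the function name is illustrative:

```python
import numpy as np

def mean_ndvi_two_ways(red, nir, mask):
    """Compare method 1 (mean of per-pixel NDVI) with method 2
    (NDVI of the band means) over a feature defined by `mask`."""
    r = red[mask].astype(float)
    n = nir[mask].astype(float)
    ndvi_pixels = (n - r) / (n + r)
    method1 = ndvi_pixels.mean()                              # mean of NDVIs
    method2 = (n.mean() - r.mean()) / (n.mean() + r.mean())   # NDVI of means
    return method1, method2
```

    Scale invariance in the study's sense corresponds to the two returned values agreeing for pure vegetation pixels, regardless of how the pixels are aggregated.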

  8. Achieving atomic resolution magnetic dichroism by controlling the phase symmetry of an electron probe

    DOE PAGES

    Rusz, Jan; Idrobo, Juan -Carlos; Bhowmick, Somnath

    2014-09-30

    The calculations presented here reveal that an electron probe carrying orbital angular momentum is just a particular case of a wider class of electron beams that can be used to measure electron magnetic circular dichroism (EMCD) with atomic resolution. It is possible to obtain an EMCD signal with atomic resolution by simply breaking the symmetry of the electron probe phase front using the aberration-corrected optics of a scanning transmission electron microscope. The probe’s required phase distribution depends on the sample’s magnetic symmetry and crystal structure. The calculations indicate that EMCD signals that use the electron probe’s phase are as strong as those obtained by nanodiffraction methods.

  9. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high-consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology based on measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
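
    As an illustration of the standards-based alternative the document advocates, the following Python sketch estimates an MTF from an image of a vertical edge target. It is a simplified, non-slanted variant of the usual edge-based procedure, not the document's own implementation:

```python
import numpy as np

def mtf_from_edge(edge_image, pixel_pitch_mm):
    """Estimate the MTF from an image of a vertical edge.
    edge_image: 2-D array with the edge running top-to-bottom."""
    # Edge spread function: average rows to suppress noise.
    esf = edge_image.mean(axis=0)
    # Line spread function: derivative of the ESF.
    lsf = np.gradient(esf)
    lsf /= lsf.sum()  # normalise so that MTF(0) = 1
    # MTF: magnitude of the Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf
```

    Unlike an HTVL figure, the resulting curve reports contrast transfer at every spatial frequency, which is what makes it comparable across camera systems.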

  10. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non-grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method are assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  11. Excitonic Energy Landscape of the Y16F Mutant of the Chlorobium tepidum Fenna-Matthews-Olson (FMO) Complex: High Resolution Spectroscopic and Modeling Studies.

    PubMed

    Khmelnitskiy, Anton; Saer, Rafael G; Blankenship, Robert E; Jankowiak, Ryszard

    2018-04-12

    We report high-resolution (low-temperature) absorption, emission, and nonresonant/resonant hole-burned (HB) spectra and results of excitonic calculations using a non-Markovian reduced density matrix theory (with an improved algorithm for parameter optimization in heterogeneous samples) obtained for the Y16F mutant of the Fenna-Matthews-Olson (FMO) trimer from the green sulfur bacterium Chlorobium tepidum. We show that the Y16F mutant is a mixture of FMO complexes with three independent low-energy traps (located near 817, 821, and 826 nm), in agreement with measured composite emission and HB spectra. Two of these traps belong to mutated FMO subpopulations characterized by significantly modified low-energy excitonic states. Hamiltonians for the two major subpopulations (Sub821 and Sub817) provide new insight into extensive changes induced by the single-point mutation in the vicinity of BChl 3 (where tyrosine Y16 was replaced with phenylalanine F16). The average decay time(s) from the higher exciton state(s) in the Y16F mutant depends on frequency and occurs on a picosecond time scale.

  12. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies.

    PubMed

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S; Moses, William W; Qi, Jinyi

    2018-03-16

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of the bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1-1.3 relative to the 500 ps TOF mode and 1.5-1.9 relative to the non-TOF mode, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.
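
    The CHO figure of merit used here has a compact closed form. The sketch below computes it from bootstrap image replicates; the channel templates (e.g. difference-of-Gaussian channels) are an assumed input, and this is a generic CHO implementation rather than the authors' code:

```python
import numpy as np

def cho_snr(imgs_lesion, imgs_background, channels):
    """Channelized Hotelling observer SNR.
    imgs_*: (n_replicates, n_pixels) bootstrap ROI images with and
    without the lesion; channels: (n_pixels, n_channels) templates."""
    v1 = imgs_lesion @ channels      # channel outputs, lesion present
    v0 = imgs_background @ channels  # channel outputs, lesion absent
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    # Pooled intra-class covariance of the channel outputs.
    K = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(K, dv)       # Hotelling template in channel space
    return float(np.sqrt(dv @ w))    # SNR = sqrt(dv' K^-1 dv)
```

    Repeating this per lesion size and location, for TOF and non-TOF reconstructions of the same bootstrap data, yields the kind of SNR comparison reported above.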

  13. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S.; Moses, William W.; Qi, Jinyi

    2018-03-01

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of the bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1–1.3 relative to the 500 ps TOF mode and 1.5–1.9 relative to the non-TOF mode, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.

  14. Accurate heterogeneous dose calculation for lung cancer patients without high‐resolution CT densities

    PubMed Central

    Li, Jonathan G.; Liu, Chihray; Olivier, Kenneth R.; Dempsey, James F.

    2009-01-01

    The aim of this study was to investigate the relative accuracy of megavoltage photon-beam dose calculations employing either five bulk densities or independent voxel densities determined by calibration of the CT Hounsfield number. Full-resolution CT and bulk density treatment plans were generated for 70 lung or esophageal cancer tumors (66 cases) using a commercial treatment planning system with an adaptive convolution dose calculation algorithm (Pinnacle3, Philips Medical Systems). Bulk densities were applied to segmented regions. Individual and population average densities were compared to the full-resolution plan for each case. Monitor units were kept constant and no normalizations were employed. Dose volume histograms (DVH) and dose difference distributions were examined for all cases. The average densities of the segmented air, lung, fat, soft tissue, and bone for the entire set were found to be 0.14, 0.26, 0.89, 1.02, and 1.12 g/cm3, respectively. In all cases, the normal tissue DVH agreed to better than 2% in dose. In 62 of 70 DVHs of the planning target volume (PTV), agreement to better than 3% in dose was observed. Six cases demonstrated emphysema, one with bullous formations and one with a hiatus hernia having a large volume of gas. These required the additional assignment of densities to the emphysematous and inflamed lung, the regions of collapsed lung, the bullous formations, and the hernia gas. Bulk tissue density dose calculation provides an accurate method of heterogeneous dose calculation. However, patients with advanced emphysema may require high-resolution CT studies for accurate treatment planning. PACS number: 87.53.Tf
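
    A bulk-density plan of this kind amounts to a label-to-density lookup. The sketch below uses the population-average densities reported in the abstract; the label names and mapping are illustrative assumptions:

```python
import numpy as np

# Population-average bulk densities reported in the study (g/cm^3).
BULK_DENSITY = {"air": 0.14, "lung": 0.26, "fat": 0.89,
                "soft_tissue": 1.02, "bone": 1.12}

def assign_bulk_densities(segmentation, label_map):
    """Replace a labelled segmentation with a bulk-density volume.
    `segmentation` holds integer labels; `label_map` maps each label
    to one of the five tissue classes (hypothetical mapping)."""
    density = np.zeros(segmentation.shape, dtype=float)
    for label, tissue in label_map.items():
        density[segmentation == label] = BULK_DENSITY[tissue]
    return density
```

    The special cases in the study (emphysematous lung, bullae, hernia gas) correspond to adding extra entries to the mapping rather than changing the method.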

  15. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The idea behind super-resolution image reconstruction is to recover a highly resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. The algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. The median filter is well known to be robust to outliers; by calculating pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on computationally expensive iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm offers both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
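
    The core of the described noniterative scheme fits in a few lines. This sketch assumes the subpixel shifts have already been estimated by registration, and uses scipy's spline interpolation for the bicubic steps; it is a minimal reading of the abstract, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import shift, zoom

def median_super_resolution(frames, shifts, factor=2):
    """Coarse-to-fine super-resolution: register each low-resolution
    frame to the reference, upsample by bicubic interpolation, then
    take the pixel-wise median across the upsampled stack.
    `shifts[i]` is the (dy, dx) subpixel offset of frame i relative to
    the reference, assumed known from a prior registration step."""
    upsampled = []
    for frame, (dy, dx) in zip(frames, shifts):
        registered = shift(frame, (-dy, -dx), order=3)       # align
        upsampled.append(zoom(registered, factor, order=3))  # bicubic
    # The median across frames is robust to registration outliers.
    return np.median(np.stack(upsampled), axis=0)
```

    The absence of any iteration loop is exactly what gives the method its claimed efficiency advantage over POCS-style approaches.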

  16. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably relative to other contemporary approaches. This paper describes several real-time contrast enhancing systems developed at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to quantify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
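
    The enhancer's strategy, measuring luminance statistics on a subsampled grid and streaming pixels through a precomputed LUT, can be sketched as follows for 8-bit frames. Histogram equalization is assumed as the enhancement rule here; the paper's exact algorithm is not specified in the abstract:

```python
import numpy as np

def build_equalization_lut(frame, subsample=4, levels=256):
    """Build a histogram-equalization look-up table from a spatially
    subsampled 8-bit frame, mimicking the enhancer's trick of cutting
    arithmetic by measuring luminance on a sparse grid."""
    sub = frame[::subsample, ::subsample]        # sparse luminance sample
    hist = np.bincount(sub.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    # In the hardware this table is written during blanking.
    return (cdf * (levels - 1)).astype(np.uint8)

def enhance(frame, lut):
    """Stream the video frame through the LUT."""
    return lut[frame]
```

    Because the LUT is updated between frames, the per-pixel work at video rate is a single table lookup, which is what makes a fixed-point DSP sufficient.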

  17. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. A bottleneck of this approach, however, is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers can enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so it is expected that very fast parallel computers will become more and more prevalent in the near future. It is therefore important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we target very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computational load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication with adjacent subdomains for the finite difference calculation, (2) communication between adjacent layers for the calculations that connect the layers, and (3) global communication to obtain the time step which satisfies the CFL condition over the whole domain. A preliminary test on the K computer showed that the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with a resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation; the same simulation on 1024 cores of the K computer took 45 minutes, more than twice as fast as real time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
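
    Of the three communication types listed, the global time-step reduction is the simplest to illustrate. Below is a minimal mpi4py sketch, assuming a shallow-water phase-speed limit and a per-rank water-depth array; the CFL safety factor is an illustrative value, not taken from the paper:

```python
import numpy as np
from mpi4py import MPI  # assumes an MPI environment

def global_time_step(h_local, dx, cfl=0.8, g=9.81):
    """Type-(3) communication from the abstract: each rank computes the
    CFL-limited time step for its own subdomain, and a global
    min-reduction yields the step used by the whole domain."""
    c = np.sqrt(g * np.maximum(h_local, 0.0)).max()  # fastest wave speed
    dt_local = cfl * dx / c
    return MPI.COMM_WORLD.allreduce(dt_local, op=MPI.MIN)
```

    A single allreduce per step is cheap compared with the halo exchanges of type (1), which is why the 1-D versus 2-D decomposition dominates the scaling discussion above.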

  18. Valence and lowest Rydberg electronic states of phenol investigated by synchrotron radiation and theoretical methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Limão-Vieira, P., E-mail: plimaovieira@fct.unl.pt; Ferreira da Silva, F.; Lange, E.

    2016-07-21

    We present the experimental high-resolution vacuum ultraviolet (VUV) photoabsorption spectra of phenol covering for the first time the full 4.3–10.8 eV energy range, with absolute cross sections determined. Theoretical calculations of the vertical excitation energies and oscillator strengths were performed using time-dependent density functional theory and the equation-of-motion coupled cluster method restricted to the single and double excitations level. These have been used in the assignment of valence and Rydberg transitions of the phenol molecule. The VUV spectrum reveals several new features not previously reported in the literature, with particular reference to the 6.401 eV transition, which is here assigned to the 3sσ/σ*(OH) ← 3π(3a″) transition. The measured absolute photoabsorption cross sections have been used to calculate the photolysis lifetime of phenol in the Earth's atmosphere (0–50 km).
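
    The photolysis-lifetime calculation mentioned at the end follows the standard J-value integral. A minimal sketch, assuming a unit quantum yield and an actinic-flux spectrum supplied on the same wavelength grid as the measured cross sections:

```python
import numpy as np

def photolysis_lifetime(wavelength_nm, cross_section_cm2, actinic_flux,
                        quantum_yield=1.0):
    """Photolysis rate J = integral of sigma(lambda) * phi(lambda) *
    F(lambda) d(lambda), and lifetime tau = 1/J.
    actinic_flux: photons cm^-2 s^-1 nm^-1 on the same wavelength grid;
    a unit quantum yield is an assumption, not a value from the paper."""
    integrand = cross_section_cm2 * quantum_yield * actinic_flux
    J = np.trapz(integrand, wavelength_nm)  # s^-1
    return 1.0 / J
```

    Evaluating this at altitudes from 0 to 50 km simply means swapping in the altitude-dependent actinic flux.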

  19. Mass spectrometer measurements of test gas composition in a shock tunnel

    NASA Technical Reports Server (NTRS)

    Skinner, K. A.; Stalker, R. J.

    1995-01-01

    Shock tunnels afford a means of generating hypersonic flow at high stagnation enthalpies, but they have the disadvantage that thermochemical effects make the composition of the test flow different from that of ambient air. The composition can be predicted by numerical calculations of the nozzle flow expansion, using simplified thermochemical models and, in the absence of experimental measurements, it has been necessary to accept the results given by these calculations. This note reports measurements of test gas composition, at stagnation enthalpies up to 12.5 MJ/kg, taken with a time-of-flight mass spectrometer. Limited results have been obtained in previous measurements; these were taken at higher stagnation enthalpies, and used a quadrupole mass spectrometer. The time-of-flight method was preferred here because it enabled a number of complete mass spectra to be obtained in each test, and because it gives good mass resolution over the range of interest with air (up to 50 a.m.u.).

  20. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is computationally expensive, especially for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, and the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ~600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ~0.25 s per excitation source.

  1. Sediment Flux from Source to Sink in the Brazos-Trinity Depositional System

    NASA Astrophysics Data System (ADS)

    Pirmez, C.; Prather, B. E.; Droxler, A.; Ohayer, W.

    2007-12-01

    During the Late Pleistocene, a series of intra-slope basins offshore Texas in the western Gulf of Mexico received a high influx of clastic sediments derived primarily from the Brazos, Trinity, and Mississippi rivers. Sediment failures initiated at shelf-edge deltas resulted in mass flows that negotiated a complex slope and basin topography caused by salt tectonics. Sediment locally filled ponded basins, eventually spilling into subsequent basins downstream. Interaction between these flows and slope topography leads to a complex partitioning of sediment over time and space that can only be unraveled with high-resolution data. The availability of system-wide coverage with conventional 3D seismic surveys, a dense grid of high-resolution 2D seismic lines, and cored wells from two of the four linked intraslope basins makes this locale an ideal area to investigate the transfer of sediment across the continental margin, from river sources to the ultimate sink within an enclosed intraslope basin. Data from IODP Expedition 308 and industry wells, combined with data from previous studies on the shelf, constrain an integrated seismic stratigraphic framework for the depositional system. Numerous radiocarbon age dates coupled with multiple stratigraphic tools (seismic, bio-, and tephra correlations and oxygen isotope measurements) provide an unprecedented high-resolution chronology that allows for detailed estimation of sedimentation rates in this turbidite system and calculation of sediment volumes in each of the basins over time intervals of a few millennia during the late Pleistocene. We find that rates of sedimentation exceeded 10 m/kyr during some periods of ultra-fast turbidite accumulation. Rates of channel incision and tectonic subsidence can also be calculated and are comparable to the rapid accumulation rates measured in the basin fill. Our observations indicate that while sea-level changes exert a first-order control on delivery of sediment to the basins, the sedimentary record suggests that delta dynamics, basin tectonics and the interaction between gravity flows and basin topography are equally important in determining the distribution of sediments in time and space along this depositional system.

  2. Performance evaluation and optimization of the MiniPET-II scanner

    NASA Astrophysics Data System (ADS)

    Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo

    2013-04-01

    This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full-ring camera that includes 12 detector modules in a single ring composed of 1.27×1.27×12 mm3 LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold setting of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low energy threshold are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV, with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering τ and Elt resulted in substantial changes in the NEC peak and the sensitivity, while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm. Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.

  3. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, the extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, the geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and, for comparison, the commonly used LIB-SVM library of support vector machines. WorldView-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs and principal component analysis (PCA) method, and 6% higher than APs applied to the original high-resolution multispectral data. Moreover, it is also suggested that both the GURLS and LIB-SVM libraries are well suited to multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study would be helpful for the classification of high-resolution multispectral satellite remote sensing images.
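
    Regularized least squares with an RBF kernel is equivalent to kernel ridge regression on class-indicator targets, which is presumably how the RLS classifier here operates. A scikit-learn sketch with placeholder hyperparameters (the study's automatic parameter selection is not reproduced):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def rls_rbf_classifier(X_train, y_train, X_test, gamma=0.5, lam=1e-3):
    """RLS classification: kernel ridge regression on one-hot class
    indicators, predicting the class with the highest score.
    gamma and lam are illustrative values to be tuned by validation."""
    classes = np.unique(y_train)
    Y = (y_train[:, None] == classes[None, :]).astype(float)  # one-hot
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=lam)
    model.fit(X_train, Y)
    scores = model.predict(X_test)
    return classes[np.argmax(scores, axis=1)]
```

    The feature vectors X would here be the EAP attributes (area, standard deviation, moment of inertia) stacked over the two leading independent components.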

  4. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  5. The Application of MRI for Depiction of Subtle Blood Brain Barrier Disruption in Stroke

    PubMed Central

    Israeli, David; Tanne, David; Daniels, Dianne; Last, David; Shneor, Ran; Guez, David; Landau, Efrat; Roth, Yiftach; Ocherashvilli, Aharon; Bakon, Mati; Hoffman, Chen; Weinberg, Amit; Volk, Talila; Mardor, Yael

    2011-01-01

    The development of imaging methodologies for detecting blood-brain-barrier (BBB) disruption may help predict stroke patients' propensity to develop hemorrhagic complications following reperfusion. We have developed a delayed contrast extravasation MRI-based methodology enabling real-time depiction of subtle BBB abnormalities in humans with high sensitivity to BBB disruption and high spatial resolution. The increased sensitivity to subtle BBB disruption is obtained by acquiring T1-weighted MRI at relatively long delays (~15 minutes) after contrast injection and subtracting from them images acquired immediately after contrast administration. In addition, the relatively long delays allow for acquisition of high resolution images resulting in high resolution BBB disruption maps. The sensitivity is further increased by image preprocessing with corrections for intensity variations and with whole body (rigid+elastic) registration. Since only two separate time points are required, the time between the two acquisitions can be used for acquiring routine clinical data, keeping the total imaging time to a minimum. A proof of concept study was performed in 34 patients with ischemic stroke and 2 patients with brain metastases undergoing high resolution T1-weighted MRI acquired at 3 time points after contrast injection. The MR images were pre-processed and subtracted to produce BBB disruption maps. BBB maps of patients with brain metastases and ischemic stroke presented different patterns of BBB opening. The significant advantage of the long extravasation time was demonstrated by a dynamic-contrast-enhancement study performed continuously for 18 min. The high sensitivity of our methodology enabled depiction of clear BBB disruption in 27% of the stroke patients who did not have abnormalities on conventional contrast-enhanced MRI. In 36% of the patients, who had abnormalities detectable by conventional MRI, the BBB disruption volumes were significantly larger in the maps than in conventional MRI. These results demonstrate the advantages of delayed contrast extravasation in increasing the sensitivity to subtle BBB disruption in ischemic stroke patients. The calculated disruption maps provide clear depiction of significant volumes of BBB disruption unattainable by conventional contrast-enhanced MRI. PMID:21209786
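
    The subtraction step at the heart of the method is simple once preprocessing is done. A minimal sketch, assuming the intensity-variation correction and rigid+elastic registration described above have already been applied; the median normalisation here is an illustrative stand-in for the paper's full preprocessing:

```python
import numpy as np

def bbb_disruption_map(early_img, delayed_img, brain_mask):
    """Delayed-extravasation map: subtract the image acquired
    immediately after contrast from the one acquired ~15 min later.
    Inputs are assumed co-registered T1-weighted volumes."""
    e = early_img / np.median(early_img[brain_mask])
    d = delayed_img / np.median(delayed_img[brain_mask])
    diff = np.where(brain_mask, d - e, 0.0)
    # Positive values: contrast accumulating late (slow leakage across
    # a subtly disrupted BBB); negative: contrast clearing from vessels.
    return diff
```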

  6. The application of MRI for depiction of subtle blood brain barrier disruption in stroke.

    PubMed

    Israeli, David; Tanne, David; Daniels, Dianne; Last, David; Shneor, Ran; Guez, David; Landau, Efrat; Roth, Yiftach; Ocherashvilli, Aharon; Bakon, Mati; Hoffman, Chen; Weinberg, Amit; Volk, Talila; Mardor, Yael

    2010-12-26

    The development of imaging methodologies for detecting blood-brain-barrier (BBB) disruption may help predict stroke patients' propensity to develop hemorrhagic complications following reperfusion. We have developed a delayed contrast extravasation MRI-based methodology enabling real-time depiction of subtle BBB abnormalities in humans with high sensitivity to BBB disruption and high spatial resolution. The increased sensitivity to subtle BBB disruption is obtained by acquiring T1-weighted MRI at relatively long delays (~15 minutes) after contrast injection and subtracting from them images acquired immediately after contrast administration. In addition, the relatively long delays allow for acquisition of high resolution images resulting in high resolution BBB disruption maps. The sensitivity is further increased by image preprocessing with corrections for intensity variations and with whole body (rigid+elastic) registration. Since only two separate time points are required, the time between the two acquisitions can be used for acquiring routine clinical data, keeping the total imaging time to a minimum. A proof of concept study was performed in 34 patients with ischemic stroke and 2 patients with brain metastases undergoing high resolution T1-weighted MRI acquired at 3 time points after contrast injection. The MR images were pre-processed and subtracted to produce BBB disruption maps. BBB maps of patients with brain metastases and ischemic stroke presented different patterns of BBB opening. The significant advantage of the long extravasation time was demonstrated by a dynamic-contrast-enhancement study performed continuously for 18 min. The high sensitivity of our methodology enabled depiction of clear BBB disruption in 27% of the stroke patients who did not have abnormalities on conventional contrast-enhanced MRI. In 36% of the patients, who had abnormalities detectable by conventional MRI, the BBB disruption volumes were significantly larger in the maps than in conventional MRI. These results demonstrate the advantages of delayed contrast extravasation in increasing the sensitivity to subtle BBB disruption in ischemic stroke patients. The calculated disruption maps provide clear depiction of significant volumes of BBB disruption unattainable by conventional contrast-enhanced MRI.

  7. Actinometric measurements and theoretical calculations of j(O3), the rate of photolysis of ozone to O(1D)

    NASA Technical Reports Server (NTRS)

    Dickerson, R. R.; Stedman, D. H.; Chameides, W. L.; Crutzen, P. J.; Fishman, J.

    1979-01-01

    The paper presents an experimental technique which measures j(O3→O(1D)), the rate of solar photolysis of ozone to singlet oxygen atoms. A flow actinometer carries dilute O3 in N2O into direct sunlight, where the O(1D) formed reacts with N2O to form NO, which is detected by chemiluminescence with a time resolution of about one minute. Measurements indicate a photolysis rate of 1.2 (±0.2) × 10⁻⁵ s⁻¹ for a cloudless sky, 45 deg zenith angle, 0.345 cm ozone column and zero albedo. Finally, ground-level results are compared with theoretical calculations based on the UV actinic flux as a function of ozone column and solar zenith angle.

  8. Retrieving plasmonic field information from metallic nanospheres using attosecond photoelectron streaking spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Jianxiong; Saydanzad, Erfan; Thumm, Uwe

    2017-04-01

    Streaked photoemission by attosecond extreme ultraviolet (XUV) pulses into an infrared (IR) or visible streaking pulse holds promise for imaging, with sub-fs time resolution, the dielectric plasmonic response of metallic nanoparticles to the IR or visible streaking pulse. We calculated the plasmonic field induced by streaking pulses for 10 to 200 nm diameter Au, Ag, and Cu nanospheres and obtained streaked photoelectron spectra by employing our quantum-mechanical model. Our simulated spectra show significant oscillation-amplitude enhancements and phase shifts for all three metals (relative to spectra calculated without including the induced plasmonic field) and allow the reconstruction of the plasmonic field enhancements and phase shifts for each material. Supported by the US NSF EPSCoR program, NSF, and DoE.

  9. NUclear EVacuation Analysis Code (NUEVAC): a tool for evaluation of sheltering and evacuation responses following urban nuclear detonations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshimura, Ann S.; Brandt, Larry D.

    2009-11-01

    The NUclear EVacuation Analysis Code (NUEVAC) has been developed by Sandia National Laboratories to support the analysis of shelter-evacuate (S-E) strategies following an urban nuclear detonation. This tool can model a range of behaviors, including complex evacuation timing and path selection, as well as various sheltering or mixed evacuation and sheltering strategies. The calculations are based on externally generated, high resolution fallout deposition and plume data. Scenario setup and calculation outputs make extensive use of graphics and interactive features. This software is designed primarily to produce quantitative evaluations of nuclear detonation response options. However, the outputs have also proven useful in the communication of technical insights concerning shelter-evacuate tradeoffs to urban planning or response personnel.

  10. Sensitivity of the High-Resolution WAM Model with Respect to Time Step

    NASA Astrophysics Data System (ADS)

    Kasemets, K.; Soomere, T.

    The northern part of the Baltic Proper and its subbasins (Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with a characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has an up to 50 m high cliff that is frequently covered by high forests. The area also contains numerous banks with water depths of a couple of meters that may essentially modify wave properties near the banks owing to topographical effects. This feature suggests that a high-resolution wave model should be applied for the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive with respect to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most of the wind directions, the rms difference of significant wave heights calculated with different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of the north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean significant wave height over the whole Baltic Sea was 2.4 m (1 minute time step) and 2.04 m (3 minute time step), respectively. The most probable reason for this difference is that the WAM model with a relatively large time step poorly describes wave field evolution in the Aland area, with its extremely ragged bottom topography and coastline. In earlier studies, it has been reported that the WAM model frequently underestimates wave heights in the northern Baltic Proper by 20-30% in the case of strong north storms (Tuomi et al., Report series of the Finnish Institute of Marine Research, 1999). The described results suggest that a part of this underestimation may be removed through a proper choice of the time step.

  11. Imaging trace gases in volcanic plumes with Fabry Perot Interferometers

    NASA Astrophysics Data System (ADS)

    Kuhn, Jonas; Platt, Ulrich; Bobrowski, Nicole; Lübcke, Peter; Wagner, Thomas

    2017-04-01

    Within the last decades, progress in remote sensing of atmospheric trace gases has revealed many important insights into physical and chemical processes in volcanic plumes. In particular, their evolution could be studied in more detail than by traditional in-situ techniques. A major limitation of standard techniques for volcanic trace gas remote sensing (e.g. Differential Optical Absorption Spectroscopy, DOAS) is the constraint of the measurement to a single viewing direction, since they use dispersive spectroscopy with a high spectral resolution. Imaging DOAS-type approaches can overcome this limitation, but become very time consuming (of the order of minutes to record a single image) and often cannot match the timescales of the processes of interest for volcanic gas measurements (of the order of seconds). Spatially resolved imaging observations with high time resolution of volcanic sulfur dioxide (SO2) emissions became possible with the introduction of the SO2-Camera. By reducing the spectral resolution to two spectral channels (using interference filters) matched to the SO2 absorption spectrum, the SO2-Camera is able to record full-frame SO2 slant column density distributions at a temporal resolution of the order of <1 s. This, for instance, allows studying variations in SO2 fluxes on very short time scales and applying them in magma dynamics models. However, the currently employed SO2-Camera technique is limited to SO2 detection and, due to its coarse spectral resolution, has limited spectral selectivity. This restricts its application to very specific, infrequently found measurement conditions. Here we present a new approach based on matching the transmission profile of Fabry-Perot interferometers (FPIs) to periodic spectral absorption features of trace gases. The FPI's transmission spectrum is chosen to achieve a high correlation with the spectral absorption of the trace gas, allowing high selectivity and sensitivity while still using only a few spectral channels. This would not only improve SO2 imaging, but also allow the application of the technique to further gases of interest in volcanology (and other areas of atmospheric research). Imaging halogen species would be particularly interesting for volcanic trace gas studies. Bromine monoxide (BrO) and chlorine dioxide (OClO) both exhibit absorption features that allow their detection with the FPI correlation technique. From BrO and OClO data, ClO levels in the plume could be calculated. We present an outline of applications of the FPI technique to imaging a series of trace gases in volcanic plumes. Sample calculations of the sensitivity and selectivity of the technique, first proof-of-concept studies and proposals for technical implementations are presented.

  12. SAR and scan-time optimized 3D whole-brain double inversion recovery imaging at 7T.

    PubMed

    Pracht, Eberhard D; Feiweier, Thorsten; Ehses, Philipp; Brenner, Daniel; Roebroeck, Alard; Weber, Bernd; Stöcker, Tony

    2018-05-01

    The aim of this project was to implement an ultra-high field (UHF) optimized double inversion recovery (DIR) sequence for gray matter (GM) imaging, enabling whole brain coverage in short acquisition times (≈5 min, image resolution 1 mm3). A 3D variable flip angle DIR turbo spin echo (TSE) sequence was optimized for UHF application. We implemented an improved, fast, and specific absorption rate (SAR) efficient TSE imaging module, utilizing improved reordering. The DIR preparation was tailored to UHF application. Additionally, fat artifacts were minimized by employing water excitation instead of fat saturation. GM images, covering the whole brain, were acquired in 7 min scan time at 1 mm isotropic resolution. SAR issues were overcome by using a dedicated flip angle calculation considering SAR and SNR efficiency. Furthermore, UHF related artifacts were minimized. The suggested sequence is suitable to generate GM images with whole-brain coverage at UHF. Due to the short total acquisition times and overall robustness, this approach can potentially enable DIR application in a routine setting and enhance lesion detection in neurological diseases. Magn Reson Med 79:2620-2628, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Assessing Aridity, Hydrological Drought, and Recovery Using GRACE and GLDAS: a Case Study in Iraq

    NASA Astrophysics Data System (ADS)

    Moradkhani, H.; Almamalachy, Y. S.; Yan, H.; Ahmadalipour, A.; Irannezhad, M.

    2016-12-01

    Iraq has suffered from several drought events during the period 2003-2012, which imposed substantial impacts on the natural environment and socioeconomic sectors, e.g. lower discharge of the Tigris and Euphrates, groundwater depletion and increased salinity, population migration, and agricultural degradation. To investigate the aridity and climatology of Iraq, Global Land Data Assimilation System (GLDAS) monthly datasets of precipitation, temperature, and evapotranspiration at 0.25 degree spatial resolution are used. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived monthly Terrestrial Water Storage (TWS) deficit is used as the hydrological drought indicator. The data are available globally at 1 degree spatial resolution. This study aims to monitor hydrological drought and assess drought recovery time for the period from August 2002 to December 2015. Two approaches are implemented to derive the GRACE-based TWS deficit. The first approach estimates the TWS deficit as the difference from its own climatology, while the second approach directly calculates the deficit from the TWS anomaly. The severity of drought events is calculated by integrating the monthly water deficit over the drought period. The results indicate that both methods are capable of capturing the severe drought events in Iraq, while the second approach quantifies higher deficit and severity. In addition, two methods are employed to assess drought recovery time based on the estimated deficit. Both methods indicate similar drought recovery times, varying from less than a month to 9 months. The results demonstrate that GRACE TWS is a reliable indicator for drought assessment over Iraq, and provides useful information to decision makers for developing drought adaptation and mitigation strategies over data-sparse regions.
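
    The first of the two deficit approaches can be sketched directly: deficit as the negative departure from the month-of-year climatology, and severity as the deficit integrated over each contiguous drought event. This is a simplified reading of the abstract, not the authors' code:

```python
import numpy as np

def tws_deficit_and_severity(tws, months):
    """tws: monthly TWS anomaly series; months: month index (1-12)
    of each sample. Returns the monthly deficit and the severity of
    each contiguous drought event."""
    clim = np.array([tws[months == m].mean() for m in range(1, 13)])
    deficit = np.minimum(tws - clim[months - 1], 0.0)  # negative = deficit
    severities, run = [], 0.0
    for d in deficit:
        if d < 0:
            run += d                 # accumulate within a drought event
        elif run < 0:
            severities.append(run)   # event ended; record its severity
            run = 0.0
    if run < 0:
        severities.append(run)       # event still open at series end
    return deficit, severities
```

    The second approach of the abstract would skip the climatology subtraction and threshold the TWS anomaly itself.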

  14. From fast fluorescence imaging to molecular diffusion law on live cell membranes in a commercial microscope.

    PubMed

    Di Rienzo, Carmine; Gratton, Enrico; Beltram, Fabio; Cardarelli, Francesco

    2014-10-09

    It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, a very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently-labeled plasma-membrane proteins and lipids in live cells with high spatio-temporal resolution. Notably, this approach does not require tracking each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region on the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating the acquired images at increasing time delays, for example every 2, 3, ..., n repetitions. It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions enables extraction of the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-millisecond time range.
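
    A minimal sketch of the iMSD pipeline described here: FFT-based spatial correlation at each time lag, followed by a Gaussian fit whose squared width traces the mean square displacement. The PSF-width offset subtraction and other refinements of the full protocol are omitted, and the fit initialisation is an illustrative choice:

```python
import numpy as np
from scipy.optimize import curve_fit

def spatial_correlation(stack, lag):
    """Average spatial cross-correlation between frames separated by
    `lag` (>= 1), computed via FFT. stack: (n_frames, ny, nx)."""
    f = stack - stack.mean(axis=(1, 2), keepdims=True)
    acc = 0.0
    for a, b in zip(f[:-lag], f[lag:]):
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        acc += np.fft.fftshift(corr)
    return acc / (len(f) - lag)

def imsd(stack, lags, pixel_size, frame_time):
    """Fit each correlation peak with a Gaussian; the growth of its
    squared width with time lag is the iMSD curve."""
    ny, nx = stack.shape[1:]
    y, x = np.indices((ny, nx))
    r2 = ((x - nx // 2) ** 2 + (y - ny // 2) ** 2) * pixel_size ** 2
    gauss = lambda r2, a, s2, c: a * np.exp(-r2 / s2) + c
    sigma2 = []
    for lag in lags:
        g = spatial_correlation(stack, lag)
        p, _ = curve_fit(gauss, r2.ravel(), g.ravel(),
                         p0=(g.max(), pixel_size ** 2, 0.0))
        sigma2.append(p[1])  # squared width of the correlation peak
    return np.array(lags) * frame_time, np.array(sigma2)
```

    Plotting the returned widths against lag time, and converting slope to apparent diffusivity, reproduces the diffusivity-versus-displacement representation described above.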

  15. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  16. Resolving topographic detail on Venus by modeling complex Magellan altimetry echoes

    NASA Technical Reports Server (NTRS)

    Lovell, Amy J.; Schloerb, F. Peter; Mcgill, George E.

    1993-01-01

    Magellan's altimeter is providing some of the finest-resolution topography of Venus achieved to date. Nevertheless, efforts continue to improve the topographic resolution whenever possible. One effort to this end is stereoscopic imaging, which provides topography at scales similar to that of the synthetic aperture radar (SAR). However, this technique requires two SAR images of the same site, which limits its utility. In this paper, we present another method to resolve topographic features at scales smaller than that of an altimeter footprint, one that is more globally applicable than the stereoscopic approach. Each pulse transmitted by Magellan's altimeter scatters from the planet and echoes back to the receiver, with delays determined by the distance between the spacecraft and each surface element. As resolved in time, each element of an altimetry echo represents the sum over all points on the surface that are equidistant from the spacecraft. Thus, the individual returns, as a function of time, form an echo profile which may be used to derive properties of the surface, such as the scattering law or, in this case, the topography within the footprint. The Magellan project has derived some of this information by fitting model templates to radar echo profiles. The templates are calculated based on Hagfors' law, which assumes a smooth, gently undulating surface. In most regions these templates provide a reasonable fit to the observed echo profile; however, in some cases the surface departs from these simple assumptions and more complex profiles are observed. Specifically, we note that sub-footprint topographic relief apparently has a strong effect on the shape of the echo profile. To demonstrate the effects of sub-resolution relief on echo profiles, we have calculated the echo shapes from a wide range of simple topographic models. At this point, our topographic models have emphasized surfaces where only two dominant elevations are contained within a footprint, such as graben, ridges, crater rims, and central features in impact craters.
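
    The echo-profile idea, each surface element contributing at a delay set by its range, can be demonstrated with a delay histogram over a model DEM. This sketch assumes uniform scattering rather than a Hagfors law, and all geometry parameters are illustrative:

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def echo_profile(dem, dx, spacecraft_alt, t_bins):
    """Model echo profile for a nadir-looking altimeter: histogram the
    two-way delays of all DEM elements within the footprint.
    dem: elevations (m) on a grid of spacing dx (m);
    spacecraft_alt: altitude above the reference surface (m)."""
    ny, nx = dem.shape
    y = (np.arange(ny) - ny / 2) * dx
    x = (np.arange(nx) - nx / 2) * dx
    X, Y = np.meshgrid(x, y)
    r = np.sqrt(X**2 + Y**2 + (spacecraft_alt - dem)**2)
    delays = 2.0 * r / C
    profile, _ = np.histogram(delays.ravel(), bins=t_bins)
    return profile

# A two-level surface (e.g. a graben floor within the footprint)
# produces a double-peaked echo, the kind of signature the paper
# exploits to recover sub-footprint relief.
```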

  17. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey a linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. The resulting impulse responses, or 'Green's functions', then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre is shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to the use of spatially correlated errors, and to other data types.
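
    The resulting linear inverse problem has the familiar damped least-squares form. A minimal sketch, with a scalar damping weight standing in for the full error-covariance treatment the paper would use:

```python
import numpy as np

def invert_with_greens_functions(G, d, damping=1.0):
    """Damped least-squares solution of d = G m, where column j of G is
    the model's response ("Green's function") to the j-th impulsive
    geostrophic vortex and d is the observed model-minus-data misfit."""
    n = G.shape[1]
    A = G.T @ G + damping * np.eye(n)  # normal equations with damping
    return np.linalg.solve(A, G.T @ d)
```

    Replacing the scalar damping with prior and observation covariance matrices turns this into the Gauss-Markov estimate commonly used in such assimilation studies.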

  18. NASA/GEWEX Surface Radiation Budget: Integrated Data Product With Reprocessed Radiance, Cloud, and Meteorology Inputs, and New Surface Albedo Treatment

    NASA Technical Reports Server (NTRS)

    Cox, Stephen J.; Stackhouse, Paul W., Jr.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping

    2016-01-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the period from 1983 to the near present. Spatial resolution is 1 degree. The current release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number will allow a higher-resolution SRB gridded product (e.g., 0.5 degree), as well as the production of pixel-level fluxes. In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data, the various other improved input data sets, and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  19. NASA/GEWEX Surface Radiation Budget: First Results From The Release 4 GEWEX Integrated Data Products

    NASA Astrophysics Data System (ADS)

    Stackhouse, Paul; Cox, Stephen; Gupta, Shashi; Mikovitz, J. Colleen; Zhang, Taiping

    2016-04-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the period from 1983 to the near present. Spatial resolution is 1 degree. The current release 3 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number should help improve the RMS of the existing products and allow for a future higher-resolution SRB gridded product (e.g., 0.5 degree). In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data, the various other improved input data sets, and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  20. Measurement of the Retention Time of Different Ophthalmic Formulations with Ultrahigh-Resolution Optical Coherence Tomography.

    PubMed

    Gagliano, Caterina; Papa, Vincenzo; Amato, Roberta; Malaguarnera, Giulia; Avitabile, Teresio

    2018-04-01

    Purpose/aim of the study: The purpose of this study was to measure the pre-corneal retention time of two marketed formulations (eye drops and eye gel) of a steroid-antibiotic fixed combination (FC) containing 0.1% dexamethasone and 0.3% netilmicin. Pre-corneal retention time was evaluated in 16 healthy subjects using ultrahigh-resolution anterior segment spectral domain optical coherence tomography (OCT). All subjects randomly received both formulations of the FC (Netildex, SIFI, Italy). Central tear film thickness (CTFT) was measured before instillation (time 0) and then after 1, 10, 20, 30, 40, 50, 60 and 120 min. The pre-corneal retention time was calculated by plotting CTFT as a function of time. Differences between time points and groups were analyzed by Student's t-test. CTFT increased significantly after the instillation of the eye gel formulation (p < 0.001). CTFT reached its maximum value 1 min after instillation and returned to baseline after 60 min. No effect on CTFT was observed after the instillation of eye drops. The difference between the two formulations was statistically significant at 1 min (p < 0.0001), 10 min (p < 0.001) and 20 min (p < 0.01). The FC formulated as an eye gel was retained on the ocular surface longer than the corresponding eye drop solution. Consequently, the use of the eye gel might extend the interval between instillations and decrease the frequency of administration.

  1. Ultrafast time measurements by time-correlated single photon counting coupled with superconducting single photon detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shcheslavskiy, V., E-mail: vis@becker-hickl.de; Becker, W.; Morozov, P.

    Time resolution is one of the main characteristics of single photon detectors, besides quantum efficiency and dark count rate. We demonstrate here an ultrafast time-correlated single photon counting (TCSPC) setup consisting of a newly developed single photon counting board SPC-150NX and a superconducting NbN single photon detector with a sensitive area of 7 × 7 μm. The combination delivers a record instrument response function with a full width at half maximum of 17.8 ps and a system quantum efficiency of ∼15% at a wavelength of 1560 nm. A calculation of the root mean square value of the timing jitter for channels with counts more than 1% of the peak value yielded about 7.6 ps. The setup also has good timing stability of the detector–TCSPC board combination.
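
    A minimal sketch of the quoted jitter arithmetic, not the vendor software: given an instrument response function (IRF) histogram, the FWHM and the RMS timing jitter over channels with counts above 1% of the peak can be computed as follows (the synthetic Gaussian IRF is illustrative).

```python
import numpy as np

def irf_stats(counts, bin_width_ps):
    """FWHM and RMS jitter of an IRF histogram, per the 1%-of-peak rule."""
    counts = np.asarray(counts, dtype=float)
    t = np.arange(counts.size) * bin_width_ps

    # FWHM: span of channels at or above half the peak count.
    above = np.flatnonzero(counts >= counts.max() / 2)
    fwhm = (above[-1] - above[0]) * bin_width_ps

    # RMS jitter restricted to channels above 1% of the peak value.
    sel = counts > 0.01 * counts.max()
    w, ts = counts[sel], t[sel]
    mean = np.average(ts, weights=w)
    rms = np.sqrt(np.average((ts - mean) ** 2, weights=w))
    return fwhm, rms

# Synthetic example: Gaussian IRF with 17.8 ps FWHM sampled in 0.4 ps bins.
bins = np.arange(0, 200, 0.4)
sigma = 17.8 / 2.3548
irf = 1e5 * np.exp(-0.5 * ((bins - 100) / sigma) ** 2)
print(irf_stats(irf, 0.4))
```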

  2. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    Computational Fluid Dynamics (CFD) has evolved considerably in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missiles, and ramjet nozzles. One of the problems of interest to NASA has always been performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines, it is necessary to resolve the flowfield on a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid carries a prohibitive increase in computational time that can diminish the value of CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to that of the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF-defined combustion chamber wall geometry. These options minimize the learning effort for TDK users and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single-zone formulation of the LTCP has limited the code's applicability to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, a two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code was rewritten to include a multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
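
    A schematic illustration of the decompose/solve/exchange pattern, not the LTCP source: a 1-D relaxation problem split into zones, each zone swept on its own process, with interface values exchanged between sweeps. Zone count, sweep count, and boundary values are arbitrary.

```python
from multiprocessing import Pool

import numpy as np

def relax_zone(args):
    """One Jacobi sweep on a zone, given its left/right ghost values."""
    zone, left, right = args
    padded = np.concatenate(([left], zone, [right]))
    return 0.5 * (padded[:-2] + padded[2:])

def solve(field, n_zones=4, sweeps=2000):
    zones = np.array_split(field, n_zones)
    with Pool(n_zones) as pool:                  # one process per zone
        for _ in range(sweeps):
            args = []
            for i, z in enumerate(zones):
                left = zones[i - 1][-1] if i > 0 else 1.0            # BC = 1
                right = zones[i + 1][0] if i < n_zones - 1 else 0.0  # BC = 0
                args.append((z, left, right))
            zones = pool.map(relax_zone, args)   # zones solved in parallel
    return np.concatenate(zones)

if __name__ == "__main__":
    print(solve(np.zeros(64))[::8])   # approaches the linear steady profile
```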

  3. CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, Natalie

    2016-09-30

    COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh comprised of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, only on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
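
    A minimal sketch with placeholder numbers: β as either a fixed value or a generic power-law Reynolds correlation. The coefficients a and b below are hypothetical; in practice they would be calibrated against STAR-CCM+ mixing results, as the report describes.

```python
def mixing_beta(reynolds, a=0.005, b=-0.1, constant=None):
    """Turbulent mixing coefficient for a subchannel gap.

    Either returns a user-supplied constant, or evaluates a generic
    power-law correlation beta = a * Re**b with hypothetical coefficients.
    """
    if constant is not None:
        return constant
    return a * reynolds ** b

print(mixing_beta(5.0e5))                   # correlation value
print(mixing_beta(5.0e5, constant=0.007))   # fixed-beta fallback
```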

  4. Design and construction of an Offner spectrometer based on geometrical analysis of ring fields.

    PubMed

    Kim, Seo Hyun; Kong, Hong Jin; Lee, Jong Ung; Lee, Jun Ho; Lee, Jai Hoon

    2014-08-01

    A method to obtain an aberration-corrected Offner spectrometer without ray obstruction is proposed. A new, more efficient spectrometer optics design is suggested in order to increase spectral resolution. The derivation of a new ring equation to eliminate ray obstruction is based on geometrical analysis of the ring fields for various numerical apertures. The analytical design applying this equation was demonstrated using the optical design software Code V in order to manufacture a spectrometer working at wavelengths of 900-1700 nm. The simulation results show that the new concept offers an analytical initial design requiring the least calculation time. The simulated spectrometer exhibited a modulation transfer function over 80% at the Nyquist frequency, root-mean-square spot diameters under 8.6 μm, and a spectral resolution of 3.2 nm. The final design and realization of a high-resolution Offner spectrometer were demonstrated based on the simulation results. The equation and analytical design procedure shown here can be applied to most Offner systems regardless of the wavelength range.

  5. Localized surface plasmon resonance nanosensor: a high-resolution distance-dependence study using atomic layer deposition.

    PubMed

    Whitney, Alyson V; Elam, Jeffrey W; Zou, Shengli; Zinovev, Alex V; Stair, Peter C; Schatz, George C; Van Duyne, Richard P

    2005-11-03

    Atomic layer deposition (ALD) is used to deposit 1-600 monolayers of Al2O3 on Ag nanotriangles fabricated by nanosphere lithography (NSL). Each monolayer of Al2O3 has a thickness of 1.1 Å. It is demonstrated that the localized surface plasmon resonance (LSPR) nanosensor can detect Al2O3 film growth with atomic spatial resolution normal to the nanoparticle surface. This is approximately 10 times greater spatial resolution than that in our previous long-range distance-dependence study using multilayer self-assembled monolayer shells. The use of ALD enables the study of both the long- and short-range distance dependence of the LSPR nanosensor in a single unified experiment. Ag nanoparticles with fixed in-plane widths and decreasing heights yield larger sensing distances. X-ray photoelectron spectroscopy, variable angle spectroscopic ellipsometry, and quartz crystal microbalance measurements are used to study the growth mechanism. It is proposed that the growth of Al2O3 is initiated by the decomposition of trimethylaluminum on Ag. Semiquantitative theoretical calculations were compared with the experimental results and yield excellent agreement.

  6. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation -- many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.

  7. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
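
    A minimal parareal sketch for a scalar ODE, mirroring the predictor-corrector structure described above but not the CEA diffusion solver: a coarse implicit-Euler propagator G and a fine sub-stepped propagator F, combined in the standard correction sweep. The equation and step counts are illustrative.

```python
import numpy as np

# Test problem: y' = lam * y on [0, T], split into n_slices time slices.
lam, T, n_slices = -2.0, 2.0, 20
dt = T / n_slices

def G(y, dt):                      # coarse: a single implicit Euler step
    return y / (1.0 - lam * dt)

def F(y, dt, m=50):                # fine: m explicit Euler sub-steps
    h = dt / m
    for _ in range(m):
        y = y + h * lam * y
    return y

y = np.empty(n_slices + 1)
y[0] = 1.0
for n in range(n_slices):          # serial coarse prediction
    y[n + 1] = G(y[n], dt)

for k in range(5):                 # parareal correction iterations
    f = [F(y[n], dt) for n in range(n_slices)]      # parallelizable loop
    g_old = [G(y[n], dt) for n in range(n_slices)]
    for n in range(n_slices):      # sequential update sweep
        y[n + 1] = G(y[n], dt) + f[n] - g_old[n]

print(y[-1], np.exp(lam * T))      # converges toward the exact solution
```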

  8. Soil erodibility in Europe: a high-resolution dataset based on LUCAS.

    PubMed

    Panagos, Panos; Meusburger, Katrin; Ballabio, Cristiano; Borrelli, Pasquale; Alewell, Christine

    2014-05-01

    The greatest obstacle to soil erosion modelling at larger spatial scales is the lack of data on soil characteristics. One key parameter for modelling soil erosion is the soil erodibility, expressed as the K-factor in the widely used soil erosion model, the Universal Soil Loss Equation (USLE) and its revised version (RUSLE). The K-factor, which expresses the susceptibility of a soil to erode, is related to soil properties such as organic matter content, soil texture, soil structure and permeability. With the Land Use/Cover Area frame Survey (LUCAS) soil survey in 2009, a pan-European soil dataset is available for the first time, consisting of around 20,000 points across 25 Member States of the European Union. The aim of this study is the generation of a harmonised high-resolution soil erodibility map (with a grid cell size of 500 m) for the 25 EU Member States. Soil erodibility was calculated for the LUCAS survey points using the nomograph of Wischmeier and Smith (1978). A Cubist regression model was applied to correlate spatial data such as latitude, longitude, remotely sensed and terrain features in order to develop a high-resolution soil erodibility map. The mean K-factor for Europe was estimated at 0.032 t ha h ha(-1) MJ(-1) mm(-1) with a standard deviation of 0.009 t ha h ha(-1) MJ(-1) mm(-1). The resulting soil erodibility dataset compared well with published local and regional soil erodibility data. However, the incorporation of the protective effect of surface stone cover, which is usually not considered in soil erodibility calculations, resulted in an average 15% decrease of the K-factor. The exclusion of this effect in K-factor calculations is likely to result in an overestimation of soil erosion, particularly for the Mediterranean countries, where the highest percentages of surface stone cover were observed. Copyright © 2014. Published by Elsevier B.V.
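
    A hedged sketch of the Wischmeier and Smith (1978) nomograph in its common algebraic form (the textbook approximation, not the exact EU-wide procedure of this paper); the example input values are illustrative.

```python
def k_factor(silt_vfs, clay, om, s, p):
    """USLE erodibility K from the algebraic nomograph approximation.

    silt_vfs: % silt + very fine sand; clay: % clay; om: % organic matter;
    s: structure code (1-4); p: permeability class (1-6).
    """
    m = silt_vfs * (100.0 - clay)            # particle-size parameter M
    k_us = (2.1e-4 * m**1.14 * (12.0 - om)
            + 3.25 * (s - 2.0) + 2.5 * (p - 3.0)) / 100.0
    return 0.1317 * k_us                     # convert to t ha h ha-1 MJ-1 mm-1

# Example: a silty soil with 2% organic matter, fine granular structure,
# moderate permeability.
print(round(k_factor(silt_vfs=65.0, clay=15.0, om=2.0, s=2, p=3), 3))
```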

  9. A study on the anisole-water complex by molecular beam-electronic spectroscopy and molecular mechanics calculations.

    PubMed

    Becucci, M; Pietraperzia, G; Pasquini, M; Piani, G; Zoppi, A; Chelli, R; Castellucci, E; Demtroeder, W

    2004-03-22

    An experimental and theoretical study is made on the anisole-water complex. It is the first van der Waals complex studied by high resolution electronic spectroscopy in which the water is seen acting as an acid. Vibronically and rotationally resolved electronic spectroscopy experiments and molecular mechanics calculations are used to elucidate the structure of the complex in the ground and first electronic excited state. Some internal dynamics in the system is revealed by high resolution spectroscopy. (c) 2004 American Institute of Physics

  10. A landscape indicator approach to the identification and articulation of the consequences of land-cover change in the Mid-Atlantic Region, 1973-2001

    USGS Publications Warehouse

    Slonecker, E. Terrence; Milheim, Lesley E.; Claggett, Peter

    2009-01-01

    Landscape indicators, derived from land-use and land-cover data, hydrology, nitrate deposition, and elevation data, were used by Jones and others (2001a) to calculate the ecological consequences of land-cover change. Nitrate loading and physical bird habitat were modeled from 1973 and 1992 land-cover and other spatial data for the Mid-Atlantic region. Utilizing the same methods, this study extends the analysis another decade with the use of the 2001 National Land Cover Dataset. Land-cover statistics and trends are calculated for three time periods: 1973-1992, 1992-2001 and 1973-2001. In addition, high-resolution aerial photographs (1 meter or better ground-sample distance) were acquired and analyzed for thirteen pairs of adjacent USGS 7.5-minute quadrangle maps in areas where distinct positive or negative changes to nitrogen loading and bird habitat were previously calculated. During the entire 30-year period, the data show that there was extensive loss of agriculture and forest area and a major increase in urban land-cover classes. However, the majority of the conversion of other classes to urban occurred during the 1992-2001 period. During the 1973-1992 period, there was only a moderate increase in urban area, while there was an inverse relationship between agricultural change and forest change. In general, forest gain and agricultural loss were found in areas of improving landscape indicators, and forest loss and agricultural gain were found in areas of declining indicators related to habitat and nitrogen loadings, which was generally confirmed by the aerial photographic analysis. In terms of the specific model results, bird habitat, which is mainly related to the extent of forest cover, declined overall with forest extent, but was affected even more by the decline in habitat quality. Nitrate loading, which is mainly related to agricultural land cover, actually improved from 1992 to 2001 and over the study as a whole, mainly due to the conversion of agriculture to forest and urban land. The high-resolution imagery analysis was significant in that it confirmed, at a very local level, the specific land-cover changes that were driving the landscape metrics and model results calculated from moderate-resolution land-cover data and models. These were generally subtle changes in patch size of agriculture, forest, and urban areas, but they had substantial effects on bird habitat and nitrogen loadings. This analysis of high-resolution imagery demonstrates and confirms the important ability of moderate-resolution land-cover data to capture significant landscape-level activity that is directly related to specific metrics of ecological significance. It also demonstrates consistent landscape-scale relationships between data derived from high-resolution, moderate-resolution and landscape-model sources. Finally, many of the areas of improvement and decline in bird habitat and nitrogen loadings appear to be regional in nature and likely reflect local trends in landscape activity. Although the use of ecoregions as sampling units has been criticized in recent years, these results show that basic changes in Level 1 land-cover categories, such as forest and agriculture, may still reflect ecoregional patterns and considerations at some scale of mapping and analysis. This is a potentially important area for future landscape-indicator research. This and other follow-on research opportunities are discussed.

  11. Agricultural Recharge Practices for Managing Nitrate in Regional Groundwater: Time-Resolution Assessment of Numerical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Bastani, M.; Harter, T.

    2017-12-01

    Intentional recharge practices in irrigated landscapes are promising options to control and remediate groundwater quality degradation with respect to nitrate. To better understand the effect of these practices, a fully 3D transient heterogeneous transport model is developed using MODFLOW and MT3D. The model is developed for a long-term study of nitrate improvements in an alluvial groundwater basin in the Eastern San Joaquin Valley, CA. Different scenarios of agricultural recharge strategies, including crop type change and winter flood flows, are investigated. Transient simulations with high spatio-temporal resolution are performed. We then consider upscaling strategies that would allow us to simplify the modeling process such that it can be applied at very large basin scales (thousands of square kilometers) for scenario analysis. We specifically consider upscaling of time-variant boundary conditions (both internal and external) that have a significant influence on the calculation cost of the model. We compare monthly transient stresses to upscaled annual and further upscaled average steady-state stresses on nitrate transport in groundwater under recharge scenarios.
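
    A minimal sketch of the three time resolutions being compared, on synthetic numbers: a monthly recharge stress series averaged to annual stresses and to a single steady-state stress. The series below is hypothetical, not the San Joaquin Valley data.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 10
monthly = rng.gamma(2.0, 15.0, size=years * 12)      # mm/month, synthetic

annual = monthly.reshape(years, 12).mean(axis=1)     # one stress per year
steady = monthly.mean()                              # single long-term stress

print("monthly range:", monthly.min().round(1), "-", monthly.max().round(1))
print("annual means :", annual.round(1))
print("steady state :", round(float(steady), 1))
```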

  12. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analyses and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller than in the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. To summarize, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
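
    A minimal sketch of the schemes under comparison, using an analytic solid-body rotation in place of the ECMWF wind fields: the explicit midpoint scheme and classical fourth-order Runge-Kutta, with the "transport deviation" taken against the exact solution after roughly one revolution.

```python
import numpy as np

def wind(x, t):
    """Kinematic trajectory equation dx/dt = v(x, t): solid-body rotation."""
    return np.array([-x[1], x[0]])

def step_midpoint(x, t, dt):
    k1 = wind(x, t)
    return x + dt * wind(x + 0.5 * dt * k1, t + 0.5 * dt)

def step_rk4(x, t, dt):
    k1 = wind(x, t)
    k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = wind(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

for stepper in (step_midpoint, step_rk4):
    x, t, dt = np.array([1.0, 0.0]), 0.0, 0.05
    for _ in range(int(round(2 * np.pi / dt))):   # about one revolution
        x, t = stepper(x, t, dt), t + dt
    exact = np.array([np.cos(t), np.sin(t)])      # analytic truth
    print(stepper.__name__, "deviation:", np.linalg.norm(x - exact))
```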

  13. Influence of heteroatom pre-selection on the molecular formula assignment of soil organic matter components determined by ultrahigh resolution mass spectrometry.

    PubMed

    Ohno, Tsutomu; Ohno, Paul E

    2013-04-01

    Soil organic matter (SOM) is involved in many important ecosystem processes. Ultrahigh resolution mass spectrometry has become a powerful technique in the chemical characterization of SOM, allowing assignment of elemental formulae for thousands of peaks resolved in a typical mass spectrum. We investigated how the addition of N, S, and P heteroatoms in the formula calculation stage of the mass spectra processing workflow affected the formula assignments of mass spectra peaks. Dissolved organic matter extracted from plant biomass and soil as well as the soil humic acid fraction was studied. We show that the addition of S and P into the molecular formula calculation increased peak assignments on average by 17.3 % and 10.7 %, respectively, over the assignments based on the CHON elements frequently reported by SOM researchers using ultrahigh resolution mass spectrometry. The organic matter chemical characteristics as represented by van Krevelen diagrams were appreciably affected by differences in the heteroatom pre-selection for the three organic matter samples investigated, especially so for the wheat-derived dissolved organic matter. These results show that inclusion of both S and P heteroatoms into the formula calculation step, which is not routinely done, is important to obtain a more chemically complete interpretation of the ultrahigh resolution mass spectra of SOM.
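
    A minimal sketch of heteroatom pre-selection, not the authors' assignment software: a brute-force CHNO(S)(P) search for one neutral monoisotopic mass, showing how admitting S and P changes the candidate formulas within a ppm tolerance. The target mass and element limits are hypothetical.

```python
# Monoisotopic masses of the elements considered in the search.
MASSES = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
          "O": 15.9949146221, "S": 31.97207069, "P": 30.97376151}

def assign(target, tol_ppm=1.0, use_s=False, use_p=False):
    """Return all CHNO(S)(P) formulas matching target mass within tol_ppm."""
    hits = []
    s_max = 2 if use_s else 0
    p_max = 2 if use_p else 0
    for c in range(1, 40):
        for n in range(0, 4):
            for o in range(0, 20):
                for s in range(0, s_max + 1):
                    for p in range(0, p_max + 1):
                        base = (c * MASSES["C"] + n * MASSES["N"]
                                + o * MASSES["O"] + s * MASSES["S"]
                                + p * MASSES["P"])
                        h = round((target - base) / MASSES["H"])
                        if h < 1:
                            continue
                        mass = base + h * MASSES["H"]
                        if abs(mass - target) / target * 1e6 <= tol_ppm:
                            hits.append(f"C{c}H{h}N{n}O{o}S{s}P{p}")
    return hits

m = 368.1107                       # hypothetical SOM peak, neutral mass
print("CHNO only :", assign(m))
print("CHNO + S,P:", assign(m, use_s=True, use_p=True))
```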

  14. Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2015-01-01

    The paper provides simulation data from previous work by the author on developing a model for estimating the detectability of crack-like flaws in radiography. The methodology is being developed to help in the implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. The applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. The simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brajnik, G., E-mail: gabriele.brajnik@elettra.eu; Carrato, S.; Bassanese, S.

    At Elettra, the Italian synchrotron light source, an internal project has been started to develop an electron beam position monitor capable of achieving sub-micron resolution with a self-compensation feature. In order to fulfil these requirements, a novel RF front end has been designed. A high-isolation coupler combines the input signals with a known pilot tone generated by the readout system. This allows the parameters of the four channels to be continuously calibrated, compensating the different responses of each channel. A similar technique is already known, but for the first time experimental results have shown the improvement in resolution due to this method. The RF chain was coupled with a 4-channel digitizer based on 160 MHz, 16-bit ADCs and an Altera Stratix FPGA. At first, no additional processing was done in the FPGA, collecting only the raw data from the ADCs; the position was calculated through the FFT of each signal. A simulation was also performed to verify the analytic relation between spatial resolution and signal-to-noise ratio; this was very useful for understanding the behaviour of the system under different sources of noise (aperture jitter, thermal noise, etc.). The experimental data were compared with the simulation, showing excellent agreement and confirming the capability of the system to reach sub-micrometric accuracy. The use of the pilot tone therefore greatly improves the quality of the system, correcting the drifts and increasing the spatial resolution by a factor of 4 in a time window of 24 hours.
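
    A minimal off-line sketch of the position step described above, with an assumed button layout and sensitivity: the amplitude of each channel is taken from its FFT and the usual difference-over-sum estimate is formed. The tone frequency and scale factors are hypothetical, not Elettra's.

```python
import numpy as np

FS, N = 160e6, 4096                      # ADC sample rate and record length
K_X = K_Y = 10.0                         # mm per unit asymmetry (assumed)

def button_amplitude(samples):
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(N)))
    return spectrum.max()                # take the strongest spectral line

def position(a, b, c, d):
    """a, b, c, d = top-right, top-left, bottom-left, bottom-right."""
    s = a + b + c + d
    x = K_X * ((a + d) - (b + c)) / s    # right minus left
    y = K_Y * ((a + b) - (c + d)) / s    # top minus bottom
    return x, y

# Synthetic test: four digitized button signals with a small x displacement.
t = np.arange(N) / FS
gains = [1.00, 0.98, 0.98, 1.00]         # TR, TL, BL, BR
chans = [g * np.sin(2 * np.pi * 11.3e6 * t) for g in gains]
print(position(*[button_amplitude(ch) for ch in chans]))
```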

  16. Quantifying the ultrastructure of carotid arteries using high-resolution micro-diffusion tensor imaging—comparison of intact versus open cut tissue

    NASA Astrophysics Data System (ADS)

    Salman Shahid, Syed; Gaul, Robert T.; Kerskens, Christian; Flamini, Vittoria; Lally, Caitríona

    2017-12-01

    Diffusion magnetic resonance imaging (dMRI) can provide insights into the microstructure of intact arterial tissue. The current study employed high magnetic field MRI to obtain ultra-high-resolution dMRI at an isotropic voxel resolution of 117 µm in less than 2 h of scan time. A parameter-selective single-shell (128 directions) diffusion-encoding scheme based on a Stejskal-Tanner sequence with echo-planar imaging (EPI) readout was used. EPI segmentation was used to reduce the echo time (TE) and to minimise susceptibility-induced artefacts. The study utilised dMRI analysis within the diffusion tensor imaging (DTI) framework to investigate structural heterogeneity in intact arterial tissue and to quantify variations in tissue composition when the tissue is cut open and flattened. For intact arterial samples, the region-of-interest-based comparison showed significant differences in fractional anisotropy and mean diffusivity across the media layer (p < 0.05). For open-cut flat samples, DTI-based directionally invariant indices did not show significant differences across the media layer. For intact samples, fibre-tractography-based indices such as the calculated helical angle and fibre dispersion showed near-circumferential alignment and a high degree of fibre dispersion, respectively. This study demonstrates the feasibility of fast dMRI acquisition with ultra-high spatial and angular resolution at 7 T. Using the optimised sequence parameters, this study shows that DTI-based markers are sensitive to local structural changes in intact arterial tissue samples, and these markers may have clinical relevance in the diagnosis of atherosclerosis and aneurysm.
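
    A minimal sketch of the DTI-derived indices named above, using the standard eigenvalue definitions of mean diffusivity (MD) and fractional anisotropy (FA) rather than the authors' pipeline; the example tensor is illustrative.

```python
import numpy as np

def dti_indices(tensor):
    """MD and FA from the eigenvalues of a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)          # eigenvalues of the tensor
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Example tensor (units of 1e-3 mm^2/s) with one dominant fibre direction.
D = np.diag([1.6, 0.4, 0.3])
md, fa = dti_indices(D)
print(f"MD = {md:.3f}, FA = {fa:.3f}")
```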

  17. Effect of injection rate on contrast-enhanced MR angiography image quality: Modulation transfer function analysis.

    PubMed

    Clark, Toshimasa J; Wilson, Gregory J; Maki, Jeffrey H

    2017-07-01

    Contrast-enhanced (CE)-MRA optimization involves interactions of sequence duration, bolus timing, contrast recirculation, and both R1 relaxivity and R2*-related reduction of signal. Prior data suggest superior image quality with slower gadolinium injection rates than typically used. A computer-based model of CE-MRA was developed, with contrast injection, physiologic, and image acquisition parameters varied over a wide gamut. Gadolinium concentration was derived using Verhoeven's model with recirculation, R1 and R2* were calculated at each time point, and modulation transfer curves were used to determine injection rates resulting in optimal resolution and image contrast for renal and carotid artery CE-MRA. Validation was via a vessel stenosis phantom and example patients who underwent carotid CE-MRA with low effective injection rates. Optimal resolution for renal and carotid CE-MRA is achieved with injection rates between 0.5 to 0.9 mL/s and 0.2 to 0.3 mL/s, respectively, dependent on contrast volume. Optimal image contrast requires slightly faster injection rates. Expected signal-to-noise ratio varies with both contrast volume and cardiac output. Simulated vessel phantom and clinical carotid CE-MRA exams at an effective contrast injection rate of 0.4 to 0.5 mL/s demonstrate increased resolution. Optimal image resolution is achieved at intuitively low effective injection rates (0.2-0.9 mL/s, dependent on imaging parameters and contrast injection volume). Magn Reson Med 78:357-369, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  18. Optical tomography of human skin with subcellular spatial and picosecond time resolution using intense near infrared femtosecond laser pulses

    NASA Astrophysics Data System (ADS)

    Koenig, Karsten; Wollina, Uwe; Riemann, Iris; Peukert, Christiane; Halbhuber, Karl-Juergen; Konrad, Helga; Fischer, Peter; Fuenfstueck, Veronika; Fischer, Tobias W.; Elsner, Peter

    2002-06-01

    We describe the novel high-resolution imaging tool DermaInspect 100 for non-invasive diagnosis of dermatological disorders based on multiphoton autofluorescence imaging (MAI) and second harmonic generation. Femtosecond laser pulses in the spectral range of 750 nm to 850 nm have been used to image in vitro and in vivo human skin with subcellular spatial and picosecond temporal resolution. The non-linearly induced autofluorescence originates mainly from endogenous fluorophores/protein structures like NAD(P)H, flavins, keratin, collagen, elastin, porphyrins and melanin. Second harmonic generation was observed in the stratum corneum and in the dermis. The system, with a wavelength-tunable compact 80 MHz Ti:sapphire laser, a scan module with galvo scan mirrors, a piezoelectric objective positioner, a fast photon detector and a time-resolved single photon counting unit, was used to perform optical sectioning and 3D autofluorescence lifetime imaging (t-mapping). In addition, a modified femtosecond laser scanning microscope was involved in autofluorescence measurements. Tissues of patients with psoriasis, nevi, dermatitis, basalioma and melanoma have been investigated. Individual cells and skin structures could be clearly visualized. Intracellular components and connective tissue structures could be further characterized by tuning the excitation wavelength in the range of 750 nm to 850 nm and by calculation of mean fluorescence lifetimes per pixel and of particular regions of interest. The novel non-invasive imaging system provides 4D (x,y,z,t) optical biopsies with subcellular resolution and offers the possibility to introduce a further optical diagnostic method in dermatology.

  19. Influence of dipolar interactions on the superparamagnetic relaxation time of γ-Fe2O3

    NASA Astrophysics Data System (ADS)

    Labzour, A.; Housni, A.; Limame, K.; Essahlaoui, A.; Sayouri, S.

    2017-03-01

    The influence of dipolar interactions on the Néel superparamagnetic relaxation time, τ, of an assembly of ultrafine ferromagnetic particles (γ-Fe2O3) with uniaxial anisotropy and of different sizes has been widely studied using the Mössbauer technique. These studies, based on different analytical approaches, have shown that τ decreases with increasing interactions between particles. To interpret these results, we propose a model where interaction effects are treated as being due to a constant, external, randomly oriented magnetic field B(Ψ, ϕ). The model is based on the resolution of the Fokker-Planck equation (FPE), generalizes previous calculations, and gives a satisfactory interpretation of the relaxation phenomenon in such systems.
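
    For orientation, the non-interacting limit: the Néel relaxation time τ = τ0 exp(KV/kBT) of an isolated uniaxial particle, which the interaction model of the paper reduces. The attempt time and anisotropy constant below are assumed typical values, not fitted parameters from this work.

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
TAU0 = 1e-9              # s, typical attempt time (assumed)
K_ANIS = 1.0e4           # J/m^3, uniaxial anisotropy constant (assumed)

def neel_tau(diameter_nm, temperature=300.0):
    """Neel relaxation time tau = tau0 * exp(K*V / (kB*T))."""
    v = (np.pi / 6.0) * (diameter_nm * 1e-9) ** 3     # particle volume, m^3
    return TAU0 * np.exp(K_ANIS * v / (KB * temperature))

for d in (8, 10, 12):    # strong size dependence through the exponent
    print(f"{d} nm: tau = {neel_tau(d):.3e} s")
```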

  20. Depth-resolved monitoring of analytes diffusion in ocular tissues

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.

    2007-02-01

    Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied, and their permeability coefficients were calculated. Drug diffusion monitoring was performed as a function of time and as a function of depth. The obtained results suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.

  1. Feasibility of measuring temperature and density fluctuations in air using laser-induced O2 fluorescence

    NASA Technical Reports Server (NTRS)

    Massey, G. A.; Lemon, C. J.

    1984-01-01

    A tunable line-narrowed ArF laser can selectively excite several rotational lines of the Schumann-Runge band system of O2 in air. The resulting ultraviolet fluorescence can be monitored at 90 deg to the laser beam axis, permitting space- and time-resolved observation of density and temperature fluctuations in turbulence. Experiments and calculations show that ±1 K, ±1 percent density, 1 mm³ spatial, and 1 microsecond temporal resolution can be achieved simultaneously under some conditions.

  2. The J = 1 para levels of the v = 0 to 6 np singlet Rydberg series of molecular hydrogen revisited.

    PubMed

    Glass-Maujean, M; Schmoranzer, H; Haar, I; Knie, A; Reiss, P; Ehresmann, A

    2012-04-07

    The energies and the widths of the J = 1 para levels of the v = 0 to 6 Rydberg np singlet series of molecular hydrogen, with absolute intensities of the R(0) and P(2) absorption lines, were measured in a high-resolution synchrotron radiation experiment and calculated through a full ab initio multichannel quantum defect theory approach. On the basis of the agreement between theory and experiment, 31 levels were either reassigned or assigned for the first time.

  3. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  4. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  5. Global warming stops in Altai and Northern Mongolia in 2010-2015.

    NASA Astrophysics Data System (ADS)

    Darin, A.; Kalugin, I.; Maksimov, M.

    2010-03-01

    We studied the cores of bottom sediments of Lake Teletskoe (Mountain Altai) [1] and Lake Telmen (Northern Mongolia) [2]. The method of constructing the forecast includes the following steps: 1) Geochemical analysis of the lake bottom sediment cores with a spatial resolution of 0.1 mm using synchrotron radiation [3]. This corresponds to a time resolution of ~0.2-0.3 year (sedimentation rates are 0.51 mm/year for Lake Teletskoe and 0.64 mm/year for Lake Telmen). 2) Creating a time series of geochemical indicators of climate change. We used the following geochemical proxies: Ti, Br, Rb, Sr, Mo contents and X-ray density. 3) Calibration of transfer functions against regional meteorological data from the last 80-120 years. Regression equations of the form annual T = function(proxy) were calculated. 4) Reconstruction of climatic parameters along the depth of the core. Annual temperature changes for the Altai region (0-3000 years ago) and the Northern Mongolia region (0-2000 years ago) have been reconstructed with a time resolution of ~0.2-0.3 year. 5) A Fourier analysis showed the same frequencies of climate change for both regions. The main periods identified are: 2750, 1500, 1015, 825, 615, 500, 375, 325, 290, 230, 215, 203, 190, 157, 135, 109, 88, 65, 48, 37, 24 and 10 years. The sum of 22 sinusoids correlates with the reconstructed annual temperature with a coefficient of +0.87 (for more than 3000 points). 6) Based on the discovered periodicities, a forecast of environmental change for the period 2010-2050 was calculated. According to our estimates, a sharp fall in annual regional temperature is expected over this period. The study was funded by grant 09-05-13505 from the Russian Foundation for Basic Research and by grant 92 from the Siberian Branch of the Russian Academy of Sciences. [1] I.A. Kalugin et al. Rhythmic fine-grained sediment deposition in Lake Teletskoye... Quaternary International, 136 (2005), 5-13. [2] S.J. Fowell et al. Mid to late Holocene climate evolution of the Lake Telmen Basin... Quaternary Research 59 (2003) 353-363. [3] A. Daryin et al. Use of a scanning XRF analysis on SR beams from VEPP-3 storage...// Nucl. Instrum. and Methods in Physics Research A 543 (2005) 255-258.
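
    A minimal sketch of steps 5 and 6 on synthetic data, not the published reconstruction: dominant periods are identified with an FFT, and a sum-of-sinusoids least-squares fit is extrapolated beyond the record. The periods and series below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(-3000, 0)                       # 3000-year record
true_periods = (1500, 375, 88)                    # a few of the cited periods
signal = sum(np.sin(2 * np.pi * years / p) for p in true_periods)
signal = signal + 0.3 * rng.normal(size=years.size)

# Step 5 analogue: find the strongest spectral lines.
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(years.size, d=1.0)        # cycles per year
top = np.argsort(spec)[-3:]                       # three strongest lines
found = sorted(1.0 / freqs[top])
print("recovered periods (years):", [round(p) for p in found])

# Step 6 analogue: fit sinusoid amplitudes, then extrapolate forward.
cols = [f(2 * np.pi * years / p) for p in found for f in (np.sin, np.cos)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
future = np.arange(0, 40)                         # "2010-2050" analogue
cols_f = [f(2 * np.pi * future / p) for p in found for f in (np.sin, np.cos)]
print("forecast (first 5):", (np.column_stack(cols_f) @ coef)[:5].round(2))
```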

  6. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems, task 1: Ducted propfan analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1990-01-01

    The time-dependent three-dimensional Euler equations of gas dynamics were solved numerically to study the steady compressible transonic flow about ducted propfan propulsion systems. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. An implicit residual smoothing operator was used to aid convergence. Two calculation grids were employed in this study. The first grid utilized an H-type mesh network with a branch cut opening to represent the axisymmetric cowl. The second grid utilized a multiple-block mesh system with a C-type grid about the cowl. The individual blocks were numerically coupled in the Euler solver. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were initially performed for unducted propfans to verify the accuracy of the three-dimensional Euler formulation. The Euler analyses were then applied to the calculation of ducted propfan flows, and predicted results were compared with experimental data for two cases. The three-dimensional Euler analyses displayed exceptional accuracy, although certain parameters were observed to be very sensitive to geometric deflections. Both solution schemes were found to be very robust and demonstrated nearly equal efficiency and accuracy, although it was observed that the multi-block C-grid formulation provided somewhat better resolution of the cowl leading edge region.

  7. CALCULATIONS OF SHUTDOWN DOSE RATE FOR THE TPR SPECTROMETER OF THE HIGH-RESOLUTION NEUTRON SPECTROMETER FOR ITER.

    PubMed

    Wójcik-Gargula, A; Tracz, G; Scholz, M

    2017-12-13

    This work presents the results of calculations performed to predict the neutron-induced activity in structural materials that are being considered for use in the TPR spectrometer, one of the detection systems of the High-Resolution Neutron Spectrometer for ITER. An attempt has been made to estimate the shutdown dose rates in Cuboid #1 and to check whether they satisfy ICRP regulatory requirements for occupational exposure to radiation and ITER nuclear safety regulations for areas with personnel access. The results were obtained by MCNP and FISPACT-II calculations. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. 12 CFR 997.4 - Calculation of the quarterly present-value determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Calculation of the quarterly present-value determination. 997.4 Section 997.4 Banks and Banking FEDERAL HOUSING FINANCE BOARD NON-BANK SYSTEM ENTITIES RESOLUTION FUNDING CORPORATION OBLIGATIONS OF THE BANKS § 997.4 Calculation of the quarterly present-value...

  9. Soft Real-Time PID Control on a VME Computer

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

    microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
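
    A minimal sketch of the loop structure described above, not the flight code: each sampling period, the measured position is compared with the profile value and a new voltage command is produced by a discrete PID update. The gains and the toy stage model are hypothetical.

```python
DT = 125e-6                              # sampling period, s (8 kHz)

class PID:
    """Discrete PID update executed once per sampling period."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * DT
        deriv = (err - self.prev_err) / DT
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid, pos = PID(kp=40.0, ki=500.0, kd=0.0), 0.0
for step in range(8000):                 # one second of 8 kHz control
    target = 1.0                         # ideal position from the profile
    volts = pid.update(target, pos)      # new voltage command
    pos += (volts - pos) * DT / 0.01     # crude first-order stage model

print(f"stage position after 1 s: {pos:.4f}")   # settles near the target
```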

  10. Real-time digital heterodyne interferometer for high resolution plasma density measurements at ISTTOK.

    PubMed

    Marques, T G; Gouveia, A; Pereira, T; Fortunato, J; Carvalho, B B; Sousa, J; Silva, C; Fernandes, H

    2008-10-01

    With the implementation of alternating (ac) discharges at the ISTTOK tokamak, the typical duration of the discharges increased from 35 to 250 ms. This increase created the need for a real-time electron density measurement in order to control the plasma fueling. The diagnostic chosen for the real-time calculation was the microwave interferometer. The ISTTOK microwave interferometer is a heterodyne system with quadrature detection and a probing frequency of 100 GHz (λ0 = 3 mm). In this paper, a low-cost approach to this real-time diagnostic using a digital signal programmable intelligent computer embedded system is presented, which allows the measurement of the phase with 1% fringe accuracy in less than 6 µs. The system increases its accuracy by digitally correcting the offsets of the input signals and making use of a judicious lookup table optimized to correct the nonlinear behavior of the transfer curve. The electron density is determined at a rate of 82 kHz (limited by the analog-to-digital converter), and the data are transmitted every millisecond, although this interval could be much shorter (around 12 µs, since each calculated value is transmitted). In the future, this same system is expected to control plasma actuators, such as the piezoelectric valve of the hydrogen injection system responsible for the plasma fueling.
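
    A minimal sketch of the quadrature phase arithmetic, not the embedded firmware: remove the channel offsets, take the four-quadrant arctangent of the I/Q pair, and unwrap fringes. The synthetic signals below stand in for the detector outputs, and the offset estimate is deliberately crude.

```python
import numpy as np

def phase_from_iq(i_sig, q_sig):
    """Unwrapped interferometer phase from offset-corrected I/Q signals."""
    i0, q0 = i_sig.mean(), q_sig.mean()          # crude offset estimate
    phase = np.arctan2(q_sig - q0, i_sig - i0)   # wrapped to (-pi, pi]
    return np.unwrap(phase)                      # accumulate whole fringes

# Synthetic quadrature signals for a density ramp of ~2.5 fringes.
t = np.linspace(0.0, 1.0, 5000)
true_phase = 2 * np.pi * 2.5 * t**2
i_sig = 0.05 + np.cos(true_phase)                # offsets mimic ADC bias
q_sig = -0.03 + np.sin(true_phase)
est = phase_from_iq(i_sig, q_sig)
print("fringes counted:", (est[-1] - est[0]) / (2 * np.pi))
```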

  11. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    NASA Astrophysics Data System (ADS)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast, relative to the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time-consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% longer, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  12. Integrated optics to improve resolution on multiple configuration

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei

    2015-04-01

    To improve imaging resolution, further technical requirements are proposed for the function and development of multiple-configuration systems. To break through the diffraction limit, smart structures are recommended as the most efficient and economical approach, since they can improve system performance, especially the signal-to-noise ratio and resolution. Integrated optics were considered for a typical multiple configuration and studied by the method of simulation experiments. This methodology can change the traditional design concept and broaden the application space. Our calculations, using the transfer-matrix method for the multiple configuration together with the associated algorithms and full numerical calculations, show the expected beam shaping through the system; the supporting experimental results will be reported in the presentation.

  13. Global Flood Response Using Satellite Rainfall Information Coupled with Land Surface and Routing Models

    NASA Astrophysics Data System (ADS)

    Adler, R. F.; Wu, H.

    2016-12-01

    The Global Flood Monitoring System (GFMS) (http://flood.umd.edu) has been developed and used in recent years to provide real-time flood detection, streamflow estimates, and inundation calculations for most of the globe. The GFMS is driven by satellite-based precipitation, with the accuracy of the flood estimates being primarily dependent on the accuracy of the precipitation analyses and the land surface and routing models used. The routing calculations are done at both 12 km and 1 km resolution. Users of GFMS results include international and national flood response organizations. The devastating floods of October 2015 in South Carolina are analyzed, showing that the GFMS-estimated streamflow was accurate and useful, indicating significant flooding in the upstream basins. Farther downstream, the GFMS underestimates streamflow because of dams that are not accounted for in the model. Other examples are given for Yemen and Somalia and for Sri Lanka and southern India. A forecast flood event associated with a typhoon hitting Taiwan is also examined. One-kilometer resolution inundation mapping from GFMS holds the promise of highly useful information for flood disaster response. The algorithm is briefly described, and examples are shown for recent cases where inundation estimates from optical and Synthetic Aperture Radar (SAR) satellite sensors are available. For a case of significant flooding in Texas in May and June along the Brazos River, the GFMS-calculated streamflow compares favorably with observations. Available Landsat-based (May 28) and MODIS-based (June 2) inundation analyses from the University of Colorado show generally good agreement with the GFMS inundation calculation in most of the area where skies were clear and the optical techniques could be applied. The GFMS provides very useful disaster response information on a timely basis. However, there is still significant room for improvement, including improved precipitation information from NASA's Global Precipitation Measurement (GPM) mission, inclusion of dam algorithms in the routing model, and integration with or assimilation of observed flood extent from satellite optical and SAR sensors.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, Michaël, E-mail: michael.rey@univ-reims.fr; Tyuterev, Vladimir G.; Nikitin, Andrei V.

    Accurate variational high-resolution spectra calculations in the range 0-8000 cm⁻¹ are reported for the first time for monodeuterated methane (¹²CH₃D). Global calculations were performed by using recent ab initio surfaces for line positions and line intensities derived from the main isotopologue ¹²CH₄. Calculation of excited vibrational levels and high-J rovibrational states is described by using the normal mode Eckart-Watson Hamiltonian combined with irreducible tensor formalism and appropriate numerical procedures for solving the quantum nuclear motion problem. The isotopic H→D substitution is studied in detail by means of symmetry and nonlinear normal mode coordinate transformations. Theoretical spectra predictions are given up to J = 25 and compared with the HITRAN 2012 database, representing a compilation of line lists derived from analyses of experimental spectra. The results are in very good agreement with available empirical data, suggesting that a large number of yet unassigned lines in observed spectra could be identified and modeled using the present approach.

  15. Conformational distribution of baclofen analogues by 1H and 13C NMR analysis and ab initio HF MO STO-3G or STO-3G* calculations

    NASA Astrophysics Data System (ADS)

    Vaccher, Claude; Berthelot, Pascal; Debaert, Michel; Vermeersch, Gaston; Guyon, René; Pirard, Bernard; Vercauteren, Daniel P.; Dory, Magdalena; Evrard, Guy; Durant, François

    1993-12-01

    The conformations of 3-(substituted furan-2-yl)- and 3-(substituted thien-2-yl)-γ-aminobutyric acids 1-9 in solution (D₂O) are estimated from high-resolution (300 MHz) 1H NMR coupling data. Conformations and populations of conformers are calculated by means of a modified Karplus-like relationship for the vicinal coupling constants. The results are compared with X-ray crystallographic investigations (torsion angles) and ab initio HF MO STO-3G or STO-3G* calculations. 1H NMR spectral analysis shows that 1-9 in solution retain the preferred g- conformation around the C3-C4 bond, as found in the solid state, while a partial rotation is set up around the C2-C3 bond: the conformations about C2-C3 are all highly populated in solution. The 13C spin-lattice relaxation times are also discussed.

  16. Efficient calculation of beyond RPA correlation energies in the dielectric matrix formalism

    NASA Astrophysics Data System (ADS)

    Beuerle, Matthias; Graf, Daniel; Schurkus, Henry F.; Ochsenfeld, Christian

    2018-05-01

    We present efficient methods to calculate beyond random phase approximation (RPA) correlation energies for molecular systems with up to 500 atoms. To reduce the computational cost, we employ the resolution-of-the-identity and a double-Laplace transform of the non-interacting polarization propagator in conjunction with an atomic orbital formalism. Further improvements are achieved using integral screening and the introduction of Cholesky decomposed densities. Our methods are applicable to the dielectric matrix formalism of RPA including second-order screened exchange (RPA-SOSEX), the RPA electron-hole time-dependent Hartree-Fock (RPA-eh-TDHF) approximation, and RPA renormalized perturbation theory using an approximate exchange kernel (RPA-AXK). We give an application of our methodology by presenting RPA-SOSEX benchmark results for the L7 test set of large, dispersion dominated molecules, yielding a mean absolute error below 1 kcal/mol. The present work enables calculating beyond RPA correlation energies for significantly larger molecules than possible to date, thereby extending the applicability of these methods to a wider range of chemical systems.
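
    For orientation, the dielectric-matrix (adiabatic-connection fluctuation-dissipation) form of the RPA correlation energy on which these methods build can be written as below; this is the standard expression, not a formula quoted from the paper, with χ₀(iω) the non-interacting polarization propagator at imaginary frequency and v the Coulomb operator (the SOSEX, eh-TDHF, and AXK variants modify the integrand):

    ```latex
    E_c^{\mathrm{RPA}}
      = \frac{1}{2\pi}\int_0^{\infty}\!\mathrm{d}\omega\;
        \operatorname{Tr}\left[\ln\!\bigl(1-\chi_0(\mathrm{i}\omega)\,v\bigr)
        + \chi_0(\mathrm{i}\omega)\,v\right]
    ```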

  17. System Design of One-chip Wave Particle Interaction Analyzer for SCOPE mission.

    NASA Astrophysics Data System (ADS)

    Fukuhara, Hajime; Ueda, Yoshikatsu; Kojima, Hiro; Yamakawa, Hiroshi

    In past science spacecraft such as GEOTAIL, electric and magnetic field waveforms and the velocity distributions of energetic electrons and ions are usually captured by separate sensors. Plasma wave-particle interactions are then analyzed from these separate data sets, and the analysis is sometimes limited by differences in time resolution and by data loss in the regions of interest. The One-chip Wave Particle Interaction Analyzer (OWPIA) conducts direct quantitative observations of wave-particle interactions through on-board calculation of 'E dot v'. This new instrument is capable of using all plasma waveform data and electron particle information. In the OWPIA system, the digital observation data must be calibrated and transformed into a common coordinate system. All necessary calculations are processed in a Field Programmable Gate Array (FPGA). In our study, we introduce the basic concept of the OWPIA system and an optimization method for each calculation function implemented in the FPGA. We also discuss the processing speed, the FPGA utilization efficiency, and the total power consumption.
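
    The on-board 'E dot v' quantity reduces to a simple inner product per particle: the instantaneous energy transfer rate between the wave field and a charge is q E·v, accumulated over detected particles. The sketch below is an illustrative host-side version, not the FPGA implementation; names and units are assumptions.

    ```python
    import numpy as np

    Q_E = -1.602e-19  # electron charge [C]

    def e_dot_v(e_field, velocity, charge=Q_E):
        """Instantaneous work rate q E.v [W] for one particle (SI 3-vectors);
        the sign tells whether the particle gains or loses energy."""
        return charge * float(np.dot(e_field, velocity))

    def accumulated_exchange(e_field, velocities, charge=Q_E):
        """Sum of q E.v over a burst of particles sharing one field sample."""
        return sum(e_dot_v(e_field, v, charge) for v in velocities)
    ```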

  18. Active x-ray optics for Generation-X, the next high resolution x-ray observatory

    NASA Astrophysics Data System (ADS)

    Elvis, Martin; Brissenden, R. J.; Fabbiano, G.; Schwartz, D. A.; Reid, P.; Podgorski, W.; Eisenhower, M.; Juda, M.; Phillips, J.; Cohen, L.; Wolk, S.

    2006-06-01

    X-rays provide one of the few bands through which we can study the epoch of reionization, when the first galaxies, black holes, and stars were born. Reaching the sensitivity required to image these first discrete objects in the universe demands a major advance in X-ray optics. Generation-X (Gen-X) is currently the only X-ray astronomy mission concept that addresses this goal. Gen-X aims to improve substantially on the Chandra angular resolution and to do so with substantially larger effective area. These two goals can only be met if a mirror technology can be developed that yields high angular resolution at much lower mass per unit area than the Chandra optics, matching that of Constellation-X (Con-X). We describe an approach to this goal based on active X-ray optics that correct the mid-frequency departures from an ideal Wolter optic on-orbit. We concentrate on the problems of sensing figure errors, calculating the corrections required, and applying those corrections. The time needed to make this in-flight calibration is reasonable. A laboratory version of these optics has already been developed by others and is successfully operating at synchrotron light sources. With only a moderate investment in these optics, the goals of Gen-X resolution can be realized.

  19. High-resolution measurement of a bottlenose dolphin's (Tursiops truncatus) biosonar transmission beam pattern in the horizontal plane.

    PubMed

    Finneran, James J; Branstetter, Brian K; Houser, Dorian S; Moore, Patrick W; Mulsow, Jason; Martin, Cameron; Perisho, Shaun

    2014-10-01

    Previous measurements of toothed whale echolocation transmission beam patterns have utilized few hydrophones and have therefore been limited to fine angular resolution only near the principal axis or poor resolution over larger azimuthal ranges. In this study, a circular, horizontal planar array of 35 hydrophones was used to measure a dolphin's transmission beam pattern with 5° to 10° resolution at azimuths from -150° to +150°. Beam patterns and directivity indices were calculated from both the peak-peak sound pressure and the energy flux density. The emitted pulse became smaller in amplitude and progressively distorted as it was recorded farther off the principal axis. Beyond ±30° to 40°, the off-axis signal consisted of two distinct pulses whose difference in time of arrival increased with the absolute value of the azimuthal angle. A simple model suggests that the second pulse is best explained as a reflection from internal structures in the dolphin's head, and does not implicate the use of a second sound source. Click energy was also more directional at the higher source levels utilized at longer ranges, where the center frequency was elevated compared to that of the lower amplitude clicks used at shorter range.
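
    A simple way to turn such a sampled beam pattern into a horizontal-plane directivity estimate is sketched below; it squares the normalized peak-peak pressures to get intensity and averages over the measured azimuth sector only, so it is an illustrative 2-D simplification, not the paper's exact procedure (which also used energy flux density and the full geometry).

    ```python
    import numpy as np

    def directivity_index_2d(theta_deg, p_pp):
        """DI [dB] from azimuth angles and peak-peak pressures, with the
        pattern normalized to its on-axis maximum and averaged over the
        measured sector."""
        b = (np.asarray(p_pp, float) / np.max(p_pp)) ** 2   # normalized intensity
        theta = np.radians(theta_deg)
        mean_b = np.trapz(b, theta) / (theta[-1] - theta[0])  # sector average
        return 10.0 * np.log10(1.0 / mean_b)
    ```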

  20. Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument

    NASA Astrophysics Data System (ADS)

    Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.

    2017-12-01

    Clouds play an important role in the climate system and are also a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 μm wavelength range. To obtain all-sky information, the camera is located on top of a frame looking downward onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems use different software for calculating fractional cloud coverage from images. Our study mainly analyzes the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results of the measurement accuracy of the IRCCAM compared to the visible cameras indicate that 78% of the data are within ±1 okta and 93% within ±2 oktas. An uncertainty of 1-2 oktas corresponds to the measurement uncertainty of human observers. Therefore, the IRCCAM shows performance in the detection of cloud coverage similar to the visible cameras and human observers, with the advantage that continuous measurements with high temporal resolution are possible.
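
    The okta-agreement statistic quoted above can be computed as follows; a minimal sketch assuming fractional cloud cover in [0, 1] from each camera, with array names as illustrative placeholders.

    ```python
    import numpy as np

    def okta(cloud_fraction):
        """Convert fractional cloud cover (0..1) to oktas (0..8)."""
        return np.rint(np.asarray(cloud_fraction) * 8).astype(int)

    def agreement_within(frac_a, frac_b, tol_oktas):
        """Share of coincident samples whose okta values agree within tol."""
        return float(np.mean(np.abs(okta(frac_a) - okta(frac_b)) <= tol_oktas))
    ```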

  1. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to image neural activities fast and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connection applications. A state-of-the-art LSFM imaging system can record the neuronal activities of the entire brain of a small animal, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during the recording process, and registering the acquired large-scale images with conventional methods is time consuming. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a Graphics Processing Unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. The present image was then registered to the previous image stack, which was considered as the reference. A fast Fourier transform (FFT) algorithm was used to calculate the shift of the image stack. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration can be completed in 550 ms for a full high-resolution brain image. Our approach also has potential for other dynamic image registration tasks in biomedical applications.
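
    The FFT-based shift estimation described above is essentially phase correlation; a minimal NumPy sketch of what the GPU kernels would compute is shown below, with names as illustrative assumptions.

    ```python
    import numpy as np

    def fft_shift_estimate(reference, moving):
        """Integer translation of `moving` relative to `reference` (any ndim)."""
        cross = np.fft.fftn(reference) * np.conj(np.fft.fftn(moving))
        cross /= np.abs(cross) + 1e-12            # keep phase information only
        corr = np.fft.ifftn(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # peaks past the midpoint of an axis correspond to negative shifts
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
    ```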

  2. Evaluation of myocardial defect detection between parallel-hole and fan-beam SPECT using the Hotelling trace

    NASA Astrophysics Data System (ADS)

    Wollenweber, S. D.; Tsui, B. M. W.; Lalush, D. S.; Frey, E. C.; Gullberg, G. T.

    1998-08-01

    The objective of this study was to implement the Hotelling trace (HT) to evaluate the potential increase in defect detection in myocardial SPECT using high-resolution fan-beam (HRF) versus parallel-hole (HRP) collimation and compare results to a previously reported human observer study (G.K. Gregoriou et al., ibid., vol. 42, p. 1267-75, 1995). Projection data from the 3D MCAT torso phantom were simulated including the effects of attenuation, collimator-detector response blurring and scatter. Poisson noise fluctuations were then simulated. The HRP and HRF collimators had the same spatial resolution at 20 cm. The total counts in the projection data sets were proportional to the detection efficiencies of the collimators and on the order of that found in clinical Tc-99m studies. In six left-ventricular defect locations, the HT found for HRF was superior to that for HRP collimation. For HRF collimation, the HT was calculated for reconstructed images using 64×64, 128×128 and 192×192 grid sizes. The results demonstrate substantial improvement in myocardial defect detection when the grid size was increased from 64×64 to 128×128 and slight improvement from 128×128 to 192×192. Also, the performance of the Hotelling observer in terms of the HT at the different grid sizes correlates at better than 0.95 to that found in human observers in a previously reported observer experiment and ROC study.
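
    The Hotelling trace itself is a standard linear-observer figure of merit; a minimal sketch is given below, assuming two matrices of observer feature vectors (defect-present vs. defect-absent) — the feature choice and scatter estimators are illustrative, not the study's exact pipeline.

    ```python
    import numpy as np

    def hotelling_trace(x_present, x_absent):
        """J = tr(S2^-1 S1) for two (n_samples, n_features) feature matrices:
        S1 is the between-class scatter, S2 the mean within-class scatter."""
        dm = (x_present.mean(0) - x_absent.mean(0))[:, None]
        s1 = dm @ dm.T
        s2 = 0.5 * (np.cov(x_present, rowvar=False) +
                    np.cov(x_absent, rowvar=False))
        return float(np.trace(np.linalg.solve(s2, s1)))
    ```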

  3. Rat brain imaging using full field optical coherence microscopy with short multimode fiber probe

    NASA Astrophysics Data System (ADS)

    Sato, Manabu; Saito, Daisuke; Kurotani, Reiko; Abe, Hiroyuki; Kawauchi, Satoko; Sato, Shunichi; Nishidate, Izumi

    2017-02-01

    We demonstrated FF-OCM (full-field optical coherence microscopy) using an ultrathin forward-imaging SMMF (short multimode fiber) probe of 50 μm core diameter, 125 μm cladding diameter, and 7.4 mm length, a typical graded-index multimode fiber for optical communications. The axial resolution was measured to be 2.20 μm, close to the calculated axial resolution of 2.06 μm. The lateral resolution was evaluated to be 4.38 μm using a test pattern. Taking the FWHM of the contrast as the DOF (depth of focus), the DOF of the signal images is 36 μm and that of the OCM images is 66 μm. The contrast of the OCT images was 6.1 times higher than that of the signal images due to the coherence gate. After euthanasia, the rat brain was resected and cut 2.6 mm caudal to bregma. With the SMMF in contact with the primary somatosensory cortex and the agranular insular cortex of the ex vivo brain, OCM images were acquired 100 times at 2 μm steps. 3D OCM images of the brain were measured, and internal structure information was obtained. The feasibility of an SMMF as an ultrathin forward-imaging probe in full-field OCM has been demonstrated.
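
    The calculated axial resolution quoted above can be checked with the standard coherence-gate formula for a Gaussian source, dz = (2 ln 2 / π) λ0² / (n Δλ); the center wavelength and bandwidth below are illustrative guesses, not values from the paper.

    ```python
    import numpy as np

    def axial_resolution(lambda0, dlambda, n=1.0):
        """Coherence-gate axial resolution for a Gaussian source [m]."""
        return (2 * np.log(2) / np.pi) * lambda0**2 / (n * dlambda)

    # e.g. an 840 nm source with 150 nm bandwidth gives ~2.1 um in air:
    print(axial_resolution(840e-9, 150e-9) * 1e6)
    ```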

  4. Fast range estimation based on active range-gated imaging for coastal surveillance

    NASA Astrophysics Data System (ADS)

    Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang

    2012-11-01

    Coastal surveillance is important for search and rescue, detection of illegal immigration, harbor security, and similar tasks. Accurate range estimation is critical for precisely detecting targets. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night and in the absence of moonlight. Generally, before the target can be detected, the gate delay time must be swept until the target is captured. The range-gated imaging sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, we can obtain its coarse range from the imaging geometry/projective transform. Then, the sensor switches to gate-viewing mode; using microsecond laser pulses and a matched sensor gate width, we can obtain the range of targets from at least two consecutive images with trapezoidal range-intensity profiles. Based on the first step, we can calculate a rough range and quickly set the delay time at which the target is detected. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with a reduction in imaging data processing. Through these two steps, we can quickly obtain the distance between the object and the sensor.
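
    The delay-time/range bookkeeping behind range gating is a simple round-trip relation: a gate delay τ corresponds to a range of cτ/2. A minimal illustrative helper, not the sensor's actual processing chain:

    ```python
    C = 2.998e8  # speed of light [m/s]

    def range_from_delay(tau_s):
        """Round-trip gating: a delay tau corresponds to range c*tau/2 [m]."""
        return C * tau_s / 2.0

    def delay_from_range(r_m):
        return 2.0 * r_m / C

    print(range_from_delay(10e-6))  # a 10 us delay gates a slice near 1.5 km
    ```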

  5. Analysis of Ultra High Resolution Sea Surface Temperature Level 4 Datasets

    NASA Technical Reports Server (NTRS)

    Wagner, Grant

    2011-01-01

    Sea surface temperature (SST) studies are often focused on improving accuracy, or on understanding and quantifying uncertainties in the measurement, as SST is a leading indicator of climate change and represents the longest time series of any ocean variable observed from space. Over the past several decades, SST has been studied with the use of satellite data. This allows a larger area to be studied with much more frequent measurements than direct measurements collected aboard ships or buoys. The Group for High Resolution Sea Surface Temperature (GHRSST) is an international project that distributes satellite-derived sea surface temperature (SST) data from multiple platforms and sensors. The goal of the project is to distribute these SSTs for operational uses such as ocean model assimilation and decision support applications, as well as to support fundamental SST research and climate studies. Examples of near-real-time applications include hurricane and fisheries studies and numerical weather forecasting. The JPL group has produced a new 1 km daily global Level 4 SST product, the Multiscale Ultrahigh Resolution (MUR) analysis, that blends SST data from three distinct NASA radiometers: the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Very High Resolution Radiometer (AVHRR), and the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). This new product requires further validation and accuracy assessment, especially in coastal regions. We examined the accuracy of the new MUR SST product by comparing the high-resolution version and a lower-resolution version that has been smoothed to 19 km (but still gridded at 1 km). Both versions were compared to the same data set of in situ buoy temperature measurements, with a focus on study regions of the oceans surrounding North and Central America as well as two smaller regions around the Gulf Stream and the California coast. Ocean fronts exhibit high temperature gradients (Roden, 1976), and thus satellite SST data can be used in the detection of these fronts. In this case, accuracy is less of a concern because the primary focus is on the spatial derivative of SST. We calculated the gradients for both versions of the MUR data set and performed statistical comparisons focusing on the same regions.
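
    The front-detection gradient mentioned above reduces to a gradient-magnitude calculation on the gridded field; a minimal sketch with central differences, where the 1 km spacing is taken from the product description and the function name is illustrative:

    ```python
    import numpy as np

    def sst_gradient_magnitude(sst, dx_m=1000.0):
        """|grad SST| [K/m] for a 2-D SST grid with spacing dx_m."""
        dTdy, dTdx = np.gradient(sst, dx_m)
        return np.hypot(dTdx, dTdy)
    ```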

  6. High resolution X-ray spectra of solar flares. V - Interpretation of inner-shell transitions in Fe XX-Fe XXIII

    NASA Technical Reports Server (NTRS)

    Doschek, G. A.; Feldman, U.; Cowan, R. D.

    1981-01-01

    The paper examines high-resolution solar flare iron line spectra recorded between 1.82 and 1.97 Å by a spectrometer flown by the Naval Research Laboratory on an Air Force spacecraft launched on 1979 February 24. The emission line spectrum is due to inner-shell transitions in the ions Fe XX-Fe XXV. Using theoretical spectra and calculations of line intensities obtained by methods discussed by Merts, Cowan, and Magee (1976), electron temperatures as a function of time for two large X-class flares are derived. These temperatures are deduced from intensities of lines of Fe XXII, Fe XXIII, and Fe XXIV. The determination of the differential emission measure between about 12 million and 20 million K using these temperatures is considered. The possibility of determining electron densities in flare and tokamak plasmas using the inner-shell spectra of Fe XXI and Fe XX is discussed.

  7. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. This method requires relatively small reference angles compared with the conventional method, which uses a phase shift spanning three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm over a number of pixels equal to the period of the interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications such as color holographic interferometry.
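
    The generic N-step phase-shifting estimator underlying such algorithms is a discrete synchronous detection over N samples with equal phase steps of 2π/N; the sketch below is that textbook form, assuming ideal equal steps, not the authors' exact extended algorithm.

    ```python
    import numpy as np

    def n_step_phase(samples):
        """Wrapped object phase from N intensity samples with phase steps
        2*pi*k/N (k = 0..N-1); samples: array of shape (N, ...)."""
        I = np.asarray(samples, dtype=float)
        N = I.shape[0]
        k = np.arange(N).reshape((N,) + (1,) * (I.ndim - 1))
        num = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
        den = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
        return np.arctan2(-num, den)
    ```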

  8. Holographic line field en-face OCT with digital adaptive optics in the retina in vivo.

    PubMed

    Ginner, Laurin; Schmoll, Tilman; Kumar, Abhishek; Salas, Matthias; Pricoupenko, Nastassia; Wurster, Lara M; Leitgeb, Rainer A

    2018-02-01

    We demonstrate a high-resolution line field en-face time domain optical coherence tomography (OCT) system using an off-axis holography configuration. Line field en-face OCT produces high-speed en-face images at rates of up to 100 Hz. The high frame rate favors good phase stability across the lateral field-of-view, which is indispensable for digital adaptive optics (DAO). Human retinal structures are acquired in vivo with a broadband light source at 840 nm and line rates of 10 kHz to 100 kHz. Structures of different retinal layers, such as photoreceptors, capillaries, and nerve fibers, are visualized with high lateral resolutions of 2.8 µm and 5.5 µm. Subaperture-based DAO is successfully applied to increase the visibility of cone photoreceptors and nerve fibers. Furthermore, en-face Doppler OCT maps are generated by calculating the differential phase shifts between recorded lines.
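
    The Doppler step mentioned above maps differential phase to axial velocity through the standard OCT Doppler relation v = Δφ λ0 / (4π n Δt); a minimal sketch, using the stated 840 nm source and an illustrative tissue refractive index:

    ```python
    import numpy as np

    def doppler_velocity(line_a, line_b, dt, lambda0=840e-9, n=1.38):
        """Axial velocity [m/s] from the differential phase of two complex
        reconstructed lines acquired dt seconds apart."""
        dphi = np.angle(line_b * np.conj(line_a))   # wrapped phase difference
        return dphi * lambda0 / (4 * np.pi * n * dt)
    ```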

  9. Evaluating galactic habitability using high-resolution cosmological simulations of galaxy formation

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam

    2017-01-01

    We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which has sufficient metallicity to form terrestrial planets without being subject to hazardous radiation. These simulations allow us to make substantial progress in mapping out the asymmetric three-dimensional GHZ and its time evolution for the Milky Way (MW) and Triangulum (M33) galaxies, as opposed to works that generally assume an azimuthally symmetric GHZ. Applying typical habitability metrics to MW and M33, we find that while a large number of habitable planets exist as close as a few kiloparsecs from the galactic centre, the probability of individual planetary systems being habitable rises as one approaches the edge of the stellar disc. Tidal streams and satellite galaxies also appear to be fertile grounds for habitable planet formation. In short, we find that both galaxies arrive at similar GHZs by different evolutionary paths, as measured by the first and third quartiles of surviving biospheres. For the MW, this interquartile range begins as a narrow band at large radii, expanding to encompass much of the Galaxy at intermediate times before settling at a range of 2-13 kpc. In the case of M33, the opposite behaviour occurs - the initial and final interquartile ranges are quite similar, showing gradual evolution. This suggests that Galaxy assembly history strongly influences the time evolution of the GHZ, which will affect the relative time lag between biospheres in different galactic locations. We end by noting the caveats involved in such studies and demonstrate that high-resolution cosmological simulations will play a vital role in understanding habitability on galactic scales, provided that these simulations accurately resolve chemical evolution.

  10. A multiyear, global gridded fossil fuel CO2 emission data product: Evaluation and analysis of results

    NASA Astrophysics Data System (ADS)

    Asefi-Najafabady, S.; Rayner, P. J.; Gurney, K. R.; McRobert, A.; Song, Y.; Coltin, K.; Huang, J.; Elvidge, C.; Baugh, K.

    2014-09-01

    High-resolution, global quantification of fossil fuel CO2 emissions is emerging as a critical need in carbon cycle science and climate policy. We build upon a previously developed fossil fuel data assimilation system (FFDAS) for estimating global high-resolution fossil fuel CO2 emissions. We have improved the underlying observationally based data sources, expanded the approach through treatment of separate emitting sectors including a new pointwise database of global power plants, and extended the results to cover a 1997 to 2010 time series at a spatial resolution of 0.1°. Long-term trend analysis of the resulting global emissions shows subnational spatial structure in large active economies such as the United States, China, and India. These three countries, in particular, show different long-term trends, and exploration of the trends in nighttime lights and population reveals a decoupling of population and emissions at the subnational level. Analysis of shorter-term variations reveals the impact of the 2008-2009 global financial crisis, with widespread negative emission anomalies across the U.S. and Europe. We have used a center of mass (CM) calculation as a compact metric to express the time evolution of spatial patterns in fossil fuel CO2 emissions. The global emission CM has moved toward the east and somewhat south between 1997 and 2010, driven by the increase in emissions in China and South Asia over this time period. Analysis at the level of individual countries reveals per capita CO2 emission migration in both Russia and India. The per capita emission CM holds potential as a way to succinctly analyze subnational shifts in carbon intensity over time. Uncertainties are generally lower than in the previous version of FFDAS, due mainly to an improved nightlight data set.
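
    The center-of-mass metric is simply the emission-weighted mean coordinate of the gridded field, tracked per year; the sketch below assumes a regular lat/lon grid and, for brevity, ignores dateline wrap-around and cell-area weighting, so it is illustrative rather than the paper's exact computation.

    ```python
    import numpy as np

    def emission_center_of_mass(emissions, lats, lons):
        """Emission-weighted mean (lat, lon) of a (nlat, nlon) flux grid."""
        w = emissions / emissions.sum()
        lat_cm = float((w.sum(axis=1) * lats).sum())
        lon_cm = float((w.sum(axis=0) * lons).sum())
        return lat_cm, lon_cm
    ```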

  11. A general protocol of ultra-high resolution MR angiography to image the cerebro-vasculature in 6 different rats strains at high field.

    PubMed

    Pastor, Géraldine; Jiménez-González, María; Plaza-García, Sandra; Beraza, Marta; Padro, Daniel; Ramos-Cabrer, Pedro; Reese, Torsten

    2017-09-01

    Differences in the cerebro-vasculature among strains, as well as among individual animals, might explain variability in animal models; thus, a non-invasive method tailored to image cerebral vessels of interest with a high signal-to-noise ratio is required. We describe a new general protocol of three-dimensional time-of-flight magnetic resonance angiography to visualize non-invasively the cerebral vasculature in 6 different rat strains. Flow-compensated angiograms of Sprague Dawley, Wistar Kyoto, Lister Hooded, Long Evans, Fisher 344 and Spontaneously Hypertensive Rat strains were obtained without the use of contrast agents. At 11.7 T, using a repetition time of 60 ms, an isotropic resolution of up to 62 μm was achieved; total imaging time was 98 min for a 3D data set. The visualization of the cerebral arteries was improved by removing extra-cranial vessels prior to the calculation of the maximum intensity projection used to obtain the angiograms. Ultimately, we demonstrate that the newly implemented method is also suitable for obtaining angiograms following middle cerebral artery occlusion, despite the presence of intense vasogenic edema 24 h after reperfusion. The careful selection of the excitation profile and repetition time at a higher static magnetic field allowed an increase in spatial resolution to reliably detect the hypothalamic artery, the anterior choroidal artery, and arterial branches of the peri-amygdaloid complex and the optic nerve in six different rat strains. MR angiography without contrast agent can be utilized to study cerebro-vascular abnormalities in various animal models. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
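
    The mask-then-project step described above is a standard maximum intensity projection (MIP); a minimal sketch, with the axis choice and mask as illustrative assumptions:

    ```python
    import numpy as np

    def mip(volume, brain_mask, axis=0):
        """Maximum intensity projection of a 3-D TOF volume after zeroing
        voxels outside the brain mask (extra-cranial vessels removed)."""
        return np.where(brain_mask, volume, 0.0).max(axis=axis)
    ```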

  12. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glitzner, M; Lagendijk, J; Raaymakers, B

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm)³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over the 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on these synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution, and the displacement information is preserved even when halving the resolution. This can be employed to greatly reduce image acquisition times for interventional applications in real time. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.

  13. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands.

    PubMed

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J; Roberts, Dar A

    2017-04-15

    This research addresses the question as to whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculating a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for the calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure. Copyright © 2017 Elsevier B.V. All rights reserved.
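
    The two averaging orders compared above differ only in where the mean is taken; a minimal sketch with illustrative array names:

    ```python
    import numpy as np

    def ndvi(red, nir):
        return (nir - red) / (nir + red)

    def mean_ndvi_per_pixel(red_px, nir_px):
        """Method 1: average the per-pixel NDVI values within the feature."""
        return float(ndvi(np.asarray(red_px, float), np.asarray(nir_px, float)).mean())

    def ndvi_of_means(red_px, nir_px):
        """Method 2: NDVI of the mean red and mean NIR over the feature."""
        return float(ndvi(np.mean(red_px), np.mean(nir_px)))
    ```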

  14. Coincidental Impact of Transcatheter Patent Foramen Ovale Closure on Migraine with and without Aura - A Comprehensive Meta-Analysis.

    PubMed

    Kanwar, Siddak M; Noheria, Amit; DeSimone, Christopher V; Rabinstein, Alejandro A; Asirvatham, Samuel J

    2016-03-01

    We analyzed the literature to assess the coincidental impact on migraines of transcatheter patent foramen ovale (PFO) closure performed for secondary stroke prevention. We searched Medline, EMBASE, and the Cochrane database for studies published up to August 2013. We included English-language studies that provided information on complete resolution or improvement in migraine headaches following PFO closure. Two study authors identified 375 original articles and both independently reviewed 32 relevant manuscripts. Data including study methodology, inclusion criteria, PFO closure and migraine outcomes were extracted manually from all eligible studies. Pooled odds (and probabilities) of resolution or improvement of migraine headaches were calculated using random-effects models. Twenty studies were analyzed. Most were uncontrolled studies that included a small number of patients with cryptogenic stroke who had undergone PFO closure and had variable durations of follow-up. The probability of complete resolution of migraine with PFO closure (18 studies, 917 patients) was 0.46 (95% confidence interval 0.39, 0.53) and of any improvement in migraine (17 studies, 881 patients) was 0.78 (0.74, 0.82). There was evidence of publication bias in studies reporting on improvement in migraines (Begg's p=0.002), but not in studies on complete resolution of migraine (p=0.3). In patients with aura, the probability of complete resolution of migraine post-PFO closure was 0.54 (0.43, 0.65), and in those without aura, complete resolution occurred in 0.39 (0.29, 0.51). Among patients with unexplained stroke and migraine undergoing transcatheter PFO closure, resolution of headaches occurred in a majority of patients with aura and in a smaller proportion of patients without aura.

  15. Evaluating the impact of lower resolutions of digital elevation model on rainfall-runoff modeling for ungauged catchments.

    PubMed

    Ghumman, Abul Razzaq; Al-Salamah, Ibrahim Saleh; AlSaleem, Saleem Saleh; Haider, Husnain

    2017-02-01

    The geomorphological instantaneous unit hydrograph (GIUH) usually uses geomorphologic parameters of a catchment estimated from a digital elevation model (DEM) for rainfall-runoff modeling of ungauged watersheds with limited data. Higher resolutions (e.g., 5 or 10 m) of DEM play an important role in the accuracy of rainfall-runoff models; however, such resolutions are expensive to obtain and require much greater effort and time for the preparation of inputs. In this research, a modeling framework is developed to evaluate the impact of lower resolutions (i.e., 30 and 90 m) of DEM on the accuracy of the Clark GIUH model. Observed rainfall-runoff data for a 202-km² catchment in a semiarid region were used to develop direct runoff hydrographs for nine rainfall events. A geographical information system was used to process both DEMs. Model accuracy and errors were estimated by comparing the model results with the observed data. The study found (i) high model efficiencies, greater than 90%, for both resolutions, and (ii) that the efficiency of the Clark GIUH model does not significantly increase when enhancing the resolution of the DEM from 90 to 30 m. Thus, it is feasible to use lower resolutions (i.e., 90 m) of DEM in the estimation of peak runoff in ungauged catchments with relatively less effort. Through sensitivity analysis (Monte Carlo simulations), the kinematic wave parameter and the stream length ratio are found to be the most significant parameters in velocity and peak flow estimations, respectively; thus, they need to be carefully estimated for the calculation of direct runoff in ungauged watersheds using the Clark GIUH model.
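
    The abstract does not name its efficiency measure; Nash-Sutcliffe efficiency (NSE) is the usual choice for rainfall-runoff models and is sketched below as an assumption, not as the study's confirmed metric.

    ```python
    import numpy as np

    def nse(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
        o, s = np.asarray(observed, float), np.asarray(simulated, float)
        return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
    ```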

  16. Mapping spatial patterns of stream power and channel change along a gravel-bed river in northern Yellowstone

    NASA Astrophysics Data System (ADS)

    Lea, Devin M.; Legleiter, Carl J.

    2016-01-01

    Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing or field surveys. This study sought to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8-km reach. Aerial photographs from 1994 to 2012 and ground-based surveys were used to develop a locational probability map and morphologic sediment budget to assess lateral channel mobility and changes in net sediment flux. A drainage area-to-discharge relationship and DEM developed from LiDAR data were used to obtain the discharge and slope values needed to calculate stream power. Local and lagged relationships between mean stream power gradient at median peak discharge and volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of sediment sources and sinks. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volumetric change in each time increment. Collectively, we refer to these methods as the stream power gradient (SPG) framework. The results of this study were compromised by methodological limitations of the SPG framework and revealed some complications likely to arise when applying this framework to small, wandering, gravel-bed rivers. Correlations between stream power gradients and sediment flux were generally weak, highlighting the inability of relatively simple statistical approaches to link sub-budget cell-scale sediment dynamics to larger-scale driving forces such as stream power gradients. Improving the moderate spatial resolution techniques used in this study and acquiring very-high resolution data from recently developed methods in fluvial remote sensing could help improve understanding of the spatial organization of stream power, sediment transport, and channel change in dynamic natural rivers.
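
    The core quantities in such an analysis follow directly from the discharge and slope data: total stream power per unit channel length is Ω = ρ g Q S, and the stream power gradient is its downstream derivative. A minimal sketch with illustrative names and node spacing:

    ```python
    import numpy as np

    RHO_W, G = 1000.0, 9.81  # water density [kg/m^3], gravity [m/s^2]

    def stream_power(Q, S):
        """Total stream power per unit channel length [W/m] from discharge
        Q [m^3/s] and channel slope S [-]."""
        return RHO_W * G * np.asarray(Q, float) * np.asarray(S, float)

    def stream_power_gradient(Q, S, dx_m):
        """Downstream gradient of stream power [W/m^2] at node spacing dx_m."""
        return np.gradient(stream_power(Q, S), dx_m)
    ```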

  17. Mapping spatial patterns of stream power and channel change along a gravel-bed river in northern Yellowstone

    NASA Astrophysics Data System (ADS)

    Lea, Devin M.

    Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing or field surveys. This study used remote sensing and GIS tools along with field data to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8 km reach. Aerial photographs from 1994-2012 and cross-section surveys were used to develop a locational probability map and morphologic sediment budget to assess lateral channel mobility and changes in net sediment flux. A drainage area-to-discharge relationship and digital elevation model (DEM) developed from light detection and ranging (LiDAR) data were used to obtain the discharge and slope values needed to calculate stream power. Local and lagged relationships between mean stream power gradient at median peak discharge and volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of sediment sources and sinks. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volumetric change in each time increment. Results indicated a lack of strong correlation between stream power gradients and sediment response, highlighting the geomorphic complexity of Soda Butte Creek and the inability of relatively simple statistical approaches to link sub-budget cell-scale sediment dynamics to larger-scale driving forces such as stream power gradients. Improving the moderate spatial resolution techniques used in this study and acquiring very-high resolution data from recently developed methods in fluvial remote sensing could help improve understanding of the spatial organization of stream power, sediment transport, and channel change in dynamic natural rivers.

  18. TH-AB-209-08: Next Generation Dedicated 3D Breast Imaging with XACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Chen, J; Samant, P

    Purpose: Exposure to radiation increases the risk of cancer. We have designed a new imaging paradigm, X-ray induced acoustic computed tomography (XACT). Applying this innovative technology to breast imaging, a single X-ray exposure can generate a 3D acoustic image, which dramatically reduces the radiation dose to patients compared to conventional breast CT. Methods: Theoretical calculations are done to determine the appropriate X-ray energy and ultrasound frequency for breast XACT imaging. A series of breast CT images along the coronal plane from a patient with calcifications in the breast tissue is used as the source image. HU-value-based segmentation is done to distinguish the skin, adipose tissue, glandular tissue, breast calcification, and chest bone in each CT image. The X-ray dose deposited in each pixel is calculated based on the tissue type by using the GEANT4 Monte Carlo toolkit. The initial pressure rise caused by X-ray energy deposition is calculated according to the tissue properties. Then, the X-ray induced acoustic wave propagation is simulated with the K-WAVE toolkit. Breast XACT images are reconstructed from the recorded time-dependent ultrasound waves. Results: For imaging a breast of large size (16 cm in diameter at the chest wall), the photon energy of the X-ray source and the central frequency of the ultrasound detector are determined to be 20 keV and 5.5 MHz, respectively. Approximately 10 times contrast between a calcification and the breast tissue can be acquired from the XACT image. The calcification can be clearly identified in the reconstructed XACT image. Conclusion: The XACT technique takes advantage of X-ray absorption contrast and high ultrasonic resolution. With the proposed innovative technology, one can potentially reduce the radiation dose to patients in 3D breast imaging compared with current X-ray modalities, while still maintaining high imaging contrast and spatial resolution.
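
    The "initial pressure rise" step is commonly modeled in photoacoustics/XACT as p0 = Γ · (μa F), the Grueneisen parameter times the volumetric absorbed energy density; the sketch below uses that common model with illustrative tissue values, not the study's exact parameters.

    ```python
    def initial_pressure(gruneisen, absorbed_energy_density):
        """p0 [Pa] = Gamma * (mu_a * F), i.e. Grueneisen parameter [-] times
        the volumetric absorbed energy density [J/m^3]."""
        return gruneisen * absorbed_energy_density

    # e.g. soft tissue Gamma ~ 0.2 and 1 J/m^3 deposited -> ~0.2 Pa
    print(initial_pressure(0.2, 1.0))
    ```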

  19. Meteorological modeling of arrival and deposition of fallout at intermediate distances downwind of the Nevada Test Site.

    PubMed

    Cederwall, R T; Peterson, K R

    1990-11-01

    A three-dimensional atmospheric transport and diffusion model is used to calculate the arrival and deposition of fallout from 13 selected nuclear tests at the Nevada Test Site (NTS) in the 1950s. Results are used to extend NTS fallout patterns to intermediate downwind distances (300 to 1200 km). The radioactive cloud is represented in the model by a population of Lagrangian marker particles, with concentrations calculated on an Eulerian grid. Use of marker particles, with fall velocities dependent on particle size, provides a realistic simulation of fallout as the debris cloud travels downwind. The three-dimensional wind field is derived from observed data, adjusted for mass consistency. Terrain is represented in the grid, which extends up to 1200 km downwind of NTS and has 32-km horizontal resolution and 1-km vertical resolution. Ground deposition is calculated by a deposition-velocity approach. Source terms and relationships between deposition and exposure rate are based on work by Hicks. Uncertainty in particle size and vertical distributions within the debris cloud (and stem) allow for some model "tuning" to better match measured ground-deposition values. Particle trajectories representing different sizes and starting heights above ground zero are used to guide source specification. An hourly time history of the modeled fallout pattern as the debris cloud moves downwind provides estimates of fallout arrival times. Results for event HARRY illustrate the methodology. The composite deposition pattern for all 13 tests is characterized by two lobes extending out to the north-northeast and east-northeast, respectively, at intermediate distances from NTS. Arrival estimates, along with modeled deposition values, augment measured deposition data in the development of data bases at the county level; these data bases are used for estimating radiation exposure at intermediate distances downwind of NTS. Results from a study of event TRINITY are also presented.
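
    The deposition-velocity approach named above amounts to multiplying the near-surface air concentration by an empirical deposition velocity over each time step; a minimal illustrative helper, with units as assumptions:

    ```python
    def deposited_activity(concentration, v_dep, dt):
        """Ground deposition per unit area over one time step:
        C [Bq/m^3] * v_d [m/s] * dt [s] -> [Bq/m^2]."""
        return concentration * v_dep * dt
    ```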

  20. Evaluation of resolution-precision relationships when using Structure-from-Motion to measure low intensity erosion processes, within a laboratory setting.

    NASA Astrophysics Data System (ADS)

    Benaud, Pia; Anderson, Karen; Quine, Timothy; James, Mike; Quinton, John; Brazier, Richard E.

    2017-04-01

    The accessibility of Structure-from-Motion with Multi-View Stereo (SfM) and its potential for multi-temporal applications offer an exciting opportunity to quantify soil erosion spatially. Accordingly, published research provides examples of the successful quantification of large erosion features and events to centimetre accuracy. Through rigorous control of the camera and image network geometry, the centimetre accuracy achievable at the field scale can translate to sub-millimetre accuracy within a laboratory environment. The broad aim of this study, therefore, was to understand how ultra-high-resolution spatial information on soil surface topography, derived from SfM, can be utilised to develop a spatially explicit, mechanistic understanding of rill and inter-rill erosion under experimental conditions. A rainfall simulator was used to create three soil surface conditions: compaction and rainsplash erosion, inter-rill erosion, and rill erosion. Total sediment capture was the primary validation for the experiments, allowing comparison between structurally and volumetrically derived change and true soil loss. A Terrestrial Laser Scanner (resolution of ca. 0.8 mm) was employed to assess spatial discrepancies within the SfM datasets and to provide an alternative measure of volumetric change. This body of work presents the workflow that has been developed for the laboratory-scale studies and provides information on the importance of DTM resolution for volumetric calculations of soil loss under different soil surface conditions. To date, using the methodology presented, point clouds with ca. 3.38 × 10⁷ points per m² and RMSE values of 0.17 to 0.43 mm (relative precision 1:2023-5117) were constructed. Preliminary results suggest that a decrease in DTM resolution from 0.5 to 10 mm does not result in a significant change in volumetric calculations (p = 0.088), while affording a 24-fold decrease in processing times, but may impact negatively on the mechanistic understanding of patterns of erosion. It is argued that the approach can be an invaluable tool for the spatially explicit evaluation of soil erosion models.
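
    The volumetric-change step implied above is DTM differencing: subtract successive surfaces, suppress changes below the survey precision, and convert the residual elevation change to volume via the cell area. A minimal sketch; the level-of-detection value is an illustrative choice in the reported RMSE range, not the study's threshold.

    ```python
    import numpy as np

    def volume_change(dtm_before, dtm_after, cell_size_m, lod_m=4e-4):
        """Net volumetric change [m^3] from DTM differencing; elevation changes
        below the level of detection lod_m are zeroed before summation."""
        dz = dtm_after - dtm_before
        dz = np.where(np.abs(dz) < lod_m, 0.0, dz)
        return float(dz.sum()) * cell_size_m ** 2
    ```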

  1. 47 CFR 27.1188 - Dispute resolution under the Cost-Sharing Plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Dispute resolution under the Cost-Sharing Plan. 27.1188 Section 27.1188 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... support to demonstrate that their calculation is reasonable and made in good faith. Specifically, these...

  2. 47 CFR 27.1188 - Dispute resolution under the Cost-Sharing Plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Dispute resolution under the Cost-Sharing Plan. 27.1188 Section 27.1188 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... support to demonstrate that their calculation is reasonable and made in good faith. Specifically, these...

  3. 47 CFR 27.1172 - Dispute Resolution Under the Cost-Sharing Plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Dispute Resolution Under the Cost-Sharing Plan. 27.1172 Section 27.1172 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... provide evidentiary support to demonstrate that their calculation is reasonable and made in good faith...

  4. 47 CFR 27.1172 - Dispute Resolution Under the Cost-Sharing Plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Dispute Resolution Under the Cost-Sharing Plan. 27.1172 Section 27.1172 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... provide evidentiary support to demonstrate that their calculation is reasonable and made in good faith...

  5. Relating speech production to tongue muscle compressions using tagged and high-resolution magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Ye, Chuyang; Woo, Jonghye; Stone, Maureen; Prince, Jerry

    2015-03-01

    The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue's motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals the activities of different tongue muscles in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. We then compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on the support of multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking of the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two post-glossectomy patients in a controlled speech task. The normal subject's tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients' muscle activities show patterns different from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
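
    The "strain in the line of action" computation reduces to projecting the strain tensor onto a unit fiber direction, e_f = fᵀEf, with negative values indicating shortening; a minimal sketch, with the Green-Lagrange tensor as an assumed convention:

    ```python
    import numpy as np

    def fiber_strain(E, f):
        """Normal strain along a fiber: e_f = f^T E f, with E a (3,3) strain
        tensor and f a fiber direction (normalized here); negative values
        indicate shortening (compression)."""
        f = np.asarray(f, float)
        f /= np.linalg.norm(f)
        return float(f @ E @ f)
    ```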

  6. Evaluation of magnetic nanoparticle samples made from biocompatible ferucarbotran by time-correlation magnetic particle imaging reconstruction method

    PubMed Central

    2013-01-01

    Background: Molecular imaging using magnetic nanoparticles (MNPs)—magnetic particle imaging (MPI)—has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is required to obtain a well-defined image, sophisticated hardware is needed. It is therefore desirable to achieve excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in the time domain, applying MNP samples made from biocompatible ferucarbotran with adjusted particle diameters. Methods: The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method, which calculates the image intensity from the correlation information between the magnetization signal generated by the MNPs and the system function, was attempted, and the obtained image quality was compared with that of the prototype in terms of image resolution and image artifacts. Results: MNP samples obtained by adjusting ferucarbotran showed properties superior to conventional ferucarbotran samples, and the numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because image blurring is theoretically introduced by the proposed method, an improved algorithm will be required. Conclusions: MNP samples obtained by adjusting ferucarbotran showed magnetizing properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917
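
    The time-correlation reconstruction idea can be read as assigning each candidate position an intensity equal to the normalized correlation between the measured magnetization signal and that position's system function; the sketch below is this illustrative reading, not the authors' exact estimator.

    ```python
    import numpy as np

    def correlation_intensity(signal, system_function):
        """Normalized time-domain correlation between the measured magnetization
        signal and the system function of a candidate source position."""
        s = signal - signal.mean()
        h = system_function - system_function.mean()
        return float(np.dot(s, h) / (np.linalg.norm(s) * np.linalg.norm(h) + 1e-12))
    ```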

  7. Small Rayed Crater Ejecta Retention Age Calculated from Current Crater Production Rates on Mars

    NASA Technical Reports Server (NTRS)

    Calef, F. J. III; Herrick, R. R.; Sharpton, V. L.

    2011-01-01

    Ejecta from impact craters, while extant, record erosive and depositional processes on their surfaces. Estimating the ejecta retention age (Eret), the time span over which ejecta remain recognizable around a crater, helps establish the timescales on which surface processes operate, thereby yielding a history of geologic activity. However, the abundance of sub-kilometer diameter (D) craters identifiable in high-resolution Mars imagery has raised questions about the accuracy of absolute crater dating and hence of ejecta retention ages. This research calculates the maximum Eret for small rayed impact craters (SRC) on Mars using estimates of the Martian impactor flux adjusted for meteorite ablation losses in the atmosphere. In addition, we use the diameter-distance relationship of secondary cratering to adjust crater counts in the vicinity of the large primary crater Zunil.
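
    Under a steady-flux assumption the retention age follows from a simple ratio: the number of rayed craters counted above some diameter, divided by the (ablation-adjusted) production rate of such craters times the counting area. A back-of-envelope sketch with hypothetical numbers:

    ```python
    def ejecta_retention_age(n_craters: int, area_km2: float,
                             production_rate: float) -> float:
        """Maximum ejecta retention age in years.

        production_rate : craters at or above the counted diameter formed
                          per km^2 per year, already adjusted for ablation
                          losses and with nearby secondaries removed.
        Assumes a steady rate and that every crater formed within the
        retention window still shows recognizable rays.
        """
        return n_craters / (production_rate * area_km2)

    # Illustrative values only
    print(f"{ejecta_retention_age(120, 1.0e6, 1.0e-10):.2e} yr")  # 1.20e+06
    ```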

  8. Satellite Remote Sensing: Passive-Microwave Measurements of Sea Ice

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Zukor, Dorothy J. (Technical Monitor)

    2001-01-01

    Satellite passive-microwave measurements of sea ice have provided global or near-global sea ice data for most of the period since the launch of the Nimbus 5 satellite in December 1972, and have done so with horizontal resolutions on the order of 25-50 km and a frequency of every few days. These data have been used to calculate sea ice concentrations (percent areal coverages), sea ice extents, the length of the sea ice season, sea ice temperatures, and sea ice velocities, and to determine the timing of the seasonal onset of melt as well as aspects of the ice-type composition of the sea ice cover. In each case, the calculations are based on the microwave emission characteristics of sea ice and the important contrasts between the microwave emissions of sea ice and those of the surrounding liquid-water medium.
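
    The concentration retrieval rests on the emission contrast between ice and open water. As a hedged single-channel illustration (operational algorithms use several channels and polarizations), a linear mixing model gives the concentration directly from one brightness temperature; the tie points below are placeholders, not algorithm values.

    ```python
    def ice_concentration(tb: float, tb_water: float = 177.0,
                          tb_ice: float = 249.0) -> float:
        """Sea ice concentration (0-1) from one brightness temperature.

        Linear mixing of ice and open-water emission in the footprint:
            TB = C*TB_ice + (1 - C)*TB_water
        =>  C  = (TB - TB_water) / (TB_ice - TB_water)
        """
        c = (tb - tb_water) / (tb_ice - tb_water)
        return min(max(c, 0.0), 1.0)

    print(ice_concentration(230.0))    # ~0.74, i.e. 74% areal ice coverage
    ```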

  9. The effect of spatial resolution on water scarcity estimates in Australia

    NASA Astrophysics Data System (ADS)

    Gevaert, Anouk; Veldkamp, Ted; van Dijk, Albert; Ward, Philip

    2017-04-01

    Water scarcity is an important global issue with severe socio-economic consequences, and its occurrence is likely to increase in many regions due to population growth, economic development and climate change. This has prompted a number of global and regional studies to identify areas that are vulnerable to water scarcity and to determine how this vulnerability will change in the future. A drawback of these studies, however, is that they typically have coarse spatial resolutions. Here, we studied the effect of increasing the spatial resolution of water scarcity estimates in Australia, and the Murray-Darling Basin in particular. This was achieved by calculating the water stress index (WSI), an indicator of the ratio of water use to water availability, at 0.5 and 0.05 degree resolution for the period 1990-2010. Monthly water availability data were based on outputs of the Australian Water Resources Assessment Landscape model (AWRA-L), which was run at both spatial resolutions and at a daily time scale. Water use information was obtained from a monthly 0.5 degree global dataset that distinguishes between water consumption for irrigation, livestock, industrial and domestic uses. The data were downscaled to 0.05 degree by distributing the sectoral water uses over the areas covered by the relevant land use types, using a high-resolution (~0.5 km) land use dataset. The monthly WSIs at high and low resolution were then used to evaluate differences in the patterns of water scarcity frequency and intensity. In this way, we assess to what extent increasing the spatial resolution can improve the identification of vulnerable areas and thereby assist in the development of strategies to lower this vulnerability. The results of this study provide insight into the scalability of water scarcity estimates and the added value of high-resolution water scarcity information in water resources management.
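
    The WSI itself is a per-cell, per-month ratio, so the resolution question concerns only the grids it is evaluated on. A minimal sketch of the indicator, masking cells with no available water; the arrays and the common 0.4 severe-stress threshold are illustrative assumptions:

    ```python
    import numpy as np

    def water_stress_index(use: np.ndarray, availability: np.ndarray) -> np.ndarray:
        """Monthly water stress index per grid cell: use / availability.

        Cells with (near-)zero availability are masked (NaN) rather than
        reported as infinite stress.
        """
        wsi = np.full(use.shape, np.nan)
        ok = availability > 1e-9
        wsi[ok] = use[ok] / availability[ok]
        return wsi

    use = np.array([[1.0, 2.0], [0.5, 0.0]])     # hypothetical use, km^3/month
    avail = np.array([[4.0, 2.5], [0.0, 3.0]])   # hypothetical availability
    wsi = water_stress_index(use, avail)
    print(wsi, wsi > 0.4)                        # flag severely stressed cells
    ```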

  10. The Thermal Regulation of Gravitational Instabilities in Protoplanetary Disks. II. Extended Simulations with Varied Cooling Rates

    NASA Astrophysics Data System (ADS)

    Mejía, Annie C.; Durisen, Richard H.; Pickett, Megan K.; Cai, Kai

    2005-02-01

    In order to investigate mass transport and planet formation through gravitational instabilities (GIs), we have extended our three-dimensional hydrodynamic simulations of protoplanetary disks from a previous paper. Our goal is to determine the asymptotic behavior of GIs and how it is affected by different constant cooling times. Initially, Rdisk=40 AU, Mdisk=0.07 Msolar, M*=0.5 Msolar, and Qmin=1.5. Sustained cooling, with tcool=2 ORPs (outer rotation periods; 1 ORP ~ 250 yr), drives the disk to instability in about 4 ORPs. This calculation is followed for 23.5 ORPs. After 12 ORPs, the disk settles into a quasi-steady state with sustained nonlinear instabilities, an average Q=1.44 over the outer disk, a well-defined power law Σ(r), and a roughly steady mass inflow rate Ṁ ~ 5×10^-7 Msolar yr^-1. The transport is driven by global low-order spiral modes. We restart the calculation at 11.2 ORPs with tcool=1 and 1/4 ORPs. The latter case is also run at high azimuthal resolution. We find that shorter cooling times lead to increased Ṁ values, denser and thinner spiral structures, and more violent dynamic behavior. The asymptotic total internal energy and the azimuthally averaged Q(r) are insensitive to tcool. Fragmentation occurs only in the high-resolution tcool=1/4 ORP case; however, none of the fragments survive for even a quarter of an orbit. Ringlike density enhancements appear and grow near the boundary between GI-active and GI-inactive regions. We discuss the possible implications of these rings for gas giant planet formation.
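
    The stability threshold referenced by Qmin is the Toomre parameter Q = cs·κ/(π·G·Σ). A small sketch evaluating Q for a near-Keplerian disk (κ ≈ Ω) around a 0.5 Msolar star; the outer-disk surface density and sound speed below are illustrative guesses, not values from the simulations.

    ```python
    import numpy as np

    G = 6.674e-8           # gravitational constant, cgs
    MSUN = 1.989e33        # solar mass, g
    AU = 1.496e13          # astronomical unit, cm

    def toomre_q(r_au: float, sigma: float, cs: float,
                 m_star: float = 0.5 * MSUN) -> float:
        """Toomre Q = cs * kappa / (pi * G * sigma), with kappa ~ Omega
        for a near-Keplerian disk. sigma in g/cm^2, cs in cm/s.
        Q below roughly 1.5 marks the onset of gravitational instability."""
        omega = np.sqrt(G * m_star / (r_au * AU) ** 3)
        return cs * omega / (np.pi * G * sigma)

    print(toomre_q(40.0, 50.0, 3.0e4))   # ~1.6 at the initial outer edge
    ```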

  11. Photophysics of phenol and pentafluorophenol: The role of nonadiabaticity in the optical transition to the lowest bright 1ππ* state

    NASA Astrophysics Data System (ADS)

    Rajak, Karunamoy; Ghosh, Arpita; Mahapatra, S.

    2018-02-01

    We report multimode vibronic coupling of the energetically low-lying electronic states of phenol and pentafluorophenol in this article. First-principles nuclear dynamics calculations are carried out to elucidate the optical absorption spectrum of both molecules. This is motivated by recent experimental measurements [S. Karmakar et al., J. Chem. Phys. 142, 184303 (2015)] on these systems. Diabatic vibronic coupling models are developed with the aid of adiabatic electronic energies calculated ab initio by the equation-of-motion coupled-cluster quantum chemistry method. A nuclear dynamics study on the constructed electronic states is carried out by both time-independent and time-dependent quantum mechanical methods. It is found that the nature of the low-energy πσ* transitions changes: in pentafluorophenol the energies of the first two 1πσ* states are lowered by about half an eV (vertically, relative to those in phenol), and they become energetically close to the optically bright first excited 1ππ* (S1) state. This results in strong vibronic coupling and multiple multi-state conical intersections among the ππ* and πσ* electronic states of pentafluorophenol. The impact of the associated nonadiabatic effects on the vibronic structure and dynamics of the 1ππ* state is examined at length. The structured vibronic band of phenol becomes structureless in pentafluorophenol. The theoretical results are found to be in good accord with the experimental findings at both high and low energy resolution.
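
    To make the diabatic-model idea concrete, the sketch below diagonalizes a two-state linear vibronic coupling Hamiltonian along a single dimensionless coordinate. With only one mode this produces an avoided crossing rather than a true conical intersection (which requires at least one tuning and one coupling mode); all parameter values are illustrative and are not fitted to phenol or pentafluorophenol.

    ```python
    import numpy as np

    def lvc_adiabatic(q, e1=4.5, e2=5.0, k1=-0.10, k2=0.05,
                      lam=0.08, omega=0.05):
        """Adiabatic energies (eV) of a two-state linear vibronic coupling
        model at dimensionless coordinate q:
            H(q) = (omega*q**2/2)*I + [[e1 + k1*q, lam*q],
                                       [lam*q,     e2 + k2*q]]
        """
        h0 = 0.5 * omega * q ** 2
        h = np.array([[e1 + k1 * q, lam * q],
                      [lam * q, e2 + k2 * q]])
        return h0 + np.linalg.eigvalsh(h)   # lower, upper adiabatic state

    for q in (-4.0, 0.0, 4.0):
        lo, hi = lvc_adiabatic(q)
        print(f"q = {q:+.1f}:  V- = {lo:.3f} eV,  V+ = {hi:.3f} eV")
    ```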

  12. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    NASA Technical Reports Server (NTRS)

    Lee, Yunha; Adams, P. J.

    2012-01-01

    This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solutions but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics errors in CN70/CN100 (number concentrations of particles larger than 70/100 nm diameter), proxies for cloud condensation nuclei, range from -5% to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from -20% to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by 2 to 3 times with only small numerical errors stemming from the condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.
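
    The speedup hinges on carrying fewer size sections. Because each section in a two-moment sectional scheme tracks both number and mass, a coarser grid can be built by merging adjacent sections while conserving both moments exactly; the sketch below illustrates this idea (section counts and values are hypothetical, and this is not the TOMAS remapping code).

    ```python
    import numpy as np

    def coarsen_sections(number: np.ndarray, mass: np.ndarray):
        """Merge adjacent pairs of size sections, conserving both moments.

        number, mass : per-section aerosol number and mass concentrations
                       (even length, e.g. 30 -> 15 sections).
        The merged section's mean particle mass (mass/number) may fall
        anywhere inside the wider bin, which is what lets a two-moment
        scheme stay accurate with fewer sections.
        """
        return number.reshape(-1, 2).sum(axis=1), mass.reshape(-1, 2).sum(axis=1)

    num = np.ones(30)                      # hypothetical concentrations
    mas = np.linspace(1.0, 30.0, 30)
    n2, m2 = coarsen_sections(num, mas)
    print(len(n2), n2[0], m2[0])           # 15 sections; 2.0, 3.0 in the first
    ```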

  13. Resolution study of imaging in nanoparticle optical phantoms

    NASA Astrophysics Data System (ADS)

    Ortiz-Rascón, E.; Bruce, N. C.; Flores-Flores, J. O.; Sato-Berru, R.

    2011-08-01

    We present results of resolution and optical characterization studies of silicon dioxide nanoparticle solutions. These phantoms consist of spherical particles with controlled mean diameters of 168 and 429 nm. The importance of this work lies in using these solutions to develop phantoms with optical properties that closely match those of human breast tissue at near-IR wavelengths, and in comparing different resolution criteria for imaging studies at these wavelengths. Characterization involves illuminating the sample with a laser beam transmitted through a container of known width holding the solution. The resulting intensity profiles of the light spot are measured as a function of detector position. Measured intensity profiles were fitted to profiles calculated from diffusion theory using the method of images. The fits yield the absorption and transport scattering coefficients, which can be adjusted by changing the particle concentration of the solution. We found that these coefficients are of the same order of magnitude as those reported for human tissue in published studies. The resolution study involves measuring the edge response function (ERF) for a mask embedded in the nanoparticle solutions and fitting it to the calculated ERF, obtaining the resolution for the Hebden, Sparrow and Bentzen criteria.
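
    A common way to extract resolution from an ERF measurement is to fit a Gaussian-blurred edge model and then convert the fitted width into a resolution figure (the Hebden, Sparrow and Bentzen criteria map the same fitted width to different numbers). A hedged sketch with synthetic data standing in for the scanned mask edge:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_response(x, x0, sigma, a, b):
        """Gaussian-blurred edge: baseline b, amplitude a, edge at x0."""
        return b + 0.5 * a * (1.0 + erf((x - x0) / (sigma * np.sqrt(2.0))))

    # Synthetic measurement standing in for a scanned mask edge
    x = np.linspace(-5.0, 5.0, 201)              # detector position (mm)
    rng = np.random.default_rng(0)
    y = edge_response(x, 0.3, 0.8, 1.0, 0.1) + rng.normal(0.0, 0.01, x.size)

    popt, _ = curve_fit(edge_response, x, y, p0=(0.0, 1.0, 1.0, 0.0))
    x0, sigma = popt[0], abs(popt[1])
    print(f"edge at {x0:.2f} mm, LSF FWHM = {2.355 * sigma:.2f} mm")
    ```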

  14. Determination of the resolution of the x-ray microscope XM-1 at beamline 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heck, J.M.; Meyer-Ilse, W.; Attwood, D.T.

    1997-04-01

    Resolution determination in x-ray microscopy is a complex issue that depends on many factors. Many different criteria and experimental setups are used to characterize resolution. Some of the important factors affecting resolution include the partial coherence and spectrum of the illumination. The purpose of this research has been to measure the resolution of XM-1 at beamline 6.1 taking these factors into account, and to compare the measurements with theoretical calculations. The x-ray microscope XM-1, built by the Center for X-ray Optics (CXRO), has been operational since 1994 at the Advanced Light Source at E.O. Lawrence Berkeley National Laboratory. It is of the conventional (i.e., full-field) type, utilizing zone plate optics. ALS bending magnet radiation is focused by a condenser zone plate onto a monochromator pinhole immediately in front of the sample. X-rays transmitted through the sample are focused by a micro-zone plate onto a CCD camera. The pinhole and the condenser with a central stop constitute a linear monochromator. The spectral distribution of the light illuminating the sample has been calculated assuming geometrical optics.
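
    For reference, the ideal-case starting point for such comparisons is the Rayleigh resolution of a zone plate, roughly 1.22 times the outermost zone width; partial coherence and spectral bandwidth, the factors the abstract highlights, then shift the achievable value. A one-line illustration with a hypothetical zone width:

    ```python
    def rayleigh_resolution_nm(outer_zone_width_nm: float) -> float:
        """Ideal (incoherent-limit) Rayleigh resolution of a Fresnel zone
        plate: ~1.22 times the outermost zone width."""
        return 1.22 * outer_zone_width_nm

    print(rayleigh_resolution_nm(45.0))   # hypothetical 45 nm zones -> ~55 nm
    ```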

  15. Retinal Structure of Birds of Prey Revealed by Ultra-High Resolution Spectral-Domain Optical Coherence Tomography

    PubMed Central

    Ruggeri, Marco; Major, James C.; McKeown, Craig; Knighton, Robert W.; Puliafito, Carmen A.

    2010-01-01

    Purpose. To reveal three-dimensional (3-D) information about the retinal structures of birds of prey in vivo. Methods. An ultra-high resolution spectral-domain optical coherence tomography (SD-OCT) system was built for in vivo imaging of retinas of birds of prey. The calibrated imaging depth and axial resolution of the system were 3.1 mm and 2.8 μm (in tissue), respectively. 3-D segmentation was performed for calculation of the retinal nerve fiber layer (RNFL) map. Results. High-resolution OCT images were obtained of the retinas of four species of birds of prey: two diurnal hawks (Buteo platypterus and Buteo brachyurus) and two nocturnal owls (Bubo virginianus and Strix varia). These images showed the detailed retinal anatomy, including the retinal layers and the structure of the deep and shallow foveae. The calculated thickness map showed the RNFL distribution. Traumatic injury to one bird's retina was also successfully imaged. Conclusions. Ultra-high resolution SD-OCT provides unprecedented high-quality 2-D and 3-D in vivo visualization of the retinal structures of birds of prey. SD-OCT is a powerful imaging tool for vision research in birds of prey. PMID:20554605
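
    Axial resolution in SD-OCT is set by the source spectrum rather than by the imaging optics. For a Gaussian spectrum it is (2 ln 2 / π) · λ0² / Δλ in air, divided by the tissue refractive index for the in-tissue value; the source parameters below are illustrative and are not those of the instrument in the abstract.

    ```python
    import numpy as np

    def oct_axial_resolution_um(lambda0_nm: float, dlambda_nm: float,
                                n_tissue: float = 1.38) -> float:
        """Axial resolution for a Gaussian source spectrum:
            dz_air = (2*ln2/pi) * lambda0**2 / dlambda
        divided by the tissue index to give the in-tissue value (um)."""
        dz_air_nm = (2.0 * np.log(2.0) / np.pi) * lambda0_nm ** 2 / dlambda_nm
        return dz_air_nm / n_tissue / 1000.0

    print(f"{oct_axial_resolution_um(840.0, 100.0):.2f} um")  # ~2.26 um in tissue
    ```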

  16. Fault Specific Seismic Hazard Maps as Input to Loss Reserves Calculation for Attica Buildings

    NASA Astrophysics Data System (ADS)

    Deligiannakis, Georgios; Papanikolaou, Ioannis; Zimbidis, Alexandros; Roberts, Gerald

    2014-05-01

    Greece is prone to various natural disasters, such as wildfires, floods, landslides and earthquakes, owing to the environmental and geological conditions that prevail at tectonic plate boundaries. Seismic risk is the predominant one, in terms of damage and casualties, in the Greek territory. The historical record of earthquakes in Greece has been published by various researchers, providing useful data for seismic hazard assessment. However, the completeness of the historical record in Greece, despite being one of the longest worldwide, reaches only 500 years for M ≥ 7.3 and less than 200 years for M ≥ 6.5. Considering that active faults in the area have recurrence intervals of a few hundred to several thousand years, it is clear that many active faults have not ruptured during the completeness period covered by the historical records. Newer seismic hazard assessment methodologies therefore follow fault-specific approaches, in which the seismic sources are geologically constrained active faults; this addresses the incompleteness of the historical record, offers higher spatial resolution, and yields realistic source-to-locality distances, since the seismic sources are located very accurately. Fault-specific approaches provide quantitative assessments, as they measure fault slip rates from geological data, giving a more reliable estimate of seismic hazard. We used a fault-specific seismic hazard assessment approach for the region of Attica. The method of seismic hazard mapping from geological fault throw-rate data combined three major factors: (i) empirical relationships between fault rupture length, earthquake magnitude and coseismic slip; (ii) the radii of the VI, VII, VIII and IX isoseismals on the Modified Mercalli (MM) intensity scale; and (iii) attenuation-amplification functions for seismic shaking on bedrock compared with basin-filling sediments. We explicitly modeled 22 active faults that could affect the region of Attica, including Athens, using detailed data derived from published papers, neotectonic maps and fieldwork observations. Moreover, we incorporated background seismicity models from the historical record, as well as the distribution of subduction-zone earthquakes, to integrate strong deep earthquakes that could also affect the Attica region. We created four high-spatial-resolution seismic hazard maps for the region of Attica, one for each of the intensities VII - X (MM). These maps offer a locality-specific shaking recurrence record, which represents the long-term shaking history in a more complete way, since they incorporate several seismic cycles of the active faults that could affect Attica. Each of these high-resolution seismic hazard maps displays both the spatial distribution and the recurrence, over a specific time period, of the relevant intensity. Time-independent probabilities were extracted from these average recurrence intervals using the stationary Poisson model P = 1 - e^(-Λt), where the Λ value is given by the recurrence of each intensity as displayed in the seismic hazard maps. However, insurance contracts usually lack detailed spatial information and refer to the Postal Code level, akin to CRESTA zones. To this end, a time-independent probability of shaking at intensities VII - X was calculated for every Postal Code, for a given time period, using the Poisson model.
The reserves calculation for a buildings portfolio combines the probability of events of specific intensities within each Postal Code with the building characteristics, such as the construction type and the insured value. We propose a standard approach for the reserves calculation K(t) for a specific time period: K(t) = x2 · [x1·y1·P1(t) + x1·y2·P2(t) + x1·y3·P3(t) + x1·y4·P4(t)], which is a function of the probabilities of occurrence of the seismic intensities VII - X over that period (P1(t) - P4(t)), the value of the building x1, the insured value x2, and the characteristics of the building, such as the construction type, age, height and use of property (y1 - y4). Furthermore, a stochastic approach is also adopted to obtain the reserve value K(t) for the specific time period: this calculation draws a set of simulations from the Poisson random variable and then takes the respective expectations.
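
A minimal sketch of the two calculations named above, combining the stationary Poisson model for the per-intensity probabilities with the proposed reserves formula; the annual rates, vulnerability factors y1 - y4 and monetary values are hypothetical placeholders.

```python
import math

def shaking_probability(annual_rate: float, years: float) -> float:
    """Time-independent Poisson probability of at least one event:
    P = 1 - exp(-Lambda * t), with Lambda the annual rate of the
    intensity read off the fault-specific hazard map."""
    return 1.0 - math.exp(-annual_rate * years)

def reserve(x1: float, x2: float, vuln, probs) -> float:
    """K(t) = x2 * [x1*y1*P1(t) + ... + x1*y4*P4(t)] for intensities
    VII - X, with building value x1, insured value x2 and building
    vulnerability factors y1 - y4 (construction type, age, height, use)."""
    return x2 * sum(x1 * y * p for y, p in zip(vuln, probs))

rates = [1 / 50, 1 / 200, 1 / 800, 1 / 3000]   # per-intensity annual rates
probs = [shaking_probability(r, 10.0) for r in rates]
print(reserve(1.0, 1.0, [0.02, 0.08, 0.25, 0.60], probs))  # ~0.0126
```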

  17. Accelerating the coupled-cluster singles and doubles method using the chain-of-sphere approximation

    NASA Astrophysics Data System (ADS)

    Dutta, Achintya Kumar; Neese, Frank; Izsák, Róbert

    2018-06-01

    In this paper, we present a chain-of-spheres implementation of the external exchange term, the computational bottleneck of coupled-cluster calculations at the singles and doubles level. This implementation is compared with standard molecular orbital, atomic orbital and resolution-of-identity implementations of the same term within the ORCA package and turns out to be the most efficient one for larger molecules, with better accuracy than the resolution-of-identity approximation. Furthermore, it becomes possible to perform a canonical CC calculation on a tetramer of nucleobases in 17 days and 20 hours.

  18. Spacecraft Charging Calculations: NASCAP-2K and SEE Spacecraft Charging Handbook

    NASA Technical Reports Server (NTRS)

    Davis, V. A.; Neergaard, L. F.; Mandell, M. J.; Katz, I.; Gardner, B. M.; Hilton, J. M.; Minor, J.

    2002-01-01

    For fifteen years, the NASA/Air Force Charging Analyzer Program for Geosynchronous Orbits (NASCAP/GEO) has been the workhorse of spacecraft charging calculations. Two new tools, the Space Environment and Effects (SEE) Spacecraft Charging Handbook (recently released) and Nascap-2K (under development), use improved numerical techniques and modern user interfaces to tackle the same problem. The SEE Spacecraft Charging Handbook provides first-order, lower-resolution solutions, while Nascap-2K provides higher-resolution results appropriate for detailed analysis. This paper illustrates how the improvements in the numerical techniques affect the results.

  19. On modeling the paleohydrologic response of closed-basin lakes to fluctuations in climate: Methods, applications, and implications

    NASA Astrophysics Data System (ADS)

    Liu, Ganming; Schwartz, Franklin W.

    2014-04-01

    Climate reconstructions using tree rings and lake sediments have contributed significantly to the understanding of Holocene climates. Approaches focused specifically on reconstructing the temporal water-level response of lakes, however, are much less developed. This paper describes a statistical correlation approach based on time series of Palmer Drought Severity Index (PDSI) values derived from instrumental records or tree rings as a basis for reconstructing stage hydrographs for closed-basin lakes. We use a distributed lag correlation model to calculate a variable, ωt, that represents the water level of a lake at any time t as a result of the integrated climatic forcing of preceding years. The method was validated using both synthetic and measured lake-stage data, and the study found that a lake's "memory" of climate fades as time passes, following an exponential decay at a rate determined by the correlation time lag. Calculated trends in ωt for Moon Lake, Rice Lake, and Lake Mina from A.D. 1401 to 1860 compared well with the established chronologies (salinity, moisture, and Mg/Ca ratios) reconstructed from sediments. This method provides an independent approach for developing high-resolution information on lake behavior in preinstrumental times and has been able to identify problems of climate-signal deterioration in sediment-based climate reconstructions in lakes with a long time lag.
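
    The distributed-lag model can be sketched as an exponentially weighted sum of current and past PDSI values, with the decay rate playing the role of the lake's fading memory. The function name, decay value and stand-in PDSI series below are hypothetical, not the paper's calibration.

    ```python
    import numpy as np

    def lake_memory_index(pdsi: np.ndarray, decay: float,
                          max_lag: int = 30) -> np.ndarray:
        """Distributed-lag climate index: for each year t,
            omega_t = sum_k w_k * PDSI_{t-k},  w_k ~ exp(-decay * k),
        with weights normalized to sum to 1. Small decay = long memory."""
        k = np.arange(max_lag + 1)
        w = np.exp(-decay * k)
        w /= w.sum()
        # 'valid' mode: omega is defined once max_lag years of PDSI exist
        return np.convolve(pdsi, w, mode="valid")

    rng = np.random.default_rng(1)
    pdsi = rng.normal(0.0, 2.0, 200)          # stand-in PDSI time series
    omega = lake_memory_index(pdsi, decay=0.2)
    print(omega.shape, omega[:3])
    ```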

  20. Time Variations of Observed H α Line Profiles and Precipitation Depths of Nonthermal Electrons in a Solar Flare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falewicz, Robert; Radziszewski, Krzysztof; Rudawy, Paweł

    2017-10-01

    We compare time variations of the H α and X-ray emissions observed during the pre-impulsive and impulsive phases of the C1.1-class solar flare on 2013 June 21 with those of the plasma parameters and synthesized X-ray emission from a 1D hydrodynamic numerical model of the flare. The numerical model was calculated assuming that the external energy is delivered to the flaring loop by nonthermal electrons (NTEs). The H α spectra and images were obtained using the Multi-channel Subtractive Double Pass spectrograph with a time resolution of 50 ms. The X-ray fluxes and spectra were recorded by RHESSI. Pre-flare geometric and thermodynamic parameters of the model and the delivered energy were estimated using RHESSI data. The time variations of the X-ray light curves in various energy bands and those of the H α intensities and line profiles were well correlated. The timescales of the observed variations agree with the calculated variations of the plasma parameters in the flaring loop footpoints, reflecting the time variations of the vertical extent of the energy deposition layer. Our result shows that the fast time variations of the H α emission of the flaring kernels can be explained by momentary changes of the deposited energy flux and the variations of the penetration depths of the NTEs.
