Sample records for spectral analysis code

  1. Performance enhancement of optical code-division multiple-access systems using transposed modified Walsh code

    NASA Astrophysics Data System (ADS)

    Sikder, Somali; Ghosh, Shila

    2018-02-01

    This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
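    The TMWC construction itself is not given in the abstract, but the Walsh codes it modifies can be generated with the standard Sylvester-Hadamard recursion; a minimal sketch (function names are illustrative, and rows are in natural Hadamard order rather than sequency order):

```python
# Sketch: unipolar Walsh codes via the Sylvester-Hadamard construction.
# The transposed modified Walsh code (TMWC) of the paper is not reproduced
# here; this only illustrates the base codes it builds on.

def hadamard(n):
    """Sylvester-Hadamard matrix of order n (n a power of 2), entries +/-1."""
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    return ([row + row for row in h] +
            [row + [-x for x in row] for row in h])

def unipolar_walsh(n):
    """Map +/-1 Hadamard rows to unipolar {0,1} code sequences for OCDMA."""
    return [[(1 + x) // 2 for x in row] for row in hadamard(n)]

codes = unipolar_walsh(4)
# Distinct bipolar rows are orthogonal: any two agree in exactly n/2 positions.
```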

  2. Analysis on the optical aberration effect on spectral resolution of coded aperture spectroscopy

    NASA Astrophysics Data System (ADS)

    Hao, Peng; Chi, Mingbo; Wu, Yihui

    2017-10-01

    The coded aperture spectrometer achieves high throughput and high spectral resolution by replacing the traditional single slit with a two-dimensional slit array manufactured by MEMS technology. However, the sampling accuracy of the coded spectral image is degraded by system aberrations, machining errors, fixture errors, and similar imperfections, which lowers the spectral resolution. The factors influencing spectral resolution are the decoding error, the spectral resolution of each column, and the correction of the column spectrum offset. For the Czerny-Turner spectrometer, the spectral resolution of each column depends mostly on astigmatism; in this coded aperture spectrometer, uncorrected astigmatism does degrade performance, so methods must be applied to reduce or remove it. Field curvature and spectral curvature can also introduce spectrum-correction errors.

  3. Spectral analysis of variable-length coded digital signals

    NASA Astrophysics Data System (ADS)

    Cariolaro, G. L.; Pierobon, G. L.; Pupolin, S. G.

    1982-05-01

    A spectral analysis is conducted for a variable-length word sequence by an encoder driven by a stationary memoryless source. A finite-state sequential machine is considered as a model of the line encoder, and the spectral analysis of the encoded message is performed under the assumption that the sourceword sequence is composed of independent identically distributed words. Closed form expressions for both the continuous and discrete parts of the spectral density are derived in terms of the encoder law and sourceword statistics. The jump part exhibits jumps at multiple integers of per lambda(sub 0)T, where lambda(sub 0) is the greatest common divisor of the possible codeword lengths, and T is the symbol period. The derivation of the continuous part can be conveniently factorized, and the theory is applied to the spectral analysis of BnZS and HDBn codes.
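    The spacing of the discrete spectral lines described above follows directly from the greatest common divisor of the codeword lengths; a small sketch, using assumed example lengths:

```python
# Sketch: the discrete (jump) part of the spectrum of a variable-length
# coded message has lines at integer multiples of 1/(lambda0*T), where
# lambda0 is the greatest common divisor of the possible codeword lengths
# and T is the symbol period. The lengths below are illustrative only.
from functools import reduce
from math import gcd

def line_spacing(codeword_lengths, T):
    """Frequency spacing of the discrete spectral lines, in Hz."""
    lam0 = reduce(gcd, codeword_lengths)
    return 1.0 / (lam0 * T)

# e.g. codewords of 4 and 6 symbols with a 1 us symbol period:
spacing = line_spacing([4, 6], 1e-6)  # lambda0 = 2 -> 500 kHz line spacing
```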

  4. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    NASA Astrophysics Data System (ADS)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-06-01

    This manuscript deals with analysis of Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that number of users and type of codes used for optical system directly decide the performance of system. MAI can be restricted by efficient designing of optical codes and implementing them with unique architecture to accommodate more number of users. Hence, it is a necessity to design a technique like spectral direct detection (SDD) technique with modified double weight code, which can provide better cardinality and good correlation property.

  5. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper and its sequel, new theoretical work is contributed that substantially enhances existing performance-analysis formulations. Major contributions include: substantial reduction of computational complexity, including a priori bounding of BER accuracy; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate BER performance in a variety of scenarios. Here, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, an induction argument sufficiently supports the generalization of the system model. The results in this paper and its sequel support the claim that an arbitrarily accurate performance analysis of DS-CDMA systems can be carried out over the full range of binary coding with minimal computational complexity.

  6. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    NASA Astrophysics Data System (ADS)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  7. Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen

    NASA Technical Reports Server (NTRS)

    Blackwell, H. E.

    1991-01-01

    An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions, and vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc-jet N2 and air spectral data were reviewed, and NO emission characteristics were studied. Operational procedures of the DSMC code were reviewed. Information on other appropriate codes and modifications, including ionization, was compiled, and the applicability of the reviewed codes to the task requirements was determined. Computational procedures used in the CFD codes of Li and in other codes on JSC computers were also reviewed. An analysis was made of problems associated with integrating the task-specific chemical kinetics into CFD codes.

  8. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate SAC-OCDMA performance by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of a SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation studies, with reference to bit error rate (BER), signal-to-noise ratio (SNR), and eye patterns at the receiving end. It is shown that the EMD code with the spectral direct detection (SDD) technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.

  9. Signal processing of aircraft flyover noise

    NASA Technical Reports Server (NTRS)

    Kelly, Jeffrey J.

    1991-01-01

    A detailed analysis of signal-processing concerns for measuring aircraft flyover noise is presented. The development of a de-Dopplerization scheme for both corrected time-history and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme; its inputs are the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a uniform level flyover is considered, but the code can accept more general flight profiles. The effects of spectral smearing and its removal are discussed. Using data acquired from an XV-15 tilt-rotor flyover test, comparisons are made between measured and corrected spectra. Frequency shifts are accurately accounted for by the method. It is shown that correcting for spherical spreading, Doppler amplitude, and frequency can give some indication of source directivity. The analysis indicates that smearing increases with frequency and is more severe on approach than on recession.
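    The paper's de-Dopplerization scheme is not detailed in the abstract; the basic frequency correction such a scheme builds on can be sketched as follows (a minimal model assuming uniform motion; the function name is illustrative):

```python
# Sketch: basic Doppler correction for a source in uniform motion,
# f_obs = f_src / (1 - M*cos(theta)), where M is the flight Mach number
# and theta is the angle between the flight path and the source-observer
# line. This is not the paper's full time-history de-Dopplerization.
import math

def dedopplerize(f_obs, mach, theta_rad):
    """Recover the emitted frequency from the observed (shifted) frequency."""
    return f_obs * (1.0 - mach * math.cos(theta_rad))

# Directly overhead (theta = 90 deg) there is no frequency shift;
# on approach (theta -> 0) the observed frequency is shifted upward.
f_src = dedopplerize(100.0, 0.2, 0.0)
```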

  10. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency while maintaining power efficiency equivalent to or better than that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) versus channel bit energy in Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded against code rate to determine parameters that achieve good performance. The relative simplicity of Galois-field algebra compared to the Viterbi algorithm, and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes, indicate that LRBC using block codes is a desirable method for high-data-rate implementations.
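    The quoted spectral efficiency can be checked directly from the modulation order and the ensemble code rate:

```python
# Worked check of the spectral efficiency quoted in the abstract:
# an ensemble code rate of 8/9 on 8PSK gives log2(8) * (8/9) bits/s/Hz.
import math

def spectral_efficiency(mod_order, code_rate):
    """Information bits per second per Hz for M-ary modulation at a given rate."""
    return math.log2(mod_order) * code_rate

eff = spectral_efficiency(8, 8 / 9)  # 3 * 8/9 = 2.666..., quoted as 2.67
```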

  11. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the characteristics of their spectral and shape domains of panchromatic imagery. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel-vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  12. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Albanese, K; Lakshmanan, M

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time, and image quality for breast tumor discrimination. Methods: An X-ray tube (125 kVp, 400 mAs) illuminated the sample with collimated fan beams of varying widths (3 mm to 25 mm). Scatter data were collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10 mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10 mm fan beam compared to a 1.5 mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times while sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded aperture coherent scatter imaging a clinically viable method for breast cancer detection. United States Department of Homeland Security; Duke University Medical Center - Department of Radiology; Carl E Ravin Advanced Imaging Laboratories; Duke University Medical Physics Graduate Program.

  13. Processing Raman Spectra of High-Pressure Hydrogen Flames

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2006-01-01

    The Raman Code automates the analysis of laser-Raman-spectroscopy data for diagnosis of combustion at high pressure. On the basis of the theory of molecular spectroscopy, the software calculates the rovibrational and pure rotational Raman spectra of H2, O2, N2, and H2O in hydrogen/air flames at given temperatures and pressures. Given a set of Raman spectral data from measurements on a given flame and results from the aforementioned calculations, the software calculates the thermodynamic temperature and number densities of the aforementioned species. The software accounts for collisional spectral-line-broadening effects at pressures up to 60 bar (6 MPa). The line-broadening effects increase with pressure and thereby complicate the analysis. The software also corrects for spectral interference ("cross-talk") among the various chemical species. In the absence of such correction, the cross-talk is a significant source of error in temperatures and number densities. This is the first known comprehensive computer code that, when used in conjunction with a spectral calibration database, can process Raman-scattering spectral data from high-pressure hydrogen/air flames to obtain temperatures accurate to within 10 K and chemical-species number densities accurate to within 2 percent.

  14. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for both the spectral and the spatial coding. The proposed system can fully eliminate multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise, and thermal noise are used to analyze code performance. In comparison with existing two-dimensional (2D) codes, such as the 2D perfect difference (2D-PD), 2D extended enhanced double weight (2D-Extended-EDW), and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that the proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of the 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  15. Novel methodologies for spectral classification of exon and intron sequences

    NASA Astrophysics Data System (ADS)

    Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.

    2012-12-01

    Digital processing of a nucleotide sequence requires mapping it to a numerical sequence, and the choice of nucleotide-to-numeric mapping affects how well the sequence's biological properties are preserved and reflected from the nucleotide domain to the numerical domain. Digital spectral analysis of nucleotide sequences reveals a period-3 power spectral value that is more prominent in exon sequences than in intron sequences. The success of period-3-based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them with existing codes to determine an appropriate representation, and to introduce novel thresholding methods for more accurate period-3-based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows. Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers attractive performance. A windowed 1-sequence numerical representation (with window lengths of 9, 15, and 24 bases) offers a possible speed gain over the non-windowed 4-sequence Voss representation, a gain that increases with sequence length. A winner threshold value (chosen as the best among two defined threshold values and one other threshold value) offers top precision for classifying an unknown sequence of specified fixed length. An interpolated winner threshold value applicable to an unknown sequence of arbitrary length can be estimated from the winner threshold values of fixed-length sequences with comparable performance. In general, precision increases with sequence length. The study contributes an effective spectral analysis of nucleotide sequences that better reveals embedded properties, with potential applications in improved genome annotation.
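    The period-3 measure underlying this classification can be sketched with the classic 4-sequence Voss representation (the paper's novel 1-sequence codes and threshold methods are not reproduced here):

```python
# Sketch: period-3 spectral measure used for exon/intron classification.
# Each of the four Voss binary indicator sequences is transformed and the
# power at frequency index k = N/3 is summed; exon-like (3-periodic)
# sequences show a pronounced peak there.
import cmath

def period3_power(seq):
    """Total power of the four binary indicator sequences at k = N/3."""
    n = len(seq)      # ideally a multiple of 3
    k = n / 3.0
    total = 0.0
    for base in "ACGT":
        x = [1.0 if c == base else 0.0 for c in seq.upper()]
        s = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
        total += abs(s) ** 2
    return total

# A strongly 3-periodic sequence gives a large period-3 power:
p = period3_power("ACG" * 30)
```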

  16. Terrestrial solar spectral modeling [SOLTRAN, BRITE, and FLASH codes]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bird, R.E.

    The utility of accurate computer codes for calculating the solar spectral irradiance under various atmospheric conditions was recognized. New absorption and extraterrestrial spectral data are introduced. Progress is made in radiative transfer modeling outside of the solar community, especially for space and military applications. Three rigorous radiative transfer codes SOLTRAN, BRITE, and FLASH are employed. The SOLTRAN and BRITE codes are described and results from their use are presented.

  17. Modeling experimental plasma diagnostics in the FLASH code: Thomson scattering

    NASA Astrophysics Data System (ADS)

    Weide, Klaus; Flocke, Norbert; Feister, Scott; Tzeferacos, Petros; Lamb, Donald

    2017-10-01

    Spectral analysis of the Thomson scattering of laser light sent into a plasma provides an experimental method to quantify plasma properties in laser-driven plasma experiments. We have implemented such a synthetic Thomson scattering diagnostic unit in the FLASH code, to emulate the probe-laser propagation, scattering and spectral detection. User-defined laser rays propagate into the FLASH simulation region and experience scattering (change in direction and frequency) based on plasma parameters. After scattering, the rays propagate out of the interaction region and are spectrally characterized. The diagnostic unit can be used either during a physics simulation or in post-processing of simulation results. FLASH is publicly available at flash.uchicago.edu. U.S. DOE NNSA, U.S. DOE NNSA ASC, U.S. DOE Office of Science and NSF.

  18. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  19. FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code

    NASA Astrophysics Data System (ADS)

    Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya

    2017-12-01

    We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied; rather, all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, and dust reddening, on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey, and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
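    The core chi-squared step that such a fitting code iterates can be sketched for a single template; FIREFLY's actual iterative model combination, BIC control, and dust treatment are not reproduced here:

```python
# Sketch: weighted chi-squared fit of a single model template to data with
# known per-pixel errors. The best linear amplitude has a closed form; a
# full code like FIREFLY iterates over combinations of many templates.
def best_amplitude(data, template, sigma):
    """Amplitude a minimizing chi^2 = sum(((d - a*t)/sigma)^2)."""
    num = sum(d * t / s**2 for d, t, s in zip(data, template, sigma))
    den = sum((t / s) ** 2 for t, s in zip(template, sigma))
    return num / den

def chi2(data, model, sigma):
    """Chi-squared of a model spectrum against the data."""
    return sum(((d - m) / s) ** 2 for d, m, s in zip(data, model, sigma))

# If the data are an exact multiple of the template, chi^2 reaches zero:
t = [1.0, 2.0, 3.0]
a = best_amplitude([2.0, 4.0, 6.0], t, [1.0, 1.0, 1.0])  # best-fit amplitude
```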

  20. Gauge invariant spectral Cauchy characteristic extraction

    NASA Astrophysics Data System (ADS)

    Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey

    2015-12-01

    We present gauge invariant spectral Cauchy characteristic extraction. We compare gravitational waveforms extracted from a head-on black hole merger simulated in two different gauges by two different codes. We show rapid convergence, demonstrating both gauge invariance of the extraction algorithm and consistency between the legacy Pitt null code and the much faster spectral Einstein code (SpEC).

  1. Long-range correlation properties of coding and noncoding DNA sequences: GenBank analysis.

    PubMed

    Buldyrev, S V; Goldberger, A L; Havlin, S; Mantegna, R N; Matsa, M E; Peng, C K; Simons, M; Stanley, H E

    1995-05-01

    An open question in computational molecular biology is whether long-range correlations are present in both coding and noncoding DNA or only in the latter. To answer this question, we consider all 33301 coding and all 29453 noncoding eukaryotic sequences--each of length larger than 512 base pairs (bp)--in the present release of the GenBank to determine whether there is any statistically significant distinction in their long-range correlation properties. Standard fast Fourier transform (FFT) analysis indicates that coding sequences have practically no correlations in the range from 10 bp to 100 bp (spectral exponent beta=0.00 +/- 0.04, where the uncertainty is two standard deviations). In contrast, for noncoding sequences, the average value of the spectral exponent beta is positive (0.16 +/- 0.05), which unambiguously shows the presence of long-range correlations. We also separately analyze the 874 coding and the 1157 noncoding sequences that have more than 4096 bp and find a larger region of power-law behavior. We calculate the probability that these two data sets (coding and noncoding) were drawn from the same distribution and find that it is less than 10^-10. We obtain independent confirmation of these findings using the method of detrended fluctuation analysis (DFA), which is designed to treat sequences with statistical heterogeneity, such as DNA's known mosaic structure ("patchiness") arising from the nonstationarity of nucleotide concentration. The near-perfect agreement between the two independent analysis methods, FFT and DFA, increases the confidence in the reliability of our conclusion.
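    The DFA method mentioned above can be sketched in a simplified pure-Python form (not the authors' implementation):

```python
# Sketch of detrended fluctuation analysis (DFA): integrate the series,
# fit and subtract a linear trend in each box, and take the RMS residual.
# The scaling exponent is the slope of log F(n) vs log n over box sizes n;
# uncorrelated noise gives a slope of about 0.5.
import math

def dfa_fluctuation(x, box):
    """RMS fluctuation of the integrated series about per-box linear trends."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:
        s += v - mean
        y.append(s)                      # integrated (cumulative-sum) profile
    n_boxes = len(y) // box
    sq = 0.0
    for b in range(n_boxes):
        seg = y[b * box:(b + 1) * box]
        t = list(range(box))
        tm, sm = sum(t) / box, sum(seg) / box
        denom = sum((ti - tm) ** 2 for ti in t)
        slope = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / denom
        # accumulate squared residuals about the local linear fit
        sq += sum((si - (sm + slope * (ti - tm))) ** 2
                  for ti, si in zip(t, seg))
    return math.sqrt(sq / (n_boxes * box))
```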

  2. Long-range correlation properties of coding and noncoding DNA sequences: GenBank analysis

    NASA Technical Reports Server (NTRS)

    Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Matsa, M. E.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    An open question in computational molecular biology is whether long-range correlations are present in both coding and noncoding DNA or only in the latter. To answer this question, we consider all 33301 coding and all 29453 noncoding eukaryotic sequences--each of length larger than 512 base pairs (bp)--in the present release of the GenBank to determine whether there is any statistically significant distinction in their long-range correlation properties. Standard fast Fourier transform (FFT) analysis indicates that coding sequences have practically no correlations in the range from 10 bp to 100 bp (spectral exponent beta=0.00 +/- 0.04, where the uncertainty is two standard deviations). In contrast, for noncoding sequences, the average value of the spectral exponent beta is positive (0.16 +/- 0.05), which unambiguously shows the presence of long-range correlations. We also separately analyze the 874 coding and the 1157 noncoding sequences that have more than 4096 bp and find a larger region of power-law behavior. We calculate the probability that these two data sets (coding and noncoding) were drawn from the same distribution and find that it is less than 10^-10. We obtain independent confirmation of these findings using the method of detrended fluctuation analysis (DFA), which is designed to treat sequences with statistical heterogeneity, such as DNA's known mosaic structure ("patchiness") arising from the nonstationarity of nucleotide concentration. The near-perfect agreement between the two independent analysis methods, FFT and DFA, increases the confidence in the reliability of our conclusion.

  3. Replacing effective spectral radiance by temperature in occupational exposure limits to protect against retinal thermal injury from light and near IR radiation.

    PubMed

    Madjidi, Faramarz; Behroozy, Ali

    2014-01-01

    Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values in the region of 380-1400 nm. This article describes the development and implementation of a computer code to predict those temperatures, corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and apparent diameter of the source were inputs for the computer code. At the first stage, an infinite series was created for calculation of spectral radiance by integration with Planck's law. At the second stage for calculation of effective spectral radiance, the initial terms of this infinite series were selected and integration was performed by multiplying these terms by a weighting factor R(λ) in the wavelength region 380-1400 nm. At the third stage, using a computer code, the source temperature that can emit the same effective spectral radiance was found. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether the exposure to visible and NIR in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR.
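
    The three-stage procedure above (Planck radiance, weighted integration over 380-1400 nm, then inversion for temperature) can be sketched as follows. This is a hedged sketch, not the authors' code: the weighting function here is a flat placeholder standing in for the tabulated ACGIH R(lambda), and the inversion uses simple bisection, which is valid because blackbody radiance increases monotonically with temperature at every wavelength.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck_radiance(lam, T):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1.0)

def retinal_weight(lam):
    """Placeholder for the ACGIH burn-hazard weighting R(lambda);
    the real function is tabulated, this flat stand-in is hypothetical."""
    return 1.0

def effective_radiance(T, lo=380e-9, hi=1400e-9, n=2000):
    """Midpoint-rule integral of R(lambda) * L(lambda, T) over 380-1400 nm."""
    dlam = (hi - lo) / n
    return sum(retinal_weight(lo + (i + 0.5) * dlam)
               * planck_radiance(lo + (i + 0.5) * dlam, T)
               for i in range(n)) * dlam

def temperature_for_radiance(target, t_lo=500.0, t_hi=6000.0, tol=0.01):
    """Invert effective_radiance(T) by bisection (monotone in T)."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if effective_radiance(mid) < target:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

L_eff = effective_radiance(3000.0)          # forward calculation at 3000 K
T_rec = temperature_for_radiance(L_eff)     # should recover ~3000 K
```

    The round trip (temperature to effective radiance and back) is what lets the exposure limit be restated as a limiting source temperature.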

  4. Smooth Upgrade of Existing Passive Optical Networks With Spectral-Shaping Line-Coding Service Overlay

    NASA Astrophysics Data System (ADS)

    Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.

    2005-09-01

    A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
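
    The idea of passing a dc residual between processing windows can be illustrated with a toy polarity-inversion code. This is a hedged sketch, not the hierarchical code of the paper: each window is transmitted either as-is or complemented, whichever drives the running disparity toward zero, with one flag bit per window (overhead not modeled here) assumed to tell the receiver which choice was made.

```python
def dc_balance(bits, window=8):
    """Toy spectral-shaping step: per window, transmit the bits or their
    complement, whichever moves the running disparity (dc residual,
    +1 per one-bit and -1 per zero-bit) toward zero.
    Returns (encoded bits, inversion flags, final disparity)."""
    out, flags, disparity = [], [], 0
    for i in range(0, len(bits), window):
        win = bits[i:i + window]
        delta = sum(1 if b else -1 for b in win)
        invert = abs(disparity + delta) > abs(disparity - delta)
        if invert:
            win = [1 - b for b in win]
            delta = -delta
        disparity += delta
        out.extend(win)
        flags.append(int(invert))
    return out, flags, disparity

# A worst-case all-ones stream stays dc balanced after encoding:
enc, flags, resid = dc_balance([1] * 64, window=8)
```

    Bounding the running disparity suppresses the low-frequency spectral content of the encoded stream, which is the property the overlay code relies on to avoid interfering with existing services.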

  5. Estimations of Mo X-pinch plasma parameters on QiangGuang-1 facility by L-shell spectral analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jian; Qiu, Aici; State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, Xi'an 710024

    2013-08-15

    Plasma parameters of molybdenum (Mo) X-pinches on the 1-MA QiangGuang-1 facility were estimated by L-shell spectral analysis. X-ray radiation from the X-pinches had a pulse width of 1 ns, and its spectra in the 2–3 keV range were measured with a time-integrated X-ray spectrometer. Relative intensities of spectral features were derived by correcting for the spectral sensitivity of the spectrometer. With the open-source atomic code FAC (Flexible Atomic Code), ion structures and various atomic radiative-collisional rates for O-, F-, Ne-, Na-, Mg-, and Al-like ionization stages were calculated, and synthetic spectra were constructed at given plasma parameters. By fitting the measured spectra with the modeled ones, the Mo X-pinch plasmas on the QiangGuang-1 facility were found to have an electron density of about 10^21 cm^-3 and an electron temperature of about 1.2 keV.

  6. Rocket-Plume Spectroscopy Simulation for Hydrocarbon-Fueled Rocket Engines

    NASA Technical Reports Server (NTRS)

    Tejwani, Gopal D.

    2010-01-01

    The UV-Vis spectroscopic system for plume diagnostics monitors rocket engine health by using several analytical tools developed at Stennis Space Center (SSC), including the rocket plume spectroscopy simulation code (RPSSC), to identify and quantify the alloys from the metallic elements observed in engine plumes. Because the hydrocarbon-fueled rocket engine is likely to contain C2, CO, CH, CN, and NO in addition to OH and H2O, the relevant electronic bands of these molecules in the spectral range of 300 to 850 nm in the RPSSC have been included. SSC incorporated several enhancements and modifications to the original line-by-line spectral simulation computer program implemented for plume spectral data analysis and quantification in 1994. These changes made the program applicable to the Space Shuttle Main Engine (SSME) and the Diagnostic Testbed Facility Thruster (DTFT) exhaust plume spectral data. Modifications included updating the molecular and spectral parameters for OH, adding spectral parameter input files optimized for the 10 elements of interest in the spectral range from 320 to 430 nm and linking the output to graphing and analysis packages. Additionally, the ability to handle the non-uniform wavelength interval at which the spectral computations are made was added. This allowed a precise superposition of wavelengths at which the spectral measurements have been made with the wavelengths at which the spectral computations are done by using the line-by-line (LBL) code. To account for hydrocarbon combustion products in the plume, which might interfere with detection and quantification of metallic elements in the spectral region of 300 to 850 nm, the spectroscopic code has been enhanced to include the carbon-based combustion species of C2, CO, and CH. 
In addition, CN and NO have spectral bands in the 300 to 850 nm region and, while these molecules are not direct products of hydrocarbon-oxygen combustion systems, they can show up if nitrogen or a nitrogen compound is present as an impurity in the propellants, and/or they can form in the boundary layer as a result of interaction of the hot plume with the atmosphere during ground testing of engines. Ten additional electronic band systems of these five molecules have been included in the code. A comprehensive literature search was conducted to obtain the most accurate values for the molecular and spectral parameters, including Franck-Condon factors and electronic transition moments, for all ten band systems. For each elemental transition in the RPSSC, six spectral parameters - Doppler-broadened line width at half-height, pressure-broadened line width at half-height, electronic multiplicity of the upper state, electronic term energy of the upper state, Einstein transition probability coefficient, and the atomic line center - are required. Input files have been created for the ten elements Ni, Fe, Cr, Co, Cu, Ca, Mn, Al, Ag, and Pd, which retain only relatively moderate to strong transitions in the 300 to 430 nm spectral range for each element. The number of transitions in the input files is 68 for Ni; 148 for Fe; 6 for Cr; 87 for Co; 1 for Ca; 3 for Mn; 2 each for Cu, Al, and Ag; and 11 for Pd.
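
    The role of the line-center and Doppler line-width parameters in a line-by-line calculation can be illustrated with a minimal synthesis. This is a hedged sketch, not the RPSSC itself: the line list below is hypothetical, pressure (Lorentzian) broadening is omitted for brevity, and each line contributes a Gaussian profile of given half-width at half-height, evaluated on a non-uniform wavelength grid like the one the enhanced code supports.

```python
import math

def gaussian_profile(lam, center, hwhm):
    """Doppler-broadened line shape, unit peak, half-width at half-height hwhm."""
    return math.exp(-math.log(2.0) * ((lam - center) / hwhm) ** 2)

def synthesize(grid_nm, lines):
    """Line-by-line sum of Gaussian lines; 'lines' is a list of
    (center_nm, hwhm_nm, strength) triples (hypothetical values)."""
    return [sum(s * gaussian_profile(lam, c, w) for c, w, s in lines)
            for lam in grid_nm]

# Non-uniform grid: coarse from 320 nm, dense near the lines (both hypothetical).
grid = [320 + 0.1 * i for i in range(150)] + [335 + 0.01 * i for i in range(1000)]
grid.sort()
line_list = [(338.0, 0.02, 1.0), (341.5, 0.02, 0.4)]   # hypothetical lines
spectrum = synthesize(grid, line_list)
peak_lam = grid[max(range(len(grid)), key=lambda i: spectrum[i])]
```

    Superposing the computation grid on the measurement wavelengths, as the abstract describes, amounts to evaluating this sum directly at the measured points rather than interpolating from a uniform grid.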

  7. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on focal plane array (FPA) detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. Using DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD allows not only collecting a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also designing the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions from noisy compressed measurements demonstrate the benefit of the designed coded apertures, in addition to the improvement in reconstruction quality obtained by the use of side information.

  8. Analysis of stimulated Raman backscatter and stimulated Brillouin backscatter in experiments performed on SG-III prototype facility with a spectral analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Liang; Zhao, Yiqing; Hu, Xiaoyan

    2014-07-15

    Experiments observing stimulated Raman backscatter (SRS) and stimulated Brillouin backscatter (SBS) in a Hohlraum were performed on the Shenguang-III (SG-III) prototype facility for the first time in 2011. In this paper, the relevant experimental results are analyzed for the first time with a one-dimensional spectral analysis code, which was developed to study the coexisting SRS and SBS processes under Hohlraum plasma conditions. Spectral features of the backscattered light are discussed for different plasma parameters. In the case of the empty-Hohlraum experiments, simulation results indicate that SBS, which grows fast in the energy deposition region near the Hohlraum wall, is the dominant instability process. The time-resolved spectra of SRS and SBS are numerically obtained and agree with the experimental observations. For the gas-filled Hohlraum experiments, simulation results show that SBS grows fastest in the Au plasma and amplifies convectively in the C5H12 gas, whereas SRS mainly grows in the high-density region of the C5H12 gas. Gain spectra and the spectra of backscattered light are simulated along the ray path, which clearly show the locations where the intensity of scattered light at a given wavelength increases. This work is helpful for understanding the observed spectral features of SRS and SBS. The experiments and the accompanying analysis provide references for ignition target design in the future.

  9. A CLOUDY/XSPEC Interface

    NASA Technical Reports Server (NTRS)

    Porter, R. L.; Ferland, G. J.; Kraemer, S. B.; Armentrout, B. K.; Arnaud, K. A.; Turner, T. J.

    2007-01-01

    We discuss new functionality of the spectral simulation code CLOUDY which allows the user to calculate grids with one or more initial parameters varied and formats the predicted spectra in the standard FITS format. These files can then be imported into the X-ray spectral analysis software XSPEC and used as theoretical models for observations. We present and verify a test case. Finally, we consider a few observations and discuss our results.

  10. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities aimed at developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  11. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS scheme recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
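
    The shaping idea can be sketched numerically. This is a hedged illustration of the Maxwell-Boltzmann amplitude distribution that PAS uses, not the full PAS encoder (which also involves a distribution matcher and the FEC systematic bits): lower-energy amplitudes are sent more often, trading entropy (bits per amplitude) against average symbol energy; the shaping parameter nu below is a hypothetical value.

```python
import math

def maxwell_boltzmann(amplitudes, nu):
    """P(a) proportional to exp(-nu * a^2) over the amplitude levels."""
    w = [math.exp(-nu * a * a) for a in amplitudes]
    z = sum(w)
    return [x / z for x in w]

def entropy_bits(p):
    """Shannon entropy in bits per amplitude."""
    return -sum(x * math.log2(x) for x in p if x > 0)

amps = [1, 3, 5, 7]                    # amplitude levels of one 16-QAM dimension
p_uniform = [0.25] * 4
p_shaped = maxwell_boltzmann(amps, nu=0.05)

energy_uniform = sum(p * a * a for p, a in zip(p_uniform, amps))
energy_shaped = sum(p * a * a for p, a in zip(p_shaped, amps))
rate_shaped = entropy_bits(p_shaped)   # below 2 bits/amplitude, but less energy
```

    Sweeping nu traces out the rate-energy trade-off; PAS picks the operating point that maximizes spectral efficiency for the channel at hand.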

  12. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
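
    The overhead-free re-ordering trick can be sketched as follows. This is a hedged sketch of the idea, not the paper's exact predictor: each pixel's bands are differenced in the order given by its left neighbor's band values, and since the decoder has already reconstructed that neighbor, no band-order side information is needed. The synthetic cube below is contrived so that spectrally adjacent bands differ a lot while rank-adjacent bands differ little.

```python
import numpy as np

def reordered_residuals(cube):
    """For each pixel, sort its spectral bands in the order given by the
    spatially adjacent (left) pixel, then difference consecutive bands in
    that order. The decoder sees the same neighbor, so the ordering costs
    no side information."""
    h, w, b = cube.shape
    res = np.zeros((h, w - 1, b - 1))
    for i in range(h):
        for j in range(1, w):
            order = np.argsort(cube[i, j - 1])   # neighbor's band ordering
            res[i, j - 1] = np.diff(cube[i, j][order])
    return res

# Synthetic cube: a zigzag base spectrum scaled by a smooth brightness map,
# plus small noise, so neighboring pixels share spectral relationships.
rng = np.random.default_rng(1)
base = np.array([0.0, 10, 1, 9, 2, 8, 3, 7, 4, 6])
bright = 1.0 + 0.1 * rng.random((16, 16))
cube = bright[:, :, None] * base[None, None, :] + 0.01 * rng.random((16, 16, 10))

plain = np.abs(np.diff(cube, axis=2)).mean()          # band-order differencing
reordered = np.abs(reordered_residuals(cube)).mean()  # neighbor-order differencing
```

    Smaller residual magnitudes translate directly into fewer bits after entropy coding, which is the mechanism behind the reported compression gains.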

  13. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  14. Methyl Group Internal Rotation in the Pure Rotational Spectrum of 1,1-Difluoroacetone

    NASA Astrophysics Data System (ADS)

    Grubbs, G. S., II; Cooke, S. A.; Groner, P.

    2011-06-01

    We have used chirped pulse Fourier transform microwave spectroscopy to record the pure rotational spectrum of the title molecule. The spectrum was doubled owing to the internal rotation of the methyl group. The spectrum has been assigned, and two approaches to the spectral analysis have been performed. In the first case, the A and E components were fit separately using a principal axis method with the SPFIT code of Pickett. In the second case, the A and E states were fit simultaneously using the ERHAM code. For a satisfactory analysis of the spectral data, it has been found that the choice of Hamiltonian reduction, i.e., Watson A or S, is very important. The barrier to internal rotation has been determined to be 261.1(8) cm-1, and it will be compared to that of acetone and other halogenated acetone species recently studied in our laboratory.

  15. Entracking as a Brain Stem Code for Pitch: The Butte Hypothesis.

    PubMed

    Joris, Philip X

    2016-01-01

    The basic nature of pitch is much debated. A robust code for pitch exists in the auditory nerve in the form of an across-fiber pooled interspike interval (ISI) distribution, which resembles the stimulus autocorrelation. An unsolved question is how this representation can be "read out" by the brain. A new view is proposed in which a known brain-stem property plays a key role in the coding of periodicity, which I refer to as "entracking", a contraction of "entrained phase-locking". It is proposed that a scalar rather than vector code of periodicity exists by virtue of coincidence detectors that code the dominant ISI directly into spike rate through entracking. Perfect entracking means that a neuron fires one spike per stimulus-waveform repetition period, so that firing rate equals the repetition frequency. Key properties are invariance with SPL and generalization across stimuli. The main limitation in this code is the upper limit of firing (~ 500 Hz). It is proposed that entracking provides a periodicity tag which is superimposed on a tonotopic analysis: at low SPLs and fundamental frequencies > 500 Hz, a spectral or place mechanism codes for pitch. With increasing SPL the place code degrades but entracking improves and first occurs in neurons with low thresholds for the spectral components present. The prediction is that populations of entracking neurons, extended across characteristic frequency, form plateaus ("buttes") of firing rate tied to periodicity.
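
    The "perfect entracking" limit described above can be sketched with a toy simulation; all parameter values are hypothetical and this is an illustration of the definition, not a biophysical model: a neuron fires one (slightly jittered) spike per stimulus-waveform repetition period, so its firing rate equals the repetition frequency and the interspike intervals cluster at the period.

```python
import random

def entracking_spikes(f0, duration, jitter, seed=0):
    """Toy perfect-entracking neuron: one spike per stimulus period,
    with small Gaussian timing jitter (hypothetical parameters)."""
    rng = random.Random(seed)
    period = 1.0 / f0
    n = int(duration * f0)
    return [k * period + rng.gauss(0.0, jitter) for k in range(n)]

f0 = 200.0                        # repetition frequency, Hz (hypothetical)
spikes = entracking_spikes(f0, duration=1.0, jitter=1e-4)
rate = len(spikes) / 1.0          # equals f0 under perfect entracking
isis = [b - a for a, b in zip(spikes, spikes[1:])]
mean_isi = sum(isis) / len(isis)  # approximates the period 1/f0
```

    Pooling such interspike intervals across fibers is what produces the autocorrelation-like ISI distribution mentioned at the start of the abstract; the scalar rate code is the proposed readout of it.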

  16. Easily extensible unix software for spectral analysis, display, modification, and synthesis of musical sounds

    NASA Astrophysics Data System (ADS)

    Beauchamp, James W.

    2002-11-01

    Software has been developed which enables users to perform time-varying spectral analysis of individual musical tones or successions of them and to perform further processing of the data. The package, called sndan, is freely available in source code, uses EPS graphics for display, and is written in ANSI C for ease of code modification and extension. Two analyzers, a fixed-filter-bank phase vocoder ("pvan") and a frequency-tracking analyzer ("mqan"), constitute the analysis front end of the package. While pvan's output consists of continuous amplitudes and frequencies of harmonics, mqan produces disjoint "tracks." However, another program extracts a fundamental frequency and separates harmonics from the tracks, resulting in a continuous harmonic output. "monan" is a program used to display harmonic data in a variety of formats, perform various spectral modifications, and perform additive resynthesis of the harmonic partials, including possible pitch-shifting and time-scaling. Sounds can also be synthesized according to a musical score using a companion synthesis language, Music 4C. Several other programs in the sndan suite can be used for specialized tasks, such as signal display and editing. Applications of the software include producing specialized sounds for music compositions or psychoacoustic experiments, or serving as a basis for developing new synthesis algorithms.

  17. Analysis of soft x-ray emission spectra of laser-produced dysprosium, erbium and thulium plasmas

    NASA Astrophysics Data System (ADS)

    Sheil, John; Dunne, Padraig; Higashiguchi, Takeshi; Kos, Domagoj; Long, Elaine; Miyazaki, Takanori; O'Reilly, Fergal; O'Sullivan, Gerard; Sheridan, Paul; Suzuki, Chihiro; Sokell, Emma; White, Elgiva; Kilbane, Deirdre

    2017-03-01

    Soft x-ray emission spectra of dysprosium, erbium and thulium ions created in laser-produced plasmas were recorded with a flat-field grazing-incidence spectrometer in the 2.5-8 nm spectral range. The ions were produced using an Nd:YAG laser of 7 ns pulse duration and the spectra were recorded at various power densities. The experimental spectra were interpreted with the aid of the Cowan suite of atomic structure codes and the flexible atomic code. At wavelengths above 5.5 nm the spectra are dominated by overlapping n = 4 - n = 4 unresolved transition arrays from adjacent ion stages. Below 6 nm, n = 4 - n = 5 transitions also give rise to a series of interesting overlapping spectral features.

  18. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
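
    The force-error minimization approach can be illustrated on a single degree of freedom; a hedged sketch, not the finite-element implementation: for a nonlinear restoring force f(x) and a zero-mean Gaussian response x, minimizing E[(f(x) - k_eq*x)^2] over k_eq gives k_eq = E[x f(x)] / E[x^2], which for a Duffing spring f(x) = k*x + eps*x^3 reduces analytically to k + 3*eps*sigma^2 (using E[x^4] = 3*sigma^4).

```python
import numpy as np

def equivalent_stiffness(samples, force):
    """Force-error minimization: the k_eq minimizing E[(f(x) - k_eq*x)^2]."""
    x = np.asarray(samples)
    return np.mean(x * force(x)) / np.mean(x * x)

k, eps, sigma = 1.0, 0.5, 1.0                # hypothetical Duffing parameters
duffing = lambda x: k * x + eps * x**3

rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, 200_000)          # Gaussian response assumption
k_eq = equivalent_stiffness(x, duffing)      # theory: k + 3*eps*sigma**2 = 2.5
```

    In the paper's setting the same minimization is carried out with the nonlinear stiffness evaluated through the commercial finite element code's static solver rather than a closed-form f(x).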

  19. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate the radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions.
The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work on which is under way.
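
    The PCA speedup can be sketched with a toy problem in the spirit of the method, though not UPCART itself: here a contrived "expensive" model stands in for multiple-scattering RT and a first-order "cheap" model for two-stream, the optical states are clustered along one principal direction (the redundancy PCA exploits), and a log correction factor linear in the leading PC score is fitted from expensive evaluations at only three points (the mean state and one step along the PC).

```python
import numpy as np

rng = np.random.default_rng(0)

# Contrived stand-ins: "expensive" multiple-scattering-like model and a
# "cheap" first-order approximation of it.
a = np.full(6, 0.1)
expensive = lambda X: np.exp(X @ a)
cheap = lambda X: 1.0 + X @ a

# Optical states clustered along one principal direction (hypothetical data).
mean = np.full(6, 2.0 / 3.0)
v = np.ones(6) / np.sqrt(6.0)                 # leading principal component
scores = rng.normal(size=400)
X = mean + np.outer(scores, v) + 0.01 * rng.normal(size=(400, 6))

# Expensive evaluations ONLY at mean and mean +/- one PC step, then a
# log correction factor linear in the PC score (central difference).
pts = np.array([mean, mean + v, mean - v])
log_ratio = np.log(expensive(pts) / cheap(pts))
c0, c1 = log_ratio[0], 0.5 * (log_ratio[1] - log_ratio[2])

t = (X - mean) @ v                            # PC scores of all states
f_pca = cheap(X) * np.exp(c0 + c1 * t)        # corrected cheap model

err_cheap = np.mean(np.abs(cheap(X) / expensive(X) - 1.0))
err_pca = np.mean(np.abs(f_pca / expensive(X) - 1.0))
```

    Three expensive calls correct hundreds of cheap ones, which is the cost structure that makes the approach attractive for retrieval loops.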

  20. Spectral and Atomic Physics Analysis of Xenon L-Shell Emission From High Energy Laser Produced Plasmas

    NASA Astrophysics Data System (ADS)

    Thorn, Daniel; Kemp, G. E.; Widmann, K.; Benjamin, R. D.; May, M. J.; Colvin, J. D.; Barrios, M. A.; Fournier, K. B.; Liedahl, D.; Moore, A. S.; Blue, B. E.

    2016-10-01

    The spectrum of the L-shell (n = 2) radiation in mid- to high-Z ions is useful for probing plasma conditions in the multi-keV temperature range. Xenon in particular, with its L-shell radiation centered around 4.5 keV, is copiously produced from plasmas with electron temperatures in the 5-10 keV range. We report on a series of time-resolved L-shell Xe spectra measured with the NIF X-ray Spectrometer (NXS) in high-energy, long-pulse (>10 ns) laser-produced plasmas at the National Ignition Facility. The resolving power of the NXS is sufficiently high (E/ΔE > 100) in the 4-5 keV spectral band that the emission from different charge states is observed. An analysis of the time-resolved L-shell spectrum of Xe is presented, along with spectral modeling by detailed radiation transport and atomic physics from the SCRAM code and comparison with predictions from HYDRA, a radiation-hydrodynamics code with inline atomic physics from CRETIN. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  1. Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code

    NASA Astrophysics Data System (ADS)

    Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2016-06-01

    We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective one body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent bam code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.

  2. Synchrotron Radiation Workshop (SRW)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubar, O.; Elleaume, P.

    2013-03-01

    "Synchrotron Radiation Workshop" (SRW) is a physical optics computer code for calculation of detailed characteristics of Synchrotron Radiation (SR) generated by relativistic electrons in magnetic fields of arbitrary configuration and for simulation of the radiation wavefront propagation through optical systems of beamlines. Frequency-domain near-field methods are used for the SR calculation, and the Fourier-optics based approach is generally used for the wavefront propagation simulation. The code enables both fully- and partially-coherent radiation propagation simulations in steady-state and in frequency-/time-dependent regimes. With these features, the code has already proven its utility for a large number of applications in the infrared, UV, soft and hard X-ray spectral ranges, in such important areas as analysis of spectral performances of new synchrotron radiation sources, optimization of user beamlines, development of new optical elements, source and beamline diagnostics, and even complete simulation of SR-based experiments. Besides the SR applications, the code can be efficiently used for various simulations involving conventional lasers and other sources. SRW versions interfaced to Python and to IGOR Pro (WaveMetrics), as well as a cross-platform library with a C API, are available.

  3. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree-geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms, and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from a basic assumption that both codes make in their calculations: each assumes a "free" electron model for Compton interactions. This assumption underestimates the results and invalidates direct comparison of predicted and experimental spectra.

  4. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra, from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that reliable spectral estimates can be obtained up to about five times the mean sampling rate.
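
    The correlation-based slotting technique can be sketched directly. This is a hedged illustration with hypothetical LV-style data, not the FORTRAN implementation: for randomly (here uniformly) sampled times, all pair products v_i*v_j are averaged within lag slots of fixed width, yielding an autocorrelation estimate that a cosine transform would then turn into a power spectral density.

```python
import numpy as np

def slotted_acf(times, values, max_lag, n_slots):
    """Slotting technique for randomly sampled data: average the product
    v_i * v_j over all pairs whose time difference falls in each lag slot."""
    dt = np.abs(times[:, None] - times[None, :])     # all pairwise lags
    prod = values[:, None] * values[None, :]
    edges = np.linspace(0.0, max_lag, n_slots + 1)
    acf = np.empty(n_slots)
    for k in range(n_slots):
        mask = (dt >= edges[k]) & (dt < edges[k + 1])
        acf[k] = prod[mask].mean()
    return 0.5 * (edges[:-1] + edges[1:]), acf       # slot centers, estimate

# Hypothetical LV-style data: a 10 Hz sine sampled at random times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 40.0, 2000))            # ~50 samples/s on average
u = np.sin(2 * np.pi * 10.0 * t)
tau, acf = slotted_acf(t, u, max_lag=0.2, n_slots=100)
theory = 0.5 * np.cos(2 * np.pi * 10.0 * tau)        # ACF of a unit sine
```

    Because pair lags are not tied to a uniform sampling clock, the slotted estimate can resolve correlation structure at lags finer than the mean sampling interval, which is what allows spectra beyond the mean sampling rate.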

  5. Theoretical analysis of the performance of code division multiple access communications over multimode optical fiber channels. Part 1: Transmission and detection

    NASA Astrophysics Data System (ADS)

    Walker, Ernest L.

    1994-05-01

    This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha -profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
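    As an illustration of the Gold-sequence spreading referenced above, a Gold family can be generated from a preferred pair of m-sequences. This sketch is our own, not the paper's code; the degree-5 preferred pair (octal 45 and 75, i.e. x^5+x^2+1 and x^5+x^4+x^3+x^2+1) is a standard textbook choice whose periodic cross-correlation takes only the three values -1, 7, and -9:

```python
def m_sequence(lags, n, length):
    """Generate an m-sequence from the GF(2) recurrence
    a[k] = XOR of a[k - lag] over the given lags (any nonzero seed works)."""
    a = [1] * n
    while len(a) < length:
        bit = 0
        for lag in lags:
            bit ^= a[-lag]
        a.append(bit)
    return a

def bipolar(bits):
    """Map {0,1} chips to {+1,-1} for correlation arithmetic."""
    return [1 - 2 * b for b in bits]

def periodic_xcorr(x, y):
    """Periodic (cyclic) cross-correlation for all shifts."""
    n = len(x)
    return [sum(x[i] * y[(i + s) % n] for i in range(n)) for s in range(n)]

# Preferred pair of degree-5 primitive polynomials:
u = m_sequence((3, 5), 5, 31)        # x^5 + x^2 + 1 (octal 45)
v = m_sequence((1, 2, 3, 5), 5, 31)  # x^5 + x^4 + x^3 + x^2 + 1 (octal 75)

# The Gold family: u, v, and u XOR every cyclic shift of v -> 33 codes.
gold = [u, v] + [[a ^ b for a, b in zip(u, v[s:] + v[:s])] for s in range(31)]

xc = periodic_xcorr(bipolar(u), bipolar(v))
```

The balance property of each m-sequence (16 ones, 15 zeros per period) and the three-valued cross-correlation are what bound multiple-access interference in such a system.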

  6. Comparative Modelling of the Spectra of Cool Giants

    NASA Technical Reports Server (NTRS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  7. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for military and industrial applications. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator that can offer the best display of the content depending on the performed task, e.g. providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to the combined multi-frame and multi-band processing.

  8. Post-analysis report on Chesapeake Bay data processing. [spectral analysis and recognition computer signature extension

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1972-01-01

    The additional processing performed on data collected over the Rhode River Test Site and Forestry Site in November 1970 is reported. The techniques and procedures used to obtain the processed results are described. Thermal data collected over three approximately parallel lines of the site were contoured, and the results color coded, for the purpose of delineating important scene constituents and to identify trees attacked by pine bark beetles. Contouring work and histogram preparation are reviewed and the important conclusions from the spectral analysis and recognition computer (SPARC) signature extension work are summarized. The SPARC setup and processing records are presented and recommendations are made for future data collection over the site.

  9. Analysis of hybrid subcarrier multiplexing of OCDMA based on single photodiode detection

    NASA Astrophysics Data System (ADS)

    Ahmad, N. A. A.; Junita, M. N.; Aljunid, S. A.; Rashidi, C. B. M.; Endut, R.

    2017-11-01

    This paper analyzes the performance of subcarrier multiplexing (SCM) of spectral amplitude coding optical code multiple access (SAC-OCDMA) by applying a Recursive Combinatorial (RC) code based on single photodiode detection (SPD). SPD is used in the receiver to reduce the effect of multiple access interference (MAI), which is the dominant noise in incoherent SAC-OCDMA systems. Results indicate that SCM OCDMA network performance can be improved by using lower data rates and a higher code weight. The total number of users can also be increased by using lower data rates and a higher number of subcarriers.

  10. Physical and Hydrological Meaning of the Spectral Information from Hydrodynamic Signals at Karst Springs

    NASA Astrophysics Data System (ADS)

    Dufoyer, A.; Lecoq, N.; Massei, N.; Marechal, J. C.

    2017-12-01

    Physics-based modeling of karst systems remains almost impossible without sufficiently accurate information about their inner physical characteristics. Usually, the only available hydrodynamic information is the flow rate at the karst outlet. Numerous works in the past decades have used and proven the usefulness of time-series analysis and spectral techniques applied to spring flow, precipitation, or even physico-chemical parameters for interpreting karst hydrological functioning. However, identifying or interpreting the physical features of karst systems that control the statistical or spectral characteristics of spring flow variations is still challenging, not to say sometimes controversial. The main objective of this work is to determine how the statistical and spectral characteristics of the hydrodynamic signal at karst springs can be related to inner physical and hydraulic properties. To address this issue, we undertake an empirical approach based on the use of both distributed and physics-based models and on synthetic system responses. The first step of the research is to conduct a sensitivity analysis of time-series/spectral methods to karst hydraulic and physical properties. For this purpose, forward modeling of flow through several simple, constrained, synthetic cases in response to precipitation is undertaken. It allows us to quantify how the statistical and spectral characteristics of flow at the outlet are sensitive to changes (i) in conduit geometries and (ii) in hydraulic parameters of the system (matrix/conduit exchange rate, matrix hydraulic conductivity, and storativity). The flow differential equations are solved with MARTHE, a computer code developed by BRGM that allows karst conduits to be modeled. From signal processing of the simulated spring responses, using Fourier series and multi-resolution analysis, we hope to determine whether specific frequencies are always modified. We also hope to quantify, with auto-correlation analysis, which parameters are the most influential: first results suggest that conduit conductivity causes larger variations than the matrix/conduit exchange rate. Future steps will use another computer code, based on a double-continuum approach that allows turbulent conduit flow, and will model a natural system.
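    A toy version of the sensitivity experiment described above can be sketched by filtering the same white recharge input through linear reservoirs with different storage constants and comparing the output spectra. This is our own minimal illustration, not the MARTHE model; the reservoir constant `k` merely stands in for the storage properties being varied:

```python
import cmath
import random

def linear_reservoir(recharge, k, dt=1.0):
    """Single linear reservoir dQ/dt = (R - Q)/k: the simplest proxy for
    matrix storage smoothing a precipitation signal into spring flow."""
    q, out = 0.0, []
    for r in recharge:
        q += dt * (r - q) / k
        out.append(q)
    return out

def power_spectrum(x):
    """Naive DFT periodogram (O(N^2)), adequate for short synthetic series."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    spec = []
    for f in range(n // 2):
        s = sum(xc[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

random.seed(0)
rain = [random.random() for _ in range(128)]        # white recharge input
slow = power_spectrum(linear_reservoir(rain, k=20)) # storage-dominated
fast = power_spectrum(linear_reservoir(rain, k=2))  # conduit-dominated
```

The storage-dominated system suppresses high frequencies far more strongly, which is exactly the kind of spectral signature the sensitivity analysis seeks to relate to hydraulic parameters.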

  11. Design framework for a spectral mask for a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Berkner, Kathrin; Shroff, Sapna A.

    2012-01-01

    Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, which capture directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few designs address capturing spectral information of the scene. It has been demonstrated that modifying a plenoptic camera with an array of different spectral filters inserted in the pupil plane of the main lens samples the spectral dimension of the plenoptic function. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information on a single sensor. Little work has been performed so far on how diffraction effects and aberrations of the optical system affect the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.

  12. SpectralNET – an application for spectral graph analysis and visualization

    PubMed Central

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-01-01

    Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from . Source code is available upon request. PMID:16236170

  13. SpectralNET--an application for spectral graph analysis and visualization.

    PubMed

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-10-19

    Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.
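    Two of the per-vertex metrics SpectralNET reports, degree distribution and clustering coefficient, are simple to compute. The sketch below is our own minimal pure-Python illustration of those definitions, not SpectralNET source code (the Laplacian eigenvector views it also offers would additionally require a numerical linear algebra library):

```python
from collections import defaultdict

def clustering_coefficient(adj, v):
    """Fraction of a vertex's neighbour pairs that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def degree_distribution(adj):
    """Histogram: degree -> number of vertices with that degree."""
    dist = defaultdict(int)
    for v in adj:
        dist[len(adj[v])] += 1
    return dict(dist)

# Tiny example network: a triangle with one pendant vertex.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

cc_c = clustering_coefficient(adj, "c")  # c's nbrs {a,b,d}; only (a,b) linked
```

Vertex "c" has three neighbours of which one pair is connected, giving a clustering coefficient of 1/3, and the degree histogram is {1: 1, 2: 2, 3: 1}.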

  14. Spectral analysis of the turbulent mixing of two fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinkamp, M.J.

    1996-02-01

    The authors describe a spectral approach to the investigation of fluid instability, generalized turbulence, and the interpenetration of fluids across an interface. The technique also applies to a single fluid with large variations in density. Departures of fluctuating velocity components from the local mean are far subsonic, but the mean Mach number can be large. Validity of the description is demonstrated by comparisons with experiments on turbulent mixing due to the late stages of Rayleigh-Taylor instability, when the dynamics become approximately self-similar in response to a constant body force. Generic forms for anisotropic spectral structure are described and used as a basis for deriving spectrally integrated moment equations that can be incorporated into computer codes for scientific and engineering analyses.

  15. Connection anonymity analysis in coded-WDM PONs

    NASA Astrophysics Data System (ADS)

    Sue, Chuan-Ching

    2008-04-01

    A coded wavelength division multiplexing passive optical network (WDM PON) is presented for fiber to the home (FTTH) systems to protect against eavesdropping. The proposed scheme applies spectral amplitude coding (SAC) with a unipolar maximal-length sequence (M-sequence) code matrix to generate a specific signature address (coding) and to retrieve its matching address codeword (decoding) by exploiting the cyclic properties inherent in array waveguide grating (AWG) routers. In addition to ensuring the confidentiality of user data, the proposed coded-WDM scheme is also a suitable candidate for the physical layer with connection anonymity. Under the assumption that the eavesdropper applies a photo-detection strategy, it is shown that the coded WDM PON outperforms the conventional TDM PON and WDM PON schemes in terms of a higher degree of connection anonymity. Additionally, the proposed scheme allows the system operator to partition the optical network units (ONUs) into appropriate groups so as to achieve a better degree of anonymity.
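    The balanced SAC detection described above can be illustrated with a length-7 M-sequence code matrix. This sketch of cyclic codewords and complementary-subtraction decoding is our own simplification of the scheme (the code length, seed, and variable names are ours), not the paper's implementation:

```python
def m_sequence(lags, n, length):
    """M-sequence from the GF(2) recurrence a[k] = XOR of a[k - lag]."""
    a = [1] * n
    while len(a) < length:
        bit = 0
        for lag in lags:
            bit ^= a[-lag]
        a.append(bit)
    return a

def cyclic_shift(c, s):
    return c[s:] + c[:s]

base = m_sequence((2, 3), 3, 7)  # x^3 + x + 1  ->  1 1 1 0 0 1 0
codes = [cyclic_shift(base, s) for s in range(7)]  # the cyclic code matrix

def balanced_correlation(code, received):
    """SAC balanced detection: subtract the complementary correlation so
    the interference from every other (shifted) codeword cancels exactly."""
    direct = sum(c * r for c, r in zip(code, received))
    comp = sum((1 - c) * r for c, r in zip(code, received))
    return direct - comp

# Spectral superposition of users 0 and 3 on the shared medium:
received = [a + b for a, b in zip(codes[0], codes[3])]
```

Decoding with codewords 0 or 3 yields the full weight difference (4), while any other codeword yields exactly 0: the cross-correlation between distinct shifts equals the complementary correlation, so MAI cancels.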

  16. Digital Equivalent Data System for XRF Labeling of Objects

    NASA Technical Reports Server (NTRS)

    Schramm, Harry F.; Kaiser, Bruce

    2005-01-01

    A digital equivalent data system (DEDS) is a system for identifying objects by means of the x-ray fluorescence (XRF) spectra of labeling elements that are encased in or deposited on the objects. As such, a DEDS is a revolutionary new major subsystem of an XRF system. A DEDS embodies the means for converting the spectral data output of an XRF scanner to an ASCII alphanumeric or barcode label that can be used to identify (or verify the assumed or apparent identity of) an XRF-scanned object. A typical XRF spectrum of interest contains peaks at photon energies associated with specific elements on the Periodic Table (see figure). The height of each spectral peak above the local background spectral intensity is proportional to the relative abundance of the corresponding element. Alphanumeric values are assigned to the relative abundances of the elements. Hence, if an object contained labeling elements in suitably chosen proportions, an alphanumeric representation of the object could be extracted from its XRF spectrum. The mixture of labeling elements, and the convention for reading the XRF spectrum, would be compatible with one of the labeling conventions now used for bar codes and binary matrix patterns (essentially, two-dimensional bar codes that resemble checkerboards). A further benefit of such compatibility is that it would enable the conversion of the XRF spectral output to a bar- or matrix-coded label, if needed. In short, a process previously used only for material composition analysis has been reapplied to the world of identification. This new level of verification is now being used for "authentication."

  17. Heterodyne detection using spectral line pairing for spectral phase encoding optical code division multiple access and dynamic dispersion compensation.

    PubMed

    Yang, Yi; Foster, Mark; Khurgin, Jacob B; Cooper, A Brinton

    2012-07-30

    A novel coherent optical code-division multiple access (OCDMA) scheme is proposed that uses spectral line pairing to generate signals suitable for heterodyne decoding. Both signal and local reference are transmitted via a single optical fiber, and a simple balanced receiver performs sourceless heterodyne detection, canceling speckle noise and multiple-access interference (MAI). To validate the idea, a fully loaded, 16-user phase-encoded system is simulated. Effects of fiber dispersion on system performance are studied as well. Both second- and third-order dispersion management are achieved by using a spectral phase encoder to adjust the phase shifts of spectral components at the optical network unit (ONU).

  18. Storage and retrieval of mass spectral information

    NASA Technical Reports Server (NTRS)

    Hohn, M. E.; Humberston, M. J.; Eglinton, G.

    1977-01-01

    Computer handling of mass spectra serves two main purposes: the interpretation of the occasional, problematic mass spectrum, and the identification of the large number of spectra generated in the gas-chromatographic-mass spectrometric (GC-MS) analysis of complex natural and synthetic mixtures. Methods available fall into the three categories of library search, artificial intelligence, and learning machine. Optional procedures for coding, abbreviating and filtering a library of spectra minimize time and storage requirements. Newer techniques make increasing use of probability and information theory in accessing files of mass spectral information.

  19. VizieR Online Data Catalog: NLTE spectral analysis of white dwarf G191-B2B (Rauch+, 2013)

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Werner, K.; Bohlin, R.; Kruk, J. W.

    2013-08-01

    In the framework of the Virtual Observatory, the German Astrophysical Virtual Observatory developed the registered service TheoSSA. It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code. In case of the DA white dwarf G191-B2B, we demonstrate that the model reproduces not only its overall continuum shape but also the numerous metal lines exhibited in its ultraviolet spectrum. (3 data files).

  20. FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code

    DOE PAGES

    Hakel, Peter

    2016-10-01

    Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.

  1. FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code

    NASA Astrophysics Data System (ADS)

    Hakel, Peter

    2016-10-01

    We report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.

  2. SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
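    The binary spectral encoding used for fast signature matching can be illustrated in a few lines. Thresholding each band at the spectrum's own mean and matching by Hamming distance are our assumed simplifications of SPAM's algorithm, and the library signatures below are toy values, not real mineral spectra:

```python
def binary_encode(spectrum):
    """Binary spectral encoding: 1 where a band exceeds the spectrum's
    own mean, 0 elsewhere -- a compact, brightness-tolerant signature."""
    mean = sum(spectrum) / len(spectrum)
    return tuple(1 if s > mean else 0 for s in spectrum)

def best_match(pixel, library):
    """Nearest library signature by Hamming distance on the binary codes."""
    code = binary_encode(pixel)
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(library, key=lambda name: hamming(code, library[name]))

# Toy 8-band library of pre-encoded signatures (illustrative only).
library = {
    "kaolinite": (1, 1, 0, 0, 1, 1, 0, 0),
    "alunite":   (0, 0, 1, 1, 0, 0, 1, 1),
}
pixel = [0.9, 0.8, 0.2, 0.1, 0.85, 0.9, 0.15, 0.2]  # resembles kaolinite
match = best_match(pixel, library)
```

Because the codes are bit vectors, matching reduces to XOR-and-count, which is why this style of encoding supports high-speed matching and clustering.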

  3. SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.

  4. Noise suppression system of OCDMA with spectral/spatial 2D hybrid code

    NASA Astrophysics Data System (ADS)

    Matem, Rima; Aljunid, S. A.; Junita, M. N.; Rashidi, C. B. M.; Shihab Aqrab, Israa

    2017-11-01

    In this paper, we propose a novel 2D spectral/spatial hybrid code based on the 1D ZCC and 1D MD codes, both of which exhibit a zero cross-correlation property, and analyze the influence of optical noise sources such as phase-induced intensity noise (PIIN), shot noise, and thermal noise. The new code is shown to effectively mitigate PIIN and suppress MAI. Using the 2D ZCC/MD code, system performance can be improved and more simultaneous users supported compared with the 2D FCC/MDW and 2D DPDC codes.

  5. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical models, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their way of speaking, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must match the time, frequency, and amplitude resolution of hearing, and that continuous variations of the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformations and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.

  6. Variable weight spectral amplitude coding for multiservice OCDMA networks

    NASA Astrophysics Data System (ADS)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming, and online gaming has created demand for networks capable of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.

  7. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  8. GAME: GAlaxy Machine learning for Emission lines

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.

    2018-06-01

    We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet /optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code potential and limitations.
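    The supervised-regression idea underlying GAME can be sketched as follows, with a synthetic training set standing in for the photoionization-model library and a random forest standing in for the code's actual learner (the feature count, mapping, and model choice are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training set: emission-line intensities -> a physical property.
# GAME's real library comes from photoionization models, not this toy mapping.
X = rng.uniform(0.0, 1.0, size=(500, 4))          # 4 line intensities
y = X @ np.array([1.0, -0.5, 0.3, 0.2])           # hypothetical target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X[:5])                        # inferred property values
```

    Once trained, such a regressor returns property estimates in milliseconds per spectrum, which is the source of the short computational times the abstract highlights.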

  9. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

    Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.

  10. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is the usual measure of performance for intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and correspondingly strong compression.
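    The inter-plane decorrelation idea builds on the standard luminance/chrominance transform already used with JPEG for color images; a minimal sketch of that baseline transform (ITU-R BT.601 coefficients) is:

```python
import numpy as np

# Forward RGB -> YCbCr transform (ITU-R BT.601), the inter-plane
# decorrelation step used by baseline JPEG for color images.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    ycc = rgb @ M.T
    ycc[..., 1:] += 128.0   # center the chroma planes around 128
    return ycc

pixel = np.array([[255.0, 255.0, 255.0]])  # pure white
ycc = rgb_to_ycbcr(pixel)
# White maps to full luma (Y = 255) and neutral chroma (Cb = Cr = 128);
# most image energy concentrates in Y, so Cb/Cr compress harder.
```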

  11. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech, and its application to low-rate speech coding for storage and broadcasting, is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, event localization was performed using a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in decoded speech quality.

  12. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.

  13. Description and availability of the SMARTS spectral model for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Myers, Daryl R.; Gueymard, Christian A.

    2004-11-01

    The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions from the MODTRAN/LOWTRAN family of band-model radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct-beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface simplifies the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials-degradation applications. We describe the model's accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency, and the combined effects of spectral selectivity and varying atmospheric conditions, are briefly discussed.

  14. Research in computational fluid dynamics and analysis of algorithms

    NASA Technical Reports Server (NTRS)

    Gottlieb, David

    1992-01-01

    Recently, higher-order compact schemes have seen increasing use in direct numerical simulations (DNS) of the Navier-Stokes equations. Although they do not have the spatial resolution of spectral methods, they offer significant increases in accuracy over conventional second-order methods. They can be used on any smooth grid, and do not have an overly restrictive CFL dependence, as compared with the O(N^-2) CFL dependence observed in Chebyshev spectral methods on finite domains. In addition, they are generally more robust and less costly than spectral methods. The relative cost of higher-order schemes (accuracy weighed against physical and numerical cost) is a far more complex issue, depending ultimately on what features of the solution are sought and how accurately they must be resolved. In any event, further development of the underlying stability theory of these schemes is important. The approach of devising suitable boundary closures and then testing them with various stability techniques (such as finding the norm) is entirely the wrong approach when dealing with high-order methods. Very seldom are high-order boundary closures stable, making them difficult to isolate. An alternative approach is to begin with a norm which satisfies all the stability criteria for the hyperbolic system, and look for the boundary closure forms which match the norm exactly. This method was used recently by Strand to isolate stable boundary closure schemes for the explicit central fourth- and sixth-order schemes; the norm used was an energy norm mimicking the norm for the differential equations. Further research should be devoted to boundary conditions for high-order schemes in order to ensure that the results obtained are reliable. The compact fourth-order and sixth-order finite difference schemes have been incorporated into a code to simulate flow past circular cylinders; this code will serve as a verification of the full spectral codes. A detailed stability analysis by Carpenter (of the Fluid Mechanics Division) and Gottlieb gave analytic conditions for stability as well as asymptotic stability, and these have been incorporated into the code in the form of stable boundary conditions. The effects of cylinder rotation have been studied; the results differ from the known theoretical results, and we are in the middle of analyzing them. The effect of heating the cylinder on the shedding frequency has also been studied in detail using the above schemes: the shedding frequency was found to decrease when the wire was heated. Experimental work is being carried out to confirm this result.
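    The compact fourth-order scheme discussed above can be illustrated with the classical Padé formula on a periodic grid; the sketch below (not the authors' code, and with simple periodic rather than the paper's boundary closures) solves the resulting tridiagonal-like system directly:

```python
import numpy as np

def compact_deriv_periodic(f, h):
    """Fourth-order compact (Pade) first derivative on a periodic grid:
       f'_{i-1}/4 + f'_i + f'_{i+1}/4 = 3 (f_{i+1} - f_{i-1}) / (4 h)."""
    n = len(f)
    A = np.eye(n)
    rhs = np.zeros(n)
    for i in range(n):
        A[i, (i - 1) % n] = 0.25       # periodic tridiagonal coefficients
        A[i, (i + 1) % n] = 0.25
        rhs[i] = 3.0 * (f[(i + 1) % n] - f[(i - 1) % n]) / (4.0 * h)
    return np.linalg.solve(A, rhs)

# Differentiate sin(x) on a periodic grid; the result approximates cos(x)
# with fourth-order accuracy in the grid spacing.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
df = compact_deriv_periodic(np.sin(x), h)
err = np.max(np.abs(df - np.cos(x)))
```

    The implicit coupling of neighboring derivative values is what gives the scheme its near-spectral resolution at fourth-order cost.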

  15. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth-efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency while maintaining power efficiency equivalent to or better than that of BPSK. The relative simplicity of Galois field algebra versus the Viterbi algorithm, and the availability of high-speed commercial VLSI for block codes, indicate that LRBC using block codes is a desirable method for high-data-rate implementations.

  16. Finite-SNR analysis for partial relaying cooperation with channel coding and opportunistic relay selection

    NASA Astrophysics Data System (ADS)

    Vu, Thang X.; Duhamel, Pierre; Chatzinotas, Symeon; Ottersten, Bjorn

    2017-12-01

    This work studies the performance of a cooperative network which consists of two channel-coded sources, multiple relays, and one destination. To achieve high spectral efficiency, we assume that a single time slot is dedicated to relaying. Conventional network-coded-based cooperation (NCC) selects the best relay, which uses network coding to serve the two sources simultaneously. The bit error rate (BER) performance of NCC with channel coding, however, is still unknown. In this paper, we first study the BER of NCC via a closed-form expression and analytically show that NCC only achieves diversity of order two, regardless of the number of available relays and the channel code. Second, we propose a novel partial relaying-based cooperation (PARC) scheme to improve the system diversity in the finite signal-to-noise ratio (SNR) regime. In particular, closed-form expressions for the system BER and diversity order of PARC are derived as a function of the operating SNR value and the minimum distance of the channel code. We analytically show that the proposed PARC achieves full (instantaneous) diversity order in the finite SNR regime, given that an appropriate channel code is used. Finally, numerical results verify our analysis and demonstrate a large SNR gain of PARC over NCC in the SNR region of interest.

  17. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine were available and in accordance with the International Electrotechnical Commission 61400-13 guidelines, was identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and standard deviations along with the power spectral densities of selected turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and experimental measurements. These agreements are discussed in detail in this paper, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.

  18. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE PAGES

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan; ...

    2017-08-29

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine were available and in accordance with the International Electrotechnical Commission 61400-13 guidelines, was identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and standard deviations along with the power spectral densities of selected turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and experimental measurements. These agreements are discussed in detail in this paper, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.
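    The power-spectral-density comparison used in such validation studies can be illustrated with Welch's method on two synthetic load signals sharing a common response frequency (the signals, sample rate, and 1 Hz "structural" peak are invented for illustration):

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                  # sample rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Two synthetic "load" time series sharing a 1 Hz dominant response,
# standing in for a measured load and a simulated one.
load_a = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)
load_b = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)

f_a, psd_a = welch(load_a, fs=fs, nperseg=1024)
f_b, psd_b = welch(load_b, fs=fs, nperseg=1024)

# Both PSDs should peak at the shared response frequency.
peak_a = f_a[np.argmax(psd_a)]
peak_b = f_b[np.argmax(psd_b)]
```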

  19. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound gives the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and it may require a larger bitrate and entail a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation at a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is cast in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study applies the approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems such as active listening of music, source separation, and realistic sound transformations.
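    As a baseline for the informed approach, plain (signal-only) estimation of sinusoidal parameters can be sketched as a windowed-FFT peak pick; the frequency, amplitude, and noise level below are illustrative, and no side information is used:

```python
import numpy as np

fs = 8000.0
n = 4096
t = np.arange(n) / fs
f0, a0 = 440.0, 0.8                          # ground-truth sinusoid parameters
rng = np.random.default_rng(0)
x = a0 * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(n)

# Non-informed estimator: pick the peak of the Hann-windowed FFT magnitude.
win = np.hanning(n)
spec = np.abs(np.fft.rfft(x * win))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
k = np.argmax(spec)
f_hat = freqs[k]
# Amplitude from the peak bin, compensating the window's coherent gain;
# scalloping (the tone not being bin-centered) biases this estimate low.
a_hat = 2.0 * spec[k] / win.sum()
```

    The frequency estimate is limited to the FFT bin spacing and the amplitude suffers scalloping bias; the informed approach embeds side information precisely to push such estimates past these noise- and resolution-limited bounds.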

  20. Fitting Analysis using Differential evolution Optimization (FADO):. Spectral population synthesis through genetic optimization under self-consistency boundary conditions

    NASA Astrophysics Data System (ADS)

    Gomes, J. M.; Papaderos, P.

    2017-07-01

    The goal of population spectral synthesis (PSS; also referred to as the inverse, semi-empirical evolutionary, or fossil-record approach) is to decipher from the spectrum of a galaxy the mass, age and metallicity of its constituent stellar populations. This technique, which is the reverse of, but complementary to, evolutionary synthesis, has been established as a fundamental tool in extragalactic research. It has been extensively applied to large spectroscopic data sets, notably the SDSS, leading to important insights into the galaxy assembly history. However, despite significant improvements over the past decade, all current PSS codes suffer from two major deficiencies that inhibit us from gaining sharp insights into the star-formation history (SFH) of galaxies and potentially introduce substantial biases in studies of their physical properties (e.g., stellar mass, mass-weighted stellar age and specific star formation rate). These are (i) the neglect of nebular emission in spectral fits and, consequently, (ii) the lack of a mechanism that ensures consistency between the best-fitting SFH and the observed nebular emission characteristics of a star-forming (SF) galaxy (e.g., hydrogen Balmer-line luminosities and equivalent widths (EWs), and the shape of the continuum around the Balmer and Paschen jumps). In this article, we present FADO (Fitting Analysis using Differential evolution Optimization), a conceptually novel, publicly available PSS tool with the distinctive capability of identifying the SFH that reproduces the observed nebular characteristics of a SF galaxy. This so-far unique self-consistency concept allows us to significantly alleviate degeneracies in current spectral synthesis, thereby opening a new avenue to the exploration of the assembly history of galaxies. The innovative character of FADO is further augmented by its mathematical foundation: FADO is the first PSS code employing genetic differential evolution optimization. This, in conjunction with various other currently unique elements of its mathematical concept and numerical realization (e.g., mid-analysis optimization of the spectral library using artificial intelligence, a convergence test inspired by Markov chain Monte Carlo techniques, and quasi-parallelization embedded within a modular architecture), results in key improvements in computational efficiency and in the uniqueness of the best-fitting SFHs. Furthermore, FADO incorporates within a single code the entire chain of pre-processing, modeling, post-processing, storage and graphical representation of the relevant PSS output, including emission-line measurements and uncertainty estimates for all primary and secondary products of spectral synthesis (e.g., mass contributions of individual stellar populations, and mass- and luminosity-weighted stellar ages and metallicities). This integrated concept greatly simplifies and accelerates the lengthy sequence of individual time-consuming steps generally involved in PSS modeling, further enhancing the overall efficiency of the code and inviting its automated application to large spectroscopic data sets. The distribution package of the FADO v.1 tool contains the binary and its auxiliary files. FADO v.1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/603/A63
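    The differential evolution idea at the heart of FADO can be illustrated on a toy version of the fitting problem, recovering the weights of two hypothetical basis spectra; SciPy's generic `differential_evolution` stands in for FADO's genetic scheme, and the templates, grid, and weights are all invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy population synthesis: fit an "observed" spectrum as a weighted sum of
# two fixed basis populations. FADO's actual engine is far richer (nebular
# continuum, self-consistency constraints, large spectral libraries).
lam = np.linspace(4000.0, 7000.0, 200)                 # wavelength grid (Angstrom)
young = np.exp(-((lam - 4500.0) / 600.0) ** 2)          # hypothetical templates
old = np.exp(-((lam - 6500.0) / 800.0) ** 2)

true_w = np.array([0.3, 0.7])
observed = true_w[0] * young + true_w[1] * old          # noiseless mock spectrum

def chi2(w):
    model = w[0] * young + w[1] * old
    return np.sum((observed - model) ** 2)

# Evolve a population of candidate weight vectors toward the best fit.
result = differential_evolution(chi2, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
```

    Differential evolution needs no gradients and explores the bounded parameter space globally, which is why it suits the highly degenerate PSS fitting problem.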

  1. The Open Spectral Database: an open platform for sharing and searching spectral data.

    PubMed

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if it is compressed inside the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, exports the raw data in multiple formats, supports searching by multiple chemical identifiers, and is open in terms of license and access. To address these issues, a new online resource called the Open Spectral Database (OSDB), http://osdb.info/, has been developed and is now available. Built with open-source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB allows anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface, and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PhpStorm), a PHP framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities needed to easily develop REST-based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent stacks in other scripting languages) will be leveraged to make more chemical data available to both humans and computers.

  2. Lithographically Encrypted Inverse Opals for Anti-Counterfeiting Applications.

    PubMed

    Heo, Yongjoon; Kang, Hyelim; Lee, Joon-Seok; Oh, You-Kwan; Kim, Shin-Hyun

    2016-07-01

    Colloidal photonic crystals possess the inimitable optical properties of iridescent structural colors and unique spectral shapes, which render them useful for security materials. This work reports a novel method of encrypting graphical and spectral codes in polymeric inverse opals to provide advanced security. To accomplish this, lithographically featured micropatterns are prepared on the top surface of hydrophobic inverse opals; these serve as shadow masks against the surface modification that renders the air cavities hydrophilic. The resultant inverse opals allow rapid infiltration of an aqueous solution into the hydrophilic cavities while retaining air in the hydrophobic cavities. The structural color of the inverse opals is therefore regioselectively red-shifted, disclosing the encrypted graphical codes. The decoded inverse opals also deliver unique reflectance spectral codes originating from the two distinct regions. The combinatorial code, composed of graphical and optical codes, is revealed only when the pre-agreed aqueous solution is used for decoding. The encrypted inverse opals are chemically stable, providing invariant codes with high reproducibility, and their high mechanical stability enables transfer of the films onto arbitrary surfaces. This novel encryption technology will provide new opportunities in a wide range of security applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Swift/BAT Calibration and Spectral Response

    NASA Technical Reports Server (NTRS)

    Parsons, A.

    2004-01-01

    The Burst Alert Telescope (BAT) aboard NASA's Swift Gamma-Ray Burst Explorer is a large coded-aperture gamma-ray telescope consisting of a 2.4 m (8 ft) x 1.2 m (4 ft) coded aperture mask supported 1 meter above a 5200 square cm detector plane containing 32,768 individual 4 mm x 4 mm x 2 mm CZT detectors. The BAT is now completely assembled and integrated with the Swift spacecraft in anticipation of an October 2004 launch. Extensive ground calibration measurements using a variety of radioactive sources have resulted in a moderately high fidelity model for the BAT spectral and photometric response. This paper describes these ground calibration measurements as well as related computer simulations used to study the efficiency and individual detector properties of the BAT detector array. The creation of a single spectral response model representative of the fully integrated BAT posed an interesting challenge and is at the heart of the public analysis tool 'batdrmgen', which computes a response matrix for any given sky position within the BAT FOV. This paper describes the batdrmgen response generator tool and concludes with a description of the on-orbit calibration plans as well as plans for the future improvements needed to produce the more detailed spectral response model required for the construction of an all-sky hard X-ray survey.

  4. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can however be to a substantial extent reclaimed by using our new spatial and spatio-spectral scoring methods which rely on realistic assumptions regarding masking and its impact on measured photon distributions.

  5. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data-reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect open-path FTIR data quality are: the inability to obtain a true I0 (background) spectrum, spectral interference from atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. The algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting. Applications of the algorithm have proven successful in circumventing open-path data-reduction problems. However, recent studies by one of the authors of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water-partial-pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.

  6. Properties of laser-produced GaAs plasmas measured from highly resolved X-ray line shapes and ratios

    NASA Astrophysics Data System (ADS)

    Seely, J. F.; Fein, J.; Manuel, M.; Keiter, P.; Drake, P.; Kuranz, C.; Belancourt, Patrick; Ralchenko, Yu.; Hudson, L.; Feldman, U.

    2018-03-01

    The properties of hot, dense plasmas generated by the irradiation of GaAs targets by the Titan laser at Lawrence Livermore National Laboratory were determined by the analysis of high resolution K shell spectra in the 9 keV to 11 keV range. The laser parameters, such as relatively long pulse duration and large focal spot, were chosen to produce a steady-state plasma with minimal edge gradients, and the time-integrated spectra were compared to non-LTE steady state spectrum simulations using the FLYCHK and NOMAD codes. The bulk plasma streaming velocity was measured from the energy shifts of the Ga He-like transitions and Li-like dielectronic satellites. The electron density and the electron energy distribution, both the thermal and the hot non-thermal components, were determined from the spectral line ratios. After accounting for the spectral line broadening contributions, the plasma turbulent motion was measured from the residual line widths. The ionization balance was determined from the ratios of the He-like through F-like spectral features. The detailed comparison of the experimental Ga spectrum and the spectrum simulated by the FLYCHK code indicates two significant discrepancies, the transition energy of a Li-like dielectronic satellite (designated t) and the calculated intensity of a He-like line (x), that should lead to improvements in the kinetics codes used to simulate the X-ray spectra from highly-charged ions.

  7. The SpeX Prism Library Analysis Toolkit: Design Considerations and First Results

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam J.; Aganze, Christian; Escala, Ivana; Lopez, Mike; Choban, Caleb; Jin, Yuhui; Iyer, Aishwarya; Tallis, Melisa; Suarez, Adrian; Sahi, Maitrayee

    2016-01-01

    Various observational and theoretical spectral libraries now exist for galaxies, stars, planets and other objects, which have proven useful for classification, interpretation, simulation and model development. Effective use of these libraries relies on analysis tools, which are often left to users to develop. In this poster, we describe a program to develop a combined spectral data repository and Python-based analysis toolkit for low-resolution spectra of very low mass dwarfs (late M, L and T dwarfs), which enables visualization, spectral index analysis, classification, atmosphere model comparison, and binary modeling for nearly 2000 library spectra and user-submitted data. The SpeX Prism Library Analysis Toolkit (SPLAT) is being constructed as a collaborative, student-centered, learning-through-research model with high school, undergraduate and graduate students and regional science teachers, who populate the database and build the analysis tools through quarterly challenge exercises and summer research projects. I describe the design considerations of the toolkit, its current status and development plan, and report the first published results led by undergraduate students. The combined data and analysis tools are ideal for characterizing cool stellar and exoplanetary atmospheres (including direct exoplanetary spectra observations by Gemini/GPI, VLT/SPHERE, and JWST), and the toolkit design can be readily adapted for other spectral datasets as well. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NNX15AI75G. SPLAT code can be found at https://github.com/aburgasser/splat.

  8. Photon Throughput Calculations for a Spherical Crystal Spectrometer

    NASA Astrophysics Data System (ADS)

    Gilman, C. J.; Bitter, M.; Delgado-Aparicio, L.; Efthimion, P. C.; Hill, K.; Kraus, B.; Gao, L.; Pablant, N.

    2017-10-01

    X-ray imaging crystal spectrometers of the type described in Refs. have become a standard diagnostic for Doppler measurements of profiles of the ion temperature and the plasma flow velocities in magnetically confined, hot fusion plasmas. These instruments have by now been implemented on major tokamak and stellarator experiments in Korea, China, Japan, and Germany, and are currently also being designed by PPPL for ITER. A still-missing part of the present data analysis is an efficient code for photon throughput calculations to evaluate the chord-integrated spectral data. The existing ray tracing codes cannot be used for data analysis between shots, since they require extensive and time-consuming numerical calculations. Here, we present a detailed analysis of the geometrical properties of the ray pattern. This method allows us to minimize the extent of the numerical calculations and to create a more efficient code. This work was performed under the auspices of the U.S. Department of Energy by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466.

  9. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

    Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and the aim of this work. Methods This paper presents an algorithm for the compression of S-EMG signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and pack the data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root and rotated hyperbolic tangent. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented, along with comparisons with other encoders proposed in the scientific literature. Conclusions The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, is an efficient procedure. The performance comparisons of the proposed S-EMG data compression algorithm with established techniques in the scientific literature show promising results. PMID:24571620
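    The record above describes a dynamic bit allocation driven by a decreasing spectral-shape model. As a rough illustration of the idea only (the function names, parameter values, and the simple uniform quantizer below are our own, not the paper's), a decreasing-exponential allocation can be sketched as:

```python
import numpy as np

def exponential_bit_allocation(n_coeffs, b_max=16, b_min=2, decay=4.0):
    """Assign a word length to each wavelet-coefficient index following a
    decreasing exponential spectral-shape model: low-frequency coefficients
    (small index) get more bits than high-frequency ones. All parameter
    values here are illustrative, not the paper's."""
    k = np.arange(n_coeffs)
    bits = b_min + (b_max - b_min) * np.exp(-decay * k / (n_coeffs - 1))
    return np.round(bits).astype(int)

def quantize(coeffs, bits):
    """Uniform mid-tread quantization, each coefficient with its own word
    length (a toy stand-in for the paper's dynamic bit allocation)."""
    scale = np.abs(coeffs).max()
    levels = 2.0 ** (bits - 1) - 1
    q = np.round(coeffs / scale * levels)
    return q / levels * scale

rng = np.random.default_rng(0)
# Toy wavelet spectrum with decaying magnitude toward high frequencies:
coeffs = rng.normal(size=256) * np.exp(-np.arange(256) / 64.0)
bits = exponential_bit_allocation(256)
rec = quantize(coeffs, bits)
```

    Because the coefficient magnitudes also decay with frequency, coarse quantization of the high-frequency tail costs little reconstruction error while saving most of the bits.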

  10. Radio Astronomy Tools in Python: Spectral-cube, pvextractor, and more

    NASA Astrophysics Data System (ADS)

    Ginsburg, A.; Robitaille, T.; Beaumont, C.; Rosolowsky, E.; Leroy, A.; Brogan, C.; Hunter, T.; Teuben, P.; Brisbin, D.

    2015-12-01

    The radio-astro-tools organization has been established to facilitate development of radio and millimeter analysis tools by the scientific community. The first packages developed under its umbrella are: • The spectral-cube package, for reading, writing, and analyzing spectral data cubes • The pvextractor package, for extracting position-velocity slices from position-position-velocity cubes along arbitrary paths • The radio-beam package, to handle Gaussian beams in the context of the astropy quantity and unit framework • casa-python, to enable installation of these packages - and any others - into users' CASA environments without conflicting with the underlying CASA package. Community input in the form of code contributions, suggestions, questions and comments is welcome on all of these tools. They can all be found at http://radio-astro-tools.github.io.
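    For readers unfamiliar with position-velocity slicing, the operation that pvextractor performs can be illustrated with a toy nearest-neighbour version in plain NumPy. This is a concept sketch only; the actual package handles sub-pixel interpolation, slice widths, and WCS coordinates:

```python
import numpy as np

def pv_slice_nearest(cube, path_xy):
    """Nearest-neighbour position-velocity slice: sample the full spectrum
    of a (nv, ny, nx) position-position-velocity cube at each (x, y) point
    along a path, producing a 2D (velocity, position) array."""
    xs = np.array([round(x) for x, y in path_xy])
    ys = np.array([round(y) for x, y in path_xy])
    return cube[:, ys, xs]  # shape (nv, n_path)

# Toy cube whose value encodes its own (v, y, x) index for easy checking:
v, y, x = np.meshgrid(np.arange(3), np.arange(5), np.arange(6), indexing='ij')
cube = v * 10000 + y * 100 + x
pv = pv_slice_nearest(cube, [(1.2, 2.1), (3.0, 4.0)])
```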

  11. Spectral simulation of unsteady compressible flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Gottlieb, David

    1990-01-01

    An unsteady compressible viscous wake flow past a circular cylinder was successfully simulated using spectral methods. A new approach to using the Chebyshev collocation method for periodic problems is introduced. It is further proved that the eigenvalues associated with the differentiation matrix are purely imaginary, reflecting the periodicity of the problem. It has been shown that the solution of a model problem grows exponentially in time if improper boundary conditions are used. A characteristic boundary condition, based on the characteristics of the Euler equations of gas dynamics, was derived for the spectral code. The computed primary vortex shedding frequency agrees well with results in the literature for Mach = 0.4, Re = 80. No secondary frequency is observed in the power spectrum analysis of the pressure data.
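    The purely-imaginary-eigenvalue property mentioned above can be illustrated with the standard Fourier spectral differentiation matrix for periodic problems (a generic stand-in, not the authors' Chebyshev construction): the matrix is real and antisymmetric, so its spectrum lies on the imaginary axis.

```python
import numpy as np

def fourier_diff_matrix(n):
    """Fourier spectral differentiation matrix on [0, 2*pi) with an even
    number n of equispaced points (Trefethen's classical form)."""
    h = 2 * np.pi / n
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = 0.5 * (-1) ** (i - j) / np.tan((i - j) * h / 2)
    return D

D = fourier_diff_matrix(16)
eigs = np.linalg.eigvals(D)
# D is antisymmetric, so eigs.real vanishes up to roundoff -- the same
# stability property the paper establishes for its periodic scheme.
```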

  12. Quantitative spectroscopy of extreme helium stars Model atmospheres and a non-LTE abundance analysis of BD+10°2179

    NASA Astrophysics Data System (ADS)

    Kupfer, T.; Przybilla, N.; Heber, U.; Jeffery, C. S.; Behara, N. T.; Butler, K.

    2017-10-01

    Extreme helium stars (EHe stars) are hydrogen-deficient supergiants of spectral type A and B. They are believed to result from mergers in double degenerate systems. In this paper, we present a detailed quantitative non-LTE spectral analysis for BD+10°2179, a prototype of this rare class of stars, using UV-Visual Echelle Spectrograph and Fiber-fed Extended Range Optical Spectrograph spectra covering the range from ˜3100 to 10 000 Å. Atmosphere model computations were improved in two ways. First, since the UV metal line blanketing has a strong impact on the temperature-density stratification, we used the atlas12 code; additionally, we tested atlas12 against the benchmark code sterne3, and found only small differences in the temperature and density stratifications, and good agreement with the spectral energy distributions. Secondly, 12 chemical species were treated in non-LTE. Pronounced non-LTE effects occur in individual spectral lines but, for the majority, the effects are moderate to small. The spectroscopic parameters give Teff = 17 300±300 K and log g = 2.80±0.10, and an evolutionary mass of 0.55±0.05 M⊙. The star is thus slightly hotter, more compact and less massive than found in previous studies. The kinematic properties imply a thick-disc membership, which is consistent with the metallicity [Fe/H] ≈ -1 and α-enhancement. The refined light-element abundances are consistent with the white dwarf merger scenario. We further discuss the observed helium spectrum in an appendix, detecting dipole-allowed transitions from about 150 multiplets plus the most comprehensive set of known/predicted isolated forbidden components to date. Moreover, a so far unreported series of pronounced forbidden He I components is detected in the optical-UV.

  13. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products, while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral function of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  14. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products, while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral function of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  15. HiC-spector: a matrix library for spectral and reproducibility analysis of Hi-C contact maps.

    PubMed

    Yan, Koon-Kiu; Yardimci, Galip Gürkan; Yan, Chengfei; Noble, William S; Gerstein, Mark

    2017-07-15

    Genome-wide proximity ligation based assays like Hi-C have opened a window into the 3D organization of the genome. In so doing, they present data structures that differ from conventional 1D signal tracks. To exploit the 2D nature of Hi-C contact maps, matrix techniques like spectral analysis are particularly useful. Here, we present HiC-spector, a collection of matrix-related functions for analyzing Hi-C contact maps. In particular, we introduce a novel reproducibility metric for quantifying the similarity between contact maps based on spectral decomposition. The metric successfully separates contact maps mapped from Hi-C data coming from biological replicates, pseudo-replicates and different cell types. Source code in Julia and Python and detailed documentation are available at https://github.com/gersteinlab/HiC-spector. Contact: koonkiu.yan@gmail.com or mark@gersteinlab.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
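    The idea of a spectral reproducibility metric can be sketched as follows: compare the leading Laplacian eigenvectors of two contact maps, resolving each eigenvector's arbitrary sign, and map the summed distance to a score in [0, 1]. This is a simplified illustration of the approach, not HiC-spector's exact published formula:

```python
import numpy as np

def laplacian_eigvecs(A, r):
    """Leading nontrivial eigenvectors of the symmetric normalized
    Laplacian of a contact map A (symmetric, nonnegative)."""
    d = A.sum(axis=1)
    d[d == 0] = 1.0                      # guard empty bins
    s = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - (s[:, None] * A) * s[None, :]
    w, V = np.linalg.eigh(L)             # eigenvalues ascending
    return V[:, 1:r + 1]                 # skip the trivial eigenvector

def spectral_similarity(A, B, r=4):
    """Score in [0, 1]: 1 means the first r Laplacian eigenvectors of the
    two maps coincide. Signs are resolved per eigenvector, since
    eigenvectors are only defined up to sign."""
    Va, Vb = laplacian_eigvecs(A, r), laplacian_eigvecs(B, r)
    dist = sum(min(np.linalg.norm(Va[:, i] - Vb[:, i]),
                   np.linalg.norm(Va[:, i] + Vb[:, i])) for i in range(r))
    return 1.0 - dist / (r * np.sqrt(2.0))  # sqrt(2) bounds each sign-resolved distance
```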

  16. Preliminary consideration on the seismic actions recorded during the 2016 Central Italy seismic sequence

    NASA Astrophysics Data System (ADS)

    Carlo Ponzo, Felice; Ditommaso, Rocco; Nigro, Antonella; Nigro, Domenico S.; Iacovino, Chiara

    2017-04-01

    After the Mw 6.0 mainshock of August 24, 2016 at 03:36 a.m. (local time), with the epicenter located between the towns of Accumoli (province of Rieti), Amatrice (province of Rieti) and Arquata del Tronto (province of Ascoli Piceno), several activities were started in order to perform preliminary evaluations of the characteristics of the recent seismic sequence in the areas affected by the earthquake. Ambient vibration acquisitions have been performed using two three-directional velocimetric synchronized stations, with a natural frequency of 0.5 Hz and a digitizer resolution of 24 bits. The activities continued after the events of the seismic sequence of October 26 and October 30, 2016. In this paper, in order to compare recorded and code provision values in terms of peak (PGA, PGV and PGD), spectral and integral (Housner Intensity) seismic parameters, several preliminary analyses have been performed on accelerometric time-histories acquired by three near-fault stations of the RAN (Italian Accelerometric Network): Amatrice (station code AMT), Norcia (station code NRC) and Castelsantangelo sul Nera (station code CNE). Several comparisons between the elastic response spectra derived from the accelerometric recordings and the elastic demand spectra provided by the Italian seismic code (NTC 2008) have been performed. Preliminary results from these analyses highlight several apparent differences between the experimental data and the conventional code provisions. The ongoing seismic sequence appears compatible with the historical seismicity in terms of integral parameters, but not in terms of peak and spectral values. It therefore seems appropriate to reconsider the need to revise the simplified design approach based on conventional spectral values. 
Acknowledgements This study was partially funded by the Italian Department of Civil Protection within the project DPC-RELUIS 2016 - RS4 ''Seismic observatory of structures and health monitoring'' and by the "Centre of Integrated Geomorphology for the Mediterranean Area - CGIAM" within the Framework Agreement with the University of Basilicata "Study, Research and Experimentation in the Field of Analysis and Monitoring of Seismic Vulnerability of Strategic and Relevant Buildings for the purposes of Civil Protection and Development of Innovative Strategies of Seismic Reinforcement".

  17. Gas and dust spectral analysis of galactic and extragalactic symbiotic stars

    NASA Astrophysics Data System (ADS)

    Angeloni, Rodolfo

    2009-02-01

    Symbiotic stars are recognized as unique laboratories for studying a large variety of phenomena that are relevant to a number of important astrophysical problems. This PhD thesis deals with a spectral analysis of galactic and extragalactic symbiotic stars. The former are mainly D-type symbiotic stars, for which a comprehensive study, from the radio to the X-ray spectral region, has been performed. With the latter, we refer to symbiotic stars in the Magellanic Clouds, analyzed mainly in the IR range. The common theoretical scenario in the background of this work is the colliding-wind model, developed during the 1980s, supported by first observational evidence at the beginning of the 1990s (mainly thanks to Nussbaumer and collaborators), and finally completed with detailed and powerful hydrodynamical simulations by various authors in recent years. In the light of this scenario, we have tried to interpret the gas and dust spectra of our targets in a unique and self-consistent way. The spectral analysis has been performed by means of the numerical code SUMA, developed at the Instituto Astronomico e Geofisico of the University of Sao Paulo by Sueli M. Viegas (Aldrovandi) and Marcella Contini from the School of Physics and Astronomy of Tel-Aviv University.

  18. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis expands the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, given the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways. 
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen

    The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally-occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth’s atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz, which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm, the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm: code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.

  20. Theoretical White Dwarf Spectra on Demand: TheoSSA

    NASA Astrophysics Data System (ADS)

    Ringat, E.; Rauch, T.

    2010-11-01

    In recent decades, much progress has been made in spectral analysis. The quality (e.g. resolution, S/N ratio) of observed spectra has improved greatly and several model-atmosphere codes have been developed. One of these is the "Tübingen NLTE Model-Atmosphere Package" (TMAP), a highly developed program for the calculation of model atmospheres of hot, compact objects. In the framework of the German Astrophysical Virtual Observatory (GAVO), theoretical spectral energy distributions (SEDs) can be downloaded via TheoSSA. In a pilot phase, TheoSSA is based on TMAP model atmospheres. We present the current state of this VO service.

  1. Capabilities, methodologies, and use of the cambio file-translation application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lasche, George P.

    2007-03-01

    This report describes the capabilities, methodologies, and uses of the Cambio computer application, designed to automatically read and display nuclear spectral data files of any known format in the world and to convert spectral data to one of several commonly used analysis formats. To further assist responders, Cambio incorporates an analysis method based on non-linear fitting techniques found in the open literature and implemented in openly published source code in the late 1980s. A brief description is provided of how Cambio works, of what basic formats it can currently read, and how it can be used. Cambio was developed at Sandia National Laboratories and is provided as a free service to assist nuclear emergency response analysts anywhere in the world in the fight against nuclear terrorism.

  2. Pseudo-spectral methods applied to gravitational collapse.

    NASA Astrophysics Data System (ADS)

    Bonazzola, S.; Marck, J.-A.

    The authors present codes for solving Newtonian gravitational collapse in spherical coordinates for the spherical, axial and true 3D cases. Pseudo-spectral techniques are used: all quantities are expanded in Chebychev or Legendre polynomials, or in Fourier series for the periodic parts. The codes are able to handle in a rigorous way the coordinate pseudo-singularities r = 0 and θ = 0, π. Illustrative results for each of the three cases are given.

  3. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images, based on a pair-wise multilevel grouping approach that overcomes the high computational cost of the NTD. The proposed method has low complexity at the price of a slight decrease in coding performance compared to the conventional NTD. Our experiments confirm that the method requires less processing time than the conventional NTD while maintaining better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.
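    To make the Tucker representation concrete, here is a plain truncated higher-order SVD (HOSVD) in NumPy. Note this is the standard unconstrained Tucker decomposition, shown only to illustrate the core-plus-factors structure; it is not the paper's nonnegative, pair-wise-grouped variant:

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode product: multiply tensor T along `mode` by matrix M."""
    out = np.tensordot(M, np.moveaxis(T, mode, 0), axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def hosvd_compress(T, ranks):
    """Truncated HOSVD: factor matrices from each unfolding's leading
    left singular vectors, plus the projected (small) core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

def hosvd_reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

# Toy "hyperspectral cube" with exact multilinear rank (2, 3, 4):
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 3, 4))
Us = [np.linalg.qr(rng.normal(size=(n, r)))[0]
      for n, r in [(32, 2), (32, 3), (16, 4)]]
cube = hosvd_reconstruct(G, Us)
core, factors = hosvd_compress(cube, (2, 3, 4))
```

    For a cube with exact low multilinear rank, the truncated core and factors reconstruct it exactly while storing far fewer numbers than the original tensor.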

  4. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    NASA Astrophysics Data System (ADS)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.

  5. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.

  6. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition combines the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (GPU runs are typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.

  7. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    PubMed

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
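    The one-parameter Henyey-Greenstein function used above for large particles has the standard closed form p(μ) = (1 − g²) / [4π(1 + g² − 2gμ)^(3/2)]; a minimal numpy check of its normalization over the sphere (the grid size and asymmetry values g are arbitrary choices):

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """One-parameter Henyey-Greenstein phase function, normalized over the sphere."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Normalization check: 2*pi times the integral of p(mu) over mu in [-1, 1]
# should equal 1 for any asymmetry parameter g.
mu = np.linspace(-1.0, 1.0, 200001)
for g in (0.0, 0.5, 0.9):                      # g = 0 is isotropic scattering
    p = henyey_greenstein(mu, g)
    integral = 2.0 * np.pi * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu))
    print(g, round(integral, 4))
```

    The Rayleigh scattering function used above for small internal structures would replace this expression in that regime.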

  8. CO2 laser modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Barry

    1992-01-01

    The topics covered include the following: (1) CO2 laser kinetics modeling; (2) gas lifetimes in pulsed CO2 lasers; (3) frequency chirp and laser pulse spectral analysis; (4) LAWS A' Design Study; and (5) discharge circuit components for LAWS. The appendices include LAWS Memos, computer modeling of pulsed CO2 lasers for lidar applications, discharge circuit considerations for pulsed CO2 lidars, and presentation made at the Code RC Review.

  9. History of one family of atmospheric radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.

    1994-12-01

    Beginning in the early 1970s, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally averaged atmospheric transmittance T does not obey the Beer-Lambert law (T = exp(-σ·η), where σ is a species absorption cross section, independent of η, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950s and 60s. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines and hundreds of subroutines, with improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
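    The band-averaging point above — that a spectrally averaged transmittance cannot follow the Beer-Lambert law — takes two lines of arithmetic to demonstrate (the two cross-section values are arbitrary):

```python
import numpy as np

# Two spectral points within a band, with different absorption cross sections.
sigma = np.array([0.1, 2.0])      # hypothetical cross sections
T = lambda eta: np.mean(np.exp(-sigma * eta))   # band-averaged transmittance

# Beer-Lambert would require T(2*eta) == T(eta)**2 (exponential in path);
# averaging exponentials with unequal cross sections always violates this:
print(T(2.0), T(1.0) ** 2)
```

    The weakly absorbed spectral component dominates at long paths, so the averaged transmittance decays more slowly than any single exponential; this is exactly why the band models of the 1950s and 60s were needed.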

  10. The Berkeley SuperNova Ia Program (BSNIP): Dataset and Initial Analysis

    NASA Astrophysics Data System (ADS)

    Silverman, Jeffrey; Ganeshalingam, M.; Kong, J.; Li, W.; Filippenko, A.

    2012-01-01

    I will present spectroscopic data from the Berkeley SuperNova Ia Program (BSNIP), their initial analysis, and the results of attempts to use spectral information to improve cosmological distance determinations to Type Ia supernovae (SNe Ia). The dataset consists of 1298 low-redshift (z < 0.2) optical spectra of 582 SNe Ia observed from 1989 through the end of 2008. Many of the SNe have well-calibrated light curves with measured distance moduli as well as spectra that have been corrected for host-galaxy contamination. I will also describe the spectral classification scheme employed (using the SuperNova Identification code, SNID; Blondin & Tonry 2007), which utilizes a newly constructed set of SNID spectral templates. The sheer size of the BSNIP dataset and the consistency of the observation and reduction methods make this sample unique among published SN Ia datasets. I will also discuss measurements of the spectral features of about one-third of the spectra, which were obtained within 20 days of maximum light. I will briefly describe the adopted method of automated, robust spectral-feature definition and measurement, which expands upon similar previous studies. Comparisons of these measurements of SN Ia spectral features to photometric observables will be presented with an eye toward using spectral information to calculate more accurate cosmological distances. Finally, I will comment on related projects that also utilize the BSNIP dataset and are planned for the near future. This research was supported by NSF grant AST-0908886 and the TABASGO Foundation. I am grateful to Marc J. Staley for a Graduate Fellowship.

  11. Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zuchowski, Loïc; Brun, Michael; De Martin, Florent

    2018-05-01

    The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software tool, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.

  12. Spectral characteristics of convolutionally coded digital signals

    NASA Technical Reports Server (NTRS)

    Divsalar, D.

    1979-01-01

    The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
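    A sketch of the computation described above: estimate the output spectrum of a convolutional encoder driven by a biased binary source (unequal signaling-state probabilities). The rate-1/2 (7,5)-octal encoder, bias value, and Welch estimator below are illustrative assumptions, not the specific codes analyzed in the report:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
p_one = 0.7                               # biased NRZ source: P(bit = 1) = 0.7
bits = (rng.random(2**14) < p_one).astype(int)

def conv_encode(b):
    """Rate-1/2 convolutional encoder with generators (7, 5) octal."""
    s1 = s2 = 0
    out = []
    for u in b:
        out.append(u ^ s1 ^ s2)           # generator 111
        out.append(u ^ s2)                # generator 101
        s1, s2 = u, s1
    return np.array(out)

symbols = 2 * conv_encode(bits) - 1       # NRZ mapping {0, 1} -> {-1, +1}
f, psd = welch(symbols, nperseg=1024)     # power spectral density estimate
print(f.shape, psd.shape)
```

    Applying alternate symbol inversion, as considered in the paper, amounts to flipping the sign of every other output symbol before the PSD estimate.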

  13. Mass-invariance of the iron enrichment in the hot haloes of massive ellipticals, groups, and clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Mernier, F.; de Plaa, J.; Werner, N.; Kaastra, J. S.; Raassen, A. J. J.; Gu, L.; Mao, J.; Urdampilleta, I.; Truong, N.; Simionescu, A.

    2018-05-01

    X-ray measurements find systematically lower Fe abundances in the X-ray emitting haloes pervading groups (kT ≲ 1.7 keV) than in clusters of galaxies. These results have been difficult to reconcile with theoretical predictions. However, models using incomplete atomic data or the assumption of isothermal plasmas may have biased the best fit Fe abundance in groups and giant elliptical galaxies low. In this work, we take advantage of a major update of the atomic code in the spectral fitting package SPEX to re-evaluate the Fe abundance in 43 clusters, groups, and elliptical galaxies (the CHEERS sample) in a self-consistent analysis and within a common radius of 0.1r500. For the first time, we report a remarkably similar average Fe enrichment in all these systems. Unlike previous results, this strongly suggests that metals are synthesised and transported in these haloes with the same average efficiency across two orders of magnitude in total mass. We show that the previous metallicity measurements in low temperature systems were biased low due to incomplete atomic data in the spectral fitting codes. The reasons for such a code-related Fe bias, also implying previously unconsidered biases in the emission measure and temperature structure, are discussed.

  14. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
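    The power/retransmission trade-off discussed above rests on simple renewal arithmetic; a sketch assuming an idealized ARQ scheme with perfect error detection and an error-free feedback channel:

```python
def expected_transmissions(p_word_error):
    """Mean transmissions per word under ideal ARQ with unlimited retries:
    a geometric distribution with success probability 1 - p_word_error."""
    return 1.0 / (1.0 - p_word_error)

def residual_error(p_word_error, max_retransmissions):
    """Probability a word is still in error after the allowed retries."""
    return p_word_error ** (max_retransmissions + 1)

for p in (1e-1, 1e-2, 1e-4):
    print(p, expected_transmissions(p), residual_error(p, 2))
```

    At the low word-error rates of a well-margined deep-space link, the expected overhead is tiny while the residual error falls geometrically with each allowed retransmission, which is the basis for trading link margin against retransmissions.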

  15. Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing

    NASA Astrophysics Data System (ADS)

    Salamone, Joseph A., III

    Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was implemented numerically in a computer code. The code was validated numerically against a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check verified the linear diffraction component of the code calculations. The code was validated experimentally using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test. The computer code was in good agreement with both the numerical and experimental validation. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.

  16. Active spectroscopic measurements of the bulk deuterium properties in the DIII-D tokamak (invited).

    PubMed

    Grierson, B A; Burrell, K H; Chrystal, C; Groebner, R J; Kaplan, D H; Heidbrink, W W; Muñoz Burgos, J M; Pablant, N A; Solomon, W M; Van Zeeland, M A

    2012-10-01

    The neutral-beam induced Dα emission spectrum contains a wealth of information, such as deuterium ion temperature, toroidal rotation, density, beam emission intensity, beam neutral density, and local magnetic field magnitude |B| from the Stark-split beam emission spectrum, as well as fast-ion Dα emission (FIDA) proportional to the beam-injected fast-ion density. A comprehensive spectral fitting routine which accounts for all photoemission processes is employed for the spectral analysis. Interpretation of the measurements to determine physically relevant plasma parameters is assisted by the use of an optimized viewing geometry and forward modeling of the emission spectra using a Monte Carlo 3D simulation code.
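    A single-line fit of the kind such a routine performs for every spectral component can be sketched with scipy; the synthetic Dα-like line below is illustrative, not DIII-D data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width, offset):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + offset

# Synthetic emission line near the D-alpha rest wavelength (~656.1 nm):
# the fitted center gives the Doppler shift (rotation), the width the
# ion temperature, and the area the line brightness.
x = np.linspace(655.0, 657.5, 200)
rng = np.random.default_rng(2)
y = gaussian(x, 1.0, 656.2, 0.15, 0.05) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(gaussian, x, y, p0=[1.0, 656.0, 0.1, 0.0])
print(popt[1])    # fitted line center (nm)
```

    The routine described above differs in that it fits many such components simultaneously (thermal, beam, FIDA, Stark-split) rather than one Gaussian in isolation.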

  17. A comparison of IBC with 1997 UBC for modal response spectrum analysis in standard-occupancy buildings

    NASA Astrophysics Data System (ADS)

    Nahhas, Tariq M.

    2011-03-01

    This paper presents a comparison of the seismic forces generated from a Modal Response Spectrum Analysis (MRSA) by applying the provisions of two building codes, the 1997 Uniform Building Code (UBC) and the 2000-2009 International Building Code (IBC), to the most common ordinary residential buildings of standard occupancy. Considering the IBC as the state-of-the-art benchmark code, the primary concern is the safety of buildings designed using the UBC as compared to those designed using the IBC. A sample of four buildings with different layouts and heights was used for this comparison. Each of these buildings was assumed to be located at four different geographical sample locations arbitrarily selected to represent various earthquake zones on a seismic map of the USA, and was subjected to code-compliant response spectrum analyses for all sample locations and for five different soil types at each location. Response spectrum analysis was performed using the ETABS software package. For all the cases investigated, the UBC was found to be significantly more conservative than the IBC. The UBC design response spectra have higher spectral accelerations, and as a result, the response spectrum analysis produced much higher base shears and moments in the structural members compared to the IBC. The conclusion is that ordinary office and residential buildings designed using UBC 1997 are overdesigned, and are therefore quite safe even according to the IBC provisions.

  18. Fundamental period of Italian reinforced concrete buildings: comparison between numerical, experimental and Italian code simplified values

    NASA Astrophysics Data System (ADS)

    Ditommaso, Rocco; Carlo Ponzo, Felice; Auletta, Gianluca; Iacovino, Chiara; Nigro, Antonella

    2015-04-01

    The aim of this study is a comparison among the fundamental periods of reinforced concrete buildings evaluated using the simplified approach proposed by the Italian seismic code (NTC 2008), numerical models, and measured values retrieved from an experimental campaign performed on several buildings located in the Basilicata region (Italy). With the intention of proposing simplified relationships to evaluate the fundamental period of reinforced concrete buildings, scientists and engineers have performed several numerical and experimental campaigns on different structures all around the world to calibrate different kinds of formulas. Most formulas retrieved from both numerical and experimental analyses provide vibration periods smaller than those suggested by the Italian seismic code. However, it is well known that the fundamental period of a structure plays a key role in the correct evaluation of the spectral acceleration for seismic static analyses. Generally, simplified approaches impose safety factors greater than those related to in-depth nonlinear analyses, with the aim of covering possible unexpected uncertainties. The fundamental period given by the simplified formula of the Italian seismic code is considerably higher than the fundamental periods experimentally evaluated on real structures, with the consequence that the spectral acceleration adopted in seismic static analysis may differ significantly from the real spectral acceleration. This could produce a decrease in the safety factors obtained using linear and nonlinear seismic static analyses. Finally, the authors suggest a possible update of the Italian seismic code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, taking into account both elastic and inelastic structural behaviour and the interaction between structural and non-structural elements.
    Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2014 - RS4 "Seismic observatory of structures and health monitoring". Reference: R. Ditommaso, M. Vona, M. R. Gallipoli and M. Mucciarelli (2013). Evaluation and considerations about fundamental periods of damaged reinforced concrete buildings. Nat. Hazards Earth Syst. Sci., 13, 1903-1912. doi:10.5194/nhess-13-1903-2013
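    For reference, the Italian-code simplified estimate discussed above has the form T1 = C1·H^(3/4); the sketch below assumes C1 = 0.075, the NTC 2008 coefficient for reinforced concrete frame buildings, and illustrative storey heights:

```python
# NTC 2008 simplified estimate of the fundamental period of a building:
# T1 = C1 * H**0.75, with C1 = 0.075 for reinforced concrete frames
# and H the building height in metres.
def fundamental_period(height_m, c1=0.075):
    return c1 * height_m ** 0.75

for h in (9.0, 15.0, 24.0):     # roughly 3-, 5-, and 8-storey buildings
    print(h, round(fundamental_period(h), 3))
```

    The study's point is that periods measured on real buildings fall below these estimates, so the design spectral acceleration read off at T1 can differ from the one the building actually experiences.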

  19. Development and evaluation of a Hadamard transform imaging spectrometer and a Hadamard transform thermal imager

    NASA Technical Reports Server (NTRS)

    Harwit, M.; Swift, R.; Wattson, R.; Decker, J.; Paganetti, R.

    1976-01-01

    A spectrometric imager and a thermal imager, which achieve multiplexing by the use of binary optical encoding masks, were developed. The masks are based on orthogonal, pseudorandom digital codes derived from Hadamard matrices. Spatial and/or spectral data is obtained in the form of a Hadamard transform of the spatial and/or spectral scene; computer algorithms are then used to decode the data and reconstruct images of the original scene. The hardware, algorithms and processing/display facility are described. A number of spatial and spatial/spectral images are presented. The achievement of a signal-to-noise improvement due to the signal multiplexing was also demonstrated. An analysis of the results indicates both the situations for which the multiplex advantage may be gained, and the limitations of the technique. A number of potential applications of the spectrometric imager are discussed.
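    The encode/decode cycle described above can be sketched in a few lines of numpy. Real instruments use binary open/closed masks (S-matrices derived from Hadamard matrices); the ±1 version below still shows the multiplexing and the exact linear reconstruction:

```python
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n)                      # +1/-1 Hadamard matrix; H @ H.T = n * I

scene = np.random.default_rng(3).random(n)   # 1-D "scene" to be multiplexed

# Each measurement applies one mask row to the whole scene (multiplexing):
measurements = H @ scene

# Decoding uses the same matrix, since H is orthogonal up to the factor n:
recovered = (H.T @ measurements) / n
print(np.allclose(recovered, scene))
```

    The multiplex (Fellgett) advantage arises because every detector reading collects light from about half the scene elements at once, improving signal-to-noise when detector noise dominates.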

  20. Quantifying Vocal Mimicry in the Greater Racket-Tailed Drongo: A Comparison of Automated Methods and Human Assessment

    PubMed Central

    Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini

    2014-01-01

    Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five percent only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
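    A stripped-down stand-in for the spectral features mentioned above (true MFCCs insert a mel filter bank before the DCT, and RASTA-filtered LPC is different again); the synthetic "calls" are purely illustrative:

```python
import numpy as np
from scipy.fftpack import dct

def cepstral_features(signal, n_coeffs=12):
    """Crude cepstral description of a call: DCT of the log power spectrum.
    (A simplified stand-in for MFCCs, which add a mel filter bank first.)"""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
    return dct(np.log(power), norm='ortho')[:n_coeffs]

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 8000, endpoint=False)
call_a = np.sin(2 * np.pi * 1200 * t)          # hypothetical "model" call
call_b = np.sin(2 * np.pi * 1210 * t)          # close mimic of the model
call_c = rng.standard_normal(t.size)           # unrelated noise

d = lambda u, v: np.linalg.norm(cepstral_features(u) - cepstral_features(v))
print(d(call_a, call_b) < d(call_a, call_c))   # mimic sits closer to the model
```

    A distance metric over such feature vectors is what lets mimicked calls be matched to putative model calls automatically.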

  1. A Numerical Investigation of Turbine Noise Source Hierarchy and Its Acoustic Transmission Characteristics: Proof-of-Concept Progress

    NASA Technical Reports Server (NTRS)

    VanZante, Dale; Envia, Edmane

    2008-01-01

    A CFD-based simulation of a single-stage turbine was performed using the TURBO code to assess its viability for determining acoustic transmission through blade rows. Temporal and spectral analysis of the unsteady pressure data from the numerical simulations showed the allowable Tyler-Sofrin modes, consistent with expectations. This indicates that high-fidelity acoustic transmission calculations are feasible with TURBO.
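    The "allowable Tyler-Sofrin modes" follow the interaction rule m = nB + kV for a B-bladed rotor and V-vaned stator; a sketch with hypothetical blade and vane counts (not the simulated stage's):

```python
# Tyler-Sofrin rule: the n-th harmonic of a B-bladed rotor interacting with a
# V-vaned stator generates circumferential modes m = n*B + k*V, k any integer.
def tyler_sofrin_modes(B, V, harmonic, k_range=range(-3, 4)):
    return sorted(harmonic * B + k * V for k in k_range)

print(tyler_sofrin_modes(B=36, V=45, harmonic=1))
```

    A mode appearing in the unsteady pressure spectrum at a circumferential order outside this set would indicate numerical contamination, which is why the check above supports the code's viability.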

  2. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. Land-use classification is difficult to address with land-cover classification techniques because of the complexity of land-use scenes. Scene classification is considered one promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community and mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken looking down with airborne and spaceborne sensors, which leads to distinct lighting conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification.
The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results on three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
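    The final step above — an SVM with a histogram intersection kernel on the concatenated coding vector — can be sketched by forming the kernel's Gram matrix directly (the array sizes are arbitrary placeholders):

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    """Gram matrix of the histogram intersection kernel:
    K(x, y) = sum_i min(x_i, y_i), for nonnegative feature vectors."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

rng = np.random.default_rng(5)
codes = rng.random((6, 10))               # stand-in for pooled coding vectors
K = histogram_intersection_kernel(codes, codes)
print(K.shape)
```

    The precomputed matrix K can then be passed to a kernel SVM, e.g. scikit-learn's SVC(kernel='precomputed'), which is a standard way to use HIK since it is not one of the built-in kernels.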

  3. Measuring Collimator Infrared (IR) Spectral Transmission

    DTIC Science & Technology

    2016-05-01

    Technical Report RDMR-WD-16-15, "Measuring Collimator Infrared (IR) Spectral Transmission," by Christopher L. Dobbins, May 2016. Approved for public release; distribution is unlimited. Abstract (truncated in source): "Several Infrared (IR) imaging systems have been measured"

  4. Analysis Of AVIRIS Data From LEO-15 Using Tafkaa Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Montes, Marcos J.; Gao, Bo-Cai; Davis, Curtiss O.; Moline, Mark

    2004-01-01

    We previously developed an algorithm named Tafkaa for atmospheric correction of remote sensing ocean color data from aircraft and satellite platforms. The algorithm allows quick atmospheric correction of hyperspectral data using lookup tables generated with a modified version of Ahmad & Fraser's vector radiative transfer code. During the past few years we have extended the capabilities of the code. Current modifications include the ability to account for within-scene variation in solar geometry (important for very long scenes) and view geometry (important for wide fields of view). Additionally, versions of Tafkaa have been made for a variety of multi-spectral sensors, including SeaWiFS and MODIS. In this proceeding we present some initial results of atmospheric correction of AVIRIS data from the July 2001 Hyperspectral Coastal Ocean Dynamics Experiment (HyCODE) at LEO-15.

  5. Radio-nuclide mixture identification using medium energy resolution detectors

    DOEpatents

    Nelson, Karl Einar

    2013-09-17

    According to one embodiment, a method for identifying radio-nuclides includes receiving spectral data, extracting a feature set from the spectral data comparable to a plurality of templates in a template library, and using a branch and bound method to determine a probable template match based on the feature set and templates in the template library. In another embodiment, a device for identifying unknown radio-nuclides includes a processor, a multi-channel analyzer, and a memory operatively coupled to the processor, the memory having computer readable code stored thereon. The computer readable code is configured, when executed by the processor, to receive spectral data, to extract a feature set from the spectral data comparable to a plurality of templates in a template library, and to use a branch and bound method to determine a probable template match based on the feature set and templates in the template library.
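    A toy illustration of bound-based pruning in template matching — not the patented branch-and-bound method itself — which abandons a template as soon as its accumulated distance exceeds the best complete match found so far (the library contents are random placeholders):

```python
import numpy as np

def best_template(feature, templates):
    """Nearest template by squared distance, with branch-and-bound style
    pruning: a candidate is abandoned once its partial distance already
    exceeds the best complete distance found so far."""
    best_idx, best_cost = -1, np.inf
    for idx, tpl in enumerate(templates):
        cost = 0.0
        for f, t in zip(feature, tpl):   # accumulate one feature at a time
            cost += (f - t) ** 2
            if cost >= best_cost:        # bound: prune this candidate early
                break
        else:
            best_idx, best_cost = idx, cost
    return best_idx, best_cost

rng = np.random.default_rng(6)
library = rng.random((50, 32))           # hypothetical template library
query = library[17] + 0.01 * rng.standard_normal(32)
print(best_template(query, library)[0])
```

    The pruning never changes the answer, only the work done; the patented method additionally branches over combinations of nuclide templates to explain mixtures.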

  6. LAMOST OBSERVATIONS IN THE KEPLER FIELD: SPECTRAL CLASSIFICATION WITH THE MKCLASS CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, R. O.; Corbally, C. J.; Cat, P. De

    2016-01-15

    The LAMOST-Kepler project was designed to obtain high-quality, low-resolution spectra of many of the stars in the Kepler field with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). To date, 101,086 spectra of 80,447 objects over the entire Kepler field have been acquired. Physical parameters, radial velocities, and rotational velocities of these stars will be reported in other papers. In this paper we present MK spectral classifications for these spectra determined with the automatic classification code MKCLASS. We discuss the quality and reliability of the spectral types and present histograms showing the frequency of the spectral types in the main table, organized according to luminosity class. Finally, as examples of the use of this spectral database, we compute the proportion of A-type stars that are Am stars, and identify 32 new barium dwarf candidates.

  7. New H-band Stellar Spectral Libraries for the SDSS-III/APOGEE Survey

    NASA Astrophysics Data System (ADS)

    Zamora, O.; García-Hernández, D. A.; Allende Prieto, C.; Carrera, R.; Koesterke, L.; Edvardsson, B.; Castelli, F.; Plez, B.; Bizyaev, D.; Cunha, K.; García Pérez, A. E.; Gustafsson, B.; Holtzman, J. A.; Lawler, J. E.; Majewski, S. R.; Manchado, A.; Mészáros, Sz.; Shane, N.; Shetrone, M.; Smith, V. V.; Zasowski, G.

    2015-06-01

    The Sloan Digital Sky Survey-III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) has obtained high-resolution (R ~ 22,500), high signal-to-noise ratio (> 100) spectra in the H-band (~1.5-1.7 μm) for about 146,000 stars in the Milky Way galaxy. We have computed spectral libraries with effective temperature (Teff) ranging from 3500 to 8000 K for the automated chemical analysis of the survey data. The libraries, used to derive stellar parameters and abundances from the APOGEE spectra in the SDSS-III data release 12 (DR12), are based on ATLAS9 model atmospheres and the ASSɛT spectral synthesis code. We present a second set of libraries based on MARCS model atmospheres and the spectral synthesis code Turbospectrum. The ATLAS9/ASSɛT (Teff = 3500-8000 K) and MARCS/Turbospectrum (Teff = 3500-5500 K) grids cover a wide range of metallicity (-2.5 ≤ [M/H] ≤ +0.5 dex), surface gravity (0 ≤ log g ≤ 5 dex), microturbulence (0.5 ≤ ξ ≤ 8 km s⁻¹), carbon (-1 ≤ [C/M] ≤ +1 dex), nitrogen (-1 ≤ [N/M] ≤ +1 dex), and α-element (-1 ≤ [α/M] ≤ +1 dex) variations, thus spanning seven dimensions. We compare the ATLAS9/ASSɛT and MARCS/Turbospectrum libraries and apply both of them to the analysis of the observed H-band spectra of the Sun and the K2 giant Arcturus, as well as to a selected sample of well-known giant stars observed at very high resolution. The new APOGEE libraries are publicly available and can be employed for chemical studies in the H-band using other high-resolution spectrographs.

  8. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties, a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK), or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
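    The bandwidth-expansion figures above follow directly from code rate; a sketch comparing a Reed-Solomon outer code (RS(255,223), a common deep-space choice, assumed here purely for illustration) with a rate-1/2 convolutional code:

```python
# Bandwidth expansion implied by a code rate: extra transmitted symbols
# per information bit, relative to uncoded transmission.
def bandwidth_expansion(code_rate):
    return 1.0 / code_rate - 1.0

rs_255_223 = 223 / 255            # Reed-Solomon (255, 223) outer code rate
conv_half = 1 / 2                 # rate-1/2 convolutional inner code

print(f"RS(255,223): {bandwidth_expansion(rs_255_223):.0%}")
print(f"rate-1/2 convolutional: {bandwidth_expansion(conv_half):.0%}")
```

    Since TCM absorbs its redundancy into a larger signal constellation at no bandwidth cost, only the RS outer code expands the bandwidth (~14% here), consistent with the 10-50% versus 70-150% comparison above.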

  9. X-ray spectral signatures of photoionized plasmas. [astrophysics

    NASA Technical Reports Server (NTRS)

    Liedahl, Duane A.; Kahn, Steven M.; Osterheld, Albert L.; Goldstein, William H.

    1990-01-01

    Plasma emission codes have become a standard tool for the analysis of spectroscopic data from cosmic X-ray sources. However, the assumption of collisional equilibrium, typically invoked in these codes, renders them inapplicable to many important astrophysical situations, particularly those involving X-ray photoionized nebulae. This point is illustrated by comparing model spectra which have been calculated under conditions appropriate to both coronal plasmas and X-ray photoionized plasmas. It is shown that the (3s-2p)/(3d-2p) line ratios in the Fe L-shell spectrum can be used to effectively discriminate between these two cases. This diagnostic will be especially useful for data analysis associated with AXAF and XMM, which will carry spectroscopic instrumentation with sufficient sensitivity and resolution to identify X-ray photoionized nebulae in a wide range of astrophysical environments.

  10. Characterization of a hybrid target multi-keV x-ray source by a multi-parameter statistical analysis of titanium K-shell emission

    DOE PAGES

    Primout, M.; Babonneau, D.; Jacquet, L.; ...

    2015-11-10

    We studied the titanium K-shell emission spectra from multi-keV x-ray source experiments with hybrid targets on the OMEGA laser facility. Using the collisional-radiative TRANSPEC code, dedicated to K-shell spectroscopy, we reproduced the main features of the detailed spectra measured with the time-resolved MSPEC spectrometer. We developed a general method to infer the N_e, T_e and T_i characteristics of the target plasma from the spectral analysis (ratio of integrated Lyman-α to Helium-α in-band emission and the peak amplitude of individual line ratios) of the multi-keV x-ray emission. Finally, these thermodynamic conditions are compared to those calculated independently by the radiation-hydrodynamics transport code FCI2.

  11. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is included. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate-1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to rate-15/16 coded 16-PSK). The parity-check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered the best candidates for implementation of a high-speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is included. This study was motivated by the fact that the coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.

  12. A Parameter Study for Modeling Mg ii h and k Emission during Solar Flares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio da Costa, Fatima; Kleint, Lucia, E-mail: frubio@stanford.edu

    2017-06-20

    Solar flares show highly unusual spectra in which the thermodynamic conditions of the solar atmosphere are encoded. Current models are unable to fully reproduce the spectroscopic flare observations, especially the single-peaked spectral profiles of the Mg ii h and k lines. We aim to understand the formation of the chromospheric and optically thick Mg ii h and k lines in flares through radiative transfer calculations. We take a flare atmosphere obtained from a simulation with the radiative hydrodynamic code RADYN as input for radiative transfer modeling with the RH code. By iteratively changing this model atmosphere and varying thermodynamic parameters such as temperature, electron density, and velocity, we study their effects on the emergent intensity spectra. We reproduce the typical single-peaked Mg ii h and k flare spectral shape and approximate the intensity ratios to the subordinate Mg ii lines by increasing either densities, temperatures, or velocities at the line-core formation height range. Additionally, by combining unresolved upflows and downflows up to ~250 km s^-1 within one resolution element, we reproduce the widely broadened line wings. While we cannot unambiguously determine which mechanism dominates in flares, future modeling efforts should investigate unresolved components, additional heat dissipation, larger velocities, and higher densities, and combine the analysis of multiple spectral lines.

  13. Stellar and wind parameters of massive stars from spectral analysis

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.

    2017-07-01

    The only way to deduce information from stars is to decode, in an appropriate way, the radiation they emit. Spectroscopy can do this and derive many properties of stars. In this work we seek to derive simultaneously the stellar and wind characteristics of A and B supergiant stars. The stellar properties comprise the effective temperature, the surface gravity, the stellar radius, the micro-turbulence velocity, the rotational velocity and, finally, the chemical composition. For wind properties we consider the mass-loss rate, the terminal velocity and the line-force parameters (α, k and δ) obtained from the standard line-driven wind theory. To model the data we use the radiative transport code Fastwind together with the newest hydrodynamical solutions derived with the Hydwind code, which needs stellar and line-force parameters to obtain a wind solution. A grid of spectral models of massive stars is created, and the physical properties are determined by fitting spectral lines to the observed spectra. These fittings provide an estimate of the line-force parameters, whose theoretical calculation is extremely complex. Furthermore, we expect to confirm that the hydrodynamical solutions obtained with a value of δ slightly larger than ~0.25, called δ-slow solutions, describe quite reliably the radiation line-driven winds of A and late-B supergiant stars, and at the same time explain disagreements between observational data and theoretical models for the Wind-Momentum Luminosity Relationship (WLR).

  14. Stellar and wind parameters of massive stars from spectral analysis

    NASA Astrophysics Data System (ADS)

    Araya, Ignacio; Curé, Michel

    2017-11-01

    The only way to deduce information from stars is to decode, in an appropriate way, the radiation they emit. Spectroscopy can do this and derive many properties of stars. In this work we seek to derive simultaneously the stellar and wind characteristics of a wide range of massive stars. The stellar properties comprise the effective temperature, the surface gravity, the stellar radius, the micro-turbulence velocity, the rotational velocity and the Si abundance. For wind properties we consider the mass-loss rate, the terminal velocity and the line-force parameters α, k and δ (from the line-driven wind theory). To model the data we use the radiative transport code Fastwind together with the newest hydrodynamical solutions derived with the Hydwind code, which needs stellar and line-force parameters to obtain a wind solution. A grid of spectral models of massive stars is created, and the physical properties are determined by fitting spectral lines to the observed spectra. These fittings provide an estimate of the line-force parameters, whose theoretical calculation is extremely complex. Furthermore, we expect to confirm that the hydrodynamical solutions obtained with a value of δ slightly larger than ~0.25, called δ-slow solutions, describe quite reliably the radiation line-driven winds of A and late-B supergiant stars, and at the same time explain disagreements between observational data and theoretical models for the Wind-Momentum Luminosity Relationship (WLR).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebrat, J. F.; Plaschy, M.; Tommasi, J.

    This paper presents the analysis of selected experiments of the MUSE-4 program with the ERANOS-2.1 code system using either the JEF-2.2 library or the more recent JEFF-3.1 one; focus has been given to the reactivity calculations and to the spectral indices of the MUSE-4-Reference and SC3-Pb core configurations. Both libraries provide comparable results for the k_eff of the Reference configuration due to large negative and positive compensating reactivity effects, whereas there is a -400 pcm effect on SC3-Pb. A perturbation analysis demonstrates that the large negative total reactivity effect in this configuration comes from the increase of the lead contribution and from the decrease of the sodium contribution. The new library improves the C/E for all the spectral indices in the fuel zone and for most of them in the lead zone, except for ²³⁸U and ²⁴³Am. (authors)

  16. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.

  17. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  18. Probabilistic Seismic Hazard Assessment for Iraq

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, Tuna; Gok, Rengin; Abdulnaby, Wathiq

    Probabilistic Seismic Hazard Assessments (PSHA) form the basis for most contemporary seismic provisions in building codes around the world. The current building code of Iraq was published in 1997. An update to this edition is in the process of being released. However, there are no national PSHA studies in Iraq for the new building code to refer to for seismic loading in terms of spectral accelerations. As an interim solution, the new draft building code considered referring to PSHA results produced in the late 1990s as part of the Global Seismic Hazard Assessment Program (GSHAP; Giardini et al., 1999). However, these results are: a) more than 15 years outdated, b) PGA-based only, necessitating rough conversion factors to calculate spectral accelerations at 0.3 s and 1.0 s for seismic design, and c) at a probability level of 10% chance of exceedance in 50 years, not the 2% that the building code requires. Hence there is a pressing need for a new, updated PSHA for Iraq.

  19. Comparisons of 'Identical' Simulations by the Eulerian Gyrokinetic Codes GS2 and GYRO

    NASA Astrophysics Data System (ADS)

    Bravenec, R. V.; Ross, D. W.; Candy, J.; Dorland, W.; McKee, G. R.

    2003-10-01

    A major goal of the fusion program is to be able to predict tokamak transport from first-principles theory. To this end, the Eulerian gyrokinetic code GS2 was developed years ago and continues to be improved [1]. Recently, the Eulerian code GYRO was developed [2]. These codes are not subject to the statistical noise inherent to particle-in-cell (PIC) codes, and have been very successful in treating electromagnetic fluctuations. GS2 is fully spectral in the radial coordinate while GYRO uses finite differences and "banded" spectral schemes. To gain confidence in nonlinear simulations of experiment with these codes, "apples-to-apples" comparisons (identical profile inputs, flux-tube geometry, two species, etc.) are first performed. We report on a series of linear and nonlinear comparisons (with overall agreement) including kinetic electrons, collisions, and shaped flux surfaces. We also compare nonlinear simulations of a DIII-D discharge to measurements of not only the fluxes but also the turbulence parameters. [1] F. Jenko, et al., Phys. Plasmas 7, 1904 (2000) and refs. therein. [2] J. Candy, J. Comput. Phys. 186, 545 (2003).

  20. Shot noise startup of the 6 nm SASE FEL at the TESLA Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierini, P.; Fawley, W.M.

    We present here an analysis of the shot-noise startup of the 6 nm SASE FEL proposed for the TESLA Test Facility at DESY. The statistics of the saturation length and output power due to the intrinsic randomness of the noise startup are investigated with the 2D time-dependent code GINGER, which takes propagation effects into account and models shot noise. We then provide estimates for the spectral content and linewidth of the emitted radiation and describe its spiking characteristics. The output radiation will develop superradiant spikes seeded by the shot noise in the electron beam, which can enhance the average emitted power at the expense of some spectral broadening.

  1. An Adaptive Approach to a 2.4 kb/s LPC Speech Coding System.

    DTIC Science & Technology

    1985-07-01

    …laryngeal cancer). Spectral estimation is at the foundation of speech analysis for all these goals and accurate AR model estimation in noise is…

  2. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantages of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first and second schemes can transmit 4 log₂ N and 2 log₂ N bits of information per network use, respectively.

  3. A Novel Technique to Detect Code for SAC-OCDMA System

    NASA Astrophysics Data System (ADS)

    Bharti, Manisha; Kumar, Manoj; Sharma, Ajay K.

    2018-04-01

    The main task of an optical code division multiple access (OCDMA) system is the detection of the code used by a user in the presence of multiple access interference (MAI). In this paper, a new method of detection known as XOR subtraction detection for spectral amplitude coding OCDMA (SAC-OCDMA) based on double weight codes is proposed and presented. As MAI is the main source of performance deterioration in OCDMA systems, the SAC technique is used in this paper to eliminate the effect of MAI to a large extent. A comparative analysis is then made between the proposed scheme and other conventional detection schemes such as complementary subtraction detection, AND subtraction detection and NAND subtraction detection. The system performance is characterized by Q-factor, BER and received optical power (ROP) with respect to input laser power and fiber length. The theoretical and simulation investigations reveal that the proposed detection technique provides a better quality factor, security and received power in comparison to the conventional techniques. The wide opening of the eye in the case of the proposed technique also proves its robustness.
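
    How a subtraction-type SAC-OCDMA detector cancels MAI can be illustrated with a toy numerical sketch. This uses the conventional complementary-subtraction detector mentioned in the abstract, not the paper's proposed XOR detector, and the small code set below (weight w = 3, in-phase cross-correlation λ = 1) is purely illustrative, not the paper's double-weight codes:

```python
import numpy as np

# Three illustrative spectral-amplitude codes (rows = users, cols = chips).
codes = np.array([
    [1, 1, 0, 0, 1, 0],   # user 0
    [0, 1, 1, 0, 0, 1],   # user 1
    [1, 0, 0, 1, 0, 1],   # user 2
])
w, lam = 3, 1                      # code weight and in-phase cross-correlation

active = np.array([1, 1, 0])       # users 0 and 1 send a "1" bit
received = active @ codes          # superposed spectral chips at the receiver

for i, c in enumerate(codes):
    direct = received @ c                          # signal + MAI
    complement = received @ (1 - c)                # interferers' leftover chips
    decision = direct - (lam / (w - lam)) * complement
    print(f"user {i}: decision = {decision:.1f}")  # 3.0, 3.0, 0.0
```

    Each interferer contributes λ to the direct correlation and w − λ to the complement, so the weighted subtraction cancels the MAI exactly: active users decode to w = 3, the idle user to 0.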

  4. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, the data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral threshold are identified as belonging to that specific feature class. This terrain extraction process is implemented in Matlab. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
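
    The spectral-threshold step described above reduces to a per-point RGB window test. A minimal sketch (the original implementation is in Matlab; the threshold values and random data here are illustrative, since the study derived its thresholds from sampled training classes):

```python
import numpy as np

# Synthetic stand-in for a coloured point cloud: XYZ coordinates plus RGB.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(8, 3))        # point coordinates
rgb = rng.integers(0, 256, size=(8, 3))      # per-point colour from the camera

# Hypothetical spectral window for the "terrain" class (lower/upper RGB bounds).
lo = np.array([60, 30, 10])
hi = np.array([160, 120, 90])

# A point is classified as terrain when all three channels fall in the window.
is_terrain = np.all((rgb >= lo) & (rgb <= hi), axis=1)
terrain_points = xyz[is_terrain]
print(f"{int(is_terrain.sum())} of {len(xyz)} points classified as terrain")
```

    In practice one window per feature class would be preset from the sampled classes, and geometric filtering would still be applied alongside the spectral test.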

  5. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response of the Space Shuttle Main Engine (SSME) heat exchanger turnaround vane under fluctuating random pressure loads. It uses a newly developed frequency- and distance-dependent correlation model that can represent the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis, and a fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, the convection velocity coefficient, the stress concentration factor, structural damping, and the thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new design is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

  6. The Virtual Observatory Service TheoSSA: Establishing a Database of Synthetic Stellar Flux Standards II. NLTE Spectral Analysis of the OB-Type Subdwarf Feige 110

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.

    2014-01-01

    Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to trans-iron elements are used to provide a reliable synthetic spectrum to compare with observations. Methods. In the case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47 250 ± 2000 K, log g = 6.00 ± 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes.

  7. EEGgui: a program used to detect electroencephalogram anomalies after traumatic brain injury.

    PubMed

    Sick, Justin; Bray, Eric; Bregy, Amade; Dietrich, W Dalton; Bramlett, Helen M; Sick, Thomas

    2013-05-21

    Identifying and quantifying pathological changes in brain electrical activity is important for investigations of brain injury and neurological disease. An example is the development of epilepsy, a secondary consequence of traumatic brain injury. While certain epileptiform events can be identified visually from electroencephalographic (EEG) or electrocorticographic (ECoG) records, quantification of these pathological events has proved to be more difficult. In this study we developed MATLAB-based software to assist detection of pathological brain electrical activity following traumatic brain injury (TBI), and we present our MATLAB code used for the analysis of the ECoG. The software was developed using MATLAB and features of the open-access EEGLAB. EEGgui is a graphical user interface on the MATLAB programming platform that allows scientists who are not proficient in computer programming to perform a number of elaborate analyses on ECoG signals, including power spectral density (PSD), short-time Fourier analysis and spectral entropy (SE). ECoG records used for demonstration of this software were derived from rats that had undergone traumatic brain injury one year earlier. The software provides a graphical user interface for displaying ECoG activity and calculating normalized power density, using the fast Fourier transform, for the major brain-wave frequency bands (delta, theta, alpha, beta1, beta2 and gamma). The software further detects events in which the power density for these frequency bands exceeds normal ECoG by more than 4 standard deviations. We found that epileptic events could be identified and distinguished from a variety of ECoG phenomena associated with normal changes in behavior. We further found that analysis of spectral entropy was less effective in distinguishing epileptic from normal changes in ECoG activity.
    The software presented here is a successful modification of EEGLAB in the MATLAB environment that allows detection of epileptiform ECoG signals in animals after TBI. The code allows import of large EEG or ECoG data records as standard text files and uses the fast Fourier transform as the basis for detection of abnormal events. The software can also be used to monitor injury-induced changes in spectral entropy if required. We hope that the software will be useful for other investigators in the field of traumatic brain injury and will stimulate future advances in quantitative analysis of brain electrical activity after neurological injury or disease.
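
    The detection principle described above (FFT band power compared against a baseline, flagging excursions beyond 4 standard deviations) can be sketched as follows. This is an illustrative reimplementation in Python, not EEGgui's MATLAB code; the band edges, segment length and synthetic signal are assumptions:

```python
import numpy as np

fs = 256                                   # sampling rate (Hz), illustrative
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def band_powers(segment, fs):
    """Sum FFT power over each frequency band of one segment."""
    freqs = np.fft.rfftfreq(len(segment), d=1 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Baseline: 1-s noise segments; "epileptiform" segment: strong 6 Hz burst.
rng = np.random.default_rng(1)
t = np.arange(fs) / fs
baseline = [band_powers(rng.normal(size=fs), fs) for _ in range(50)]
event = band_powers(rng.normal(size=fs) + 10 * np.sin(2 * np.pi * 6 * t), fs)

# Flag any band whose power exceeds the baseline mean by > 4 SD.
for name in bands:
    vals = np.array([b[name] for b in baseline])
    z = (event[name] - vals.mean()) / vals.std()
    print(f"{name:5s} z = {z:8.1f}{'  <-- event' if z > 4 else ''}")
```

    Only the theta band is flagged for this synthetic 6 Hz burst, mirroring how a band-limited epileptiform discharge would stand out against baseline ECoG.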

  8. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial-range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k_⊥^−1.3. The kinetic code shows a spectral slope of k_⊥^−1.5 for the smaller simulation domain, and k_⊥^−1.3 for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial-range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
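
    Spectral indices like the k_⊥^−1.3 and k_⊥^−1.5 quoted above are typically measured by a least-squares fit of log E(k_⊥) against log k_⊥ over the inertial range. A minimal sketch on synthetic data (the simulation spectra themselves are not reproduced here; the inertial-range bounds are assumptions):

```python
import numpy as np

# Mock power spectrum with a -1.3 inertial-range slope plus log-normal scatter.
rng = np.random.default_rng(4)
k = np.logspace(0, 2, 50)                                    # wavenumbers
E = k ** -1.3 * np.exp(rng.normal(scale=0.05, size=k.size))  # mock E(k_perp)

# Fit a straight line in log-log space over an assumed inertial range.
inertial = (k >= 2) & (k <= 50)
slope, _ = np.polyfit(np.log(k[inertial]), np.log(E[inertial]), 1)
print(f"fitted spectral index: {slope:.2f}")                 # close to -1.3
```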

  9. Spectral Cauchy Characteristic Extraction: Gravitational Waves and Gauge Free News

    NASA Astrophysics Data System (ADS)

    Handmer, Casey; Szilagyi, Bela; Winicour, Jeff

    2015-04-01

    We present a fast, accurate spectral algorithm for the characteristic evolution of the full non-linear vacuum Einstein field equations in the Bondi framework. Developed within the Spectral Einstein Code (SpEC), we demonstrate how spectral Cauchy characteristic extraction produces gravitational News without confounding gauge effects. We explain several numerical innovations and demonstrate speed, stability, accuracy, exponential convergence, and consistency with existing methods. We highlight its capability to deliver physical insights in the study of black hole binaries.

  10. EDDIE Seismology: Introductory spectral analysis for undergraduates

    NASA Astrophysics Data System (ADS)

    Soule, D. C.; Gougis, R.; O'Reilly, C.

    2016-12-01

    We present a spectral seismology lesson in which students use spectral analysis to describe the frequency of seismic arrivals based on a conceptual presentation of waveforms and filters. The goal is for students to go beyond basic waveform terminology and relate time-domain signals to their counterparts in the frequency domain. Although seismology instruction commonly engages students in analysis of authentic seismological data, this is less true for lower-level undergraduate seismology instruction due to the coding barriers of many seismological analysis tasks. To address this, our module uses Seismic Canvas (Kroeger, 2015; https://seiscode.iris.washington.edu/projects/seismiccanvas), a graphically interactive application for accessing, viewing and analyzing waveform data, which we use to plot earthquake data in the time domain. Once students are familiarized with the general components of the waveform (i.e. frequency, wavelength, amplitude and period), they use Seismic Canvas to transform the data into the frequency domain. Bypassing the mathematics of Fourier series allows focus on conceptual understanding by plotting and manipulating seismic data in both the time and frequency domains. Pre/post-tests showed significant improvements in students' use of seismograms and spectrograms to estimate the frequency content of the primary wave, which demonstrated students' understanding of frequency and of how data on the spectrogram and seismogram are related. Students were also able to identify the time and frequency of the largest-amplitude arrival, indicating understanding of amplitude and use of a spectrogram as an analysis tool. Students were also asked to compare plots of raw data and the same data filtered with a high-pass filter, and to identify the filter used to create the second plot. Students demonstrated an improved understanding of how frequency content can be removed from a signal in the spectral domain.
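
    The time-domain/frequency-domain relationship the lesson targets can be sketched numerically. The lesson itself uses Seismic Canvas on real waveforms; this illustrative two-component "seismogram" only mirrors the concept:

```python
import numpy as np

fs = 100.0                                  # samples per second
t = np.arange(0, 20, 1 / fs)                # 20 s synthetic record

# Time domain: the sum of a 1 Hz and a weaker 8 Hz component.
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)

# Frequency domain: the amplitude spectrum separates the two components.
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
amp = np.abs(np.fft.rfft(signal))
peaks = freqs[np.argsort(amp)[-2:]]         # the two strongest spectral lines
print([float(f) for f in sorted(peaks)])    # [1.0, 8.0]
```

    A high-pass filter, as in the lesson's final exercise, would simply zero the spectrum below a cutoff before transforming back to the time domain.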

  11. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    …signal recovering [?, ?]. The time-varying coded apertures can be implemented using micro-piezo motors [?] or through the use of Digital Micromirror … feasibility of this testbed by developing a Digital-Micromirror-Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement … Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging…

  12. Utilizing Spectrum Efficiently (USE)

    DTIC Science & Technology

    2011-02-28

    …4.8 Space-Time Coded Asynchronous DS-CDMA with Decentralized MAI Suppression: Performance and Spectral Efficiency. In [60] … the number of users supported at a given signal-to-interference ratio in asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems was examined. It was…

  13. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric computer intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is of public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). 
Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
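
    The Lomb-Scargle-plus-permutation-test idea described in this record can be sketched in a few lines. The following is an illustrative Python analogue using `scipy.signal.lombscargle` (not the Fortran 77 programs the record describes); the signal frequency, grid, and permutation count are all invented for the demonstration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Unevenly sampled series: a 0.1 cycles/unit sinusoid observed at random times.
t = np.sort(rng.uniform(0.0, 100.0, 120))
y = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)
y -= y.mean()

freqs = np.linspace(0.01, 0.5, 400)        # trial frequencies, cycles/unit
omega = 2 * np.pi * freqs                  # lombscargle expects angular freqs
pgram = lombscargle(t, y, omega)

# Permutation test: shuffling y destroys any temporal structure, so the
# maximum periodogram value of each shuffle samples the null distribution.
n_perm = 200
null_max = np.array([lombscargle(t, rng.permutation(y), omega).max()
                     for _ in range(n_perm)])
conf95 = np.quantile(null_max, 0.95)       # 95% confidence level

peak_freq = freqs[np.argmax(pgram)]
print(f"peak at {peak_freq:.3f} cycles/unit, "
      f"significant: {pgram.max() > conf95}")
```

    The permutation approach makes no assumption about the null distribution of the periodogram ordinates, which is the record's stated motivation for preferring it over parametric significance tests when neighbouring frequencies are correlated.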

  14. Three-dimensional dominant frequency mapping using autoregressive spectral analysis of atrial electrograms of patients in persistent atrial fibrillation.

    PubMed

    Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S

    2016-03-08

    Areas with high-frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy for eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is limited on alternative spectral estimation techniques that can have better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded as 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23 %). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz).
We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences to the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
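
    The AR (Yule-Walker) route to a dominant frequency, with an FFT cross-check, can be sketched as below. This is a minimal illustration on a synthetic 6 Hz signal, not the clinical pipeline: the sampling rate, model order, and signal are all assumed for the example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order, nfreq=512):
    """Yule-Walker AR spectrum: solve the Yule-Walker equations for the AR
    coefficients, then evaluate the model PSD on a 0..Nyquist grid."""
    x = x - x.mean()
    n = x.size
    r = np.correlate(x, x, mode="full")[n - 1:] / n    # biased autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])      # Yule-Walker equations
    sigma2 = r[0] - a @ r[1:order + 1]                 # driving-noise variance
    f = np.linspace(0.0, 0.5, nfreq)                   # cycles per sample
    z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
    return f, sigma2 / np.abs(1.0 - z @ a) ** 2

fs = 150.0                                 # illustrative sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
x = np.sin(2 * np.pi * 6.0 * t) \
    + 0.3 * np.random.default_rng(1).standard_normal(t.size)

f, psd = ar_psd(x, order=24)
df_ar = f[np.argmax(psd)] * fs             # AR dominant frequency, Hz

spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
df_fft = np.fft.rfftfreq(x.size, 1.0 / fs)[np.argmax(spec)]
print(f"AR DF = {df_ar:.2f} Hz, FFT DF = {df_fft:.2f} Hz")
```

    The AR spectrum's frequency grid can be made arbitrarily dense regardless of segment length, which is the resolution advantage the record attributes to AR-based estimation.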

  15. Spectral Analysis, Synthesis, & Energy Distributions of Nearby E+A Galaxies Using SDSS-IV MaNGA

    NASA Astrophysics Data System (ADS)

    Weaver, Olivia A.; Anderson, Miguel Ricardo; Wally, Muhammad; James, Olivia; Falcone, Julia; Liu, Allen; Wallack, Nicole; Liu, Charles; SDSS Collaboration

    2017-01-01

    Utilizing data from the Mapping Nearby Galaxies at APO (MaNGA) Survey (MaNGA Product Launch-4, or MPL-4) of the latest generation of the Sloan Digital Sky Survey (SDSS-IV), we identified nine post-starburst (E+A) systems that lie within the Green Valley transition zone. We identified the E+A galaxies by their SDSS single-fiber spectra and u-r colors, then confirmed their classification as post-starburst by coding/plotting methods and spectral synthesis codes (FIREFLY and PIPE3D), as well as with their Spectral Energy Distributions (SEDs) from 0.15 µm to 22 µm, using GALEX, SDSS, 2MASS, and WISE data. We produced maps of Gaussian-fitted fluxes, equivalent widths, stellar velocities, metallicities and ages. We also produced spectral line ratio diagrams to classify regions of stellar populations of the galaxies. We found that our sample of E+As retain their post-starburst properties across the entire galaxy, not just at their center. We detected a matching trend line in the ultraviolet and optical bands, consistent with the expected SEDs for an E+A galaxy, and also through the J, H and Ks bands, except for one object. We classified one of the nine galaxies as a luminous infrared galaxy, unusual for a post-starburst object. Our group seeks to further study stellar population properties, spectral energy distributions and quenching properties in E+A galaxies, and investigate their role in galaxy evolution as a whole. This work was supported by the Alfred P. Sloan Foundation via the SDSS-IV Faculty and Student Team (FAST) initiative, ARC Agreement #SSP483 to the CUNY College of Staten Island. This work was also supported by grants to The American Museum of Natural History and the CUNY College of Staten Island from the National Science Foundation.

  16. Recent Livermore Excitation and Dielectronic Recombination Measurements for Laboratory and Astrophysical Spectral Modeling

    NASA Technical Reports Server (NTRS)

    Beiersdorfer, P.; Brown, G. V.; Gu, M.-F.; Harris, C. L.; Kahn, S. M.; Kim, S.-H.; Neill, P. A.; Savin, D. W.; Smith, A. J.; Utter, S. B.

    2000-01-01

    Using the EBIT facility in Livermore we produce definitive atomic data for input into spectral synthesis codes. Recent measurements of line excitation and dielectronic recombination of highly charged K-shell and L-shell ions are presented to illustrate this point.

  17. The identification and characterization of non-coding and coding RNAs and their modified nucleosides by mass spectrometry

    PubMed Central

    Gaston, Kirk W; Limbach, Patrick A

    2014-01-01

    The analysis of ribonucleic acids (RNA) by mass spectrometry has been a valuable analytical approach for more than 25 years. In fact, mass spectrometry has become a method of choice for the analysis of modified nucleosides from RNA isolated out of biological samples. This review summarizes recent progress that has been made in both nucleoside and oligonucleotide mass spectral analysis. Applications of mass spectrometry in the identification, characterization and quantification of modified nucleosides are discussed. At the oligonucleotide level, advances in modern mass spectrometry approaches combined with the standard RNA modification mapping protocol enable the characterization of RNAs of varying lengths, ranging from low-molecular-weight short interfering RNAs (siRNAs) to the extremely large 23S rRNAs. New variations and improvements to this protocol are reviewed, including top-down strategies, as these developments now enable qualitative and quantitative measurements of RNA modification patterns in a variety of biological systems. PMID:25616408

  19. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals using chronically implanted high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the potentials evoked by conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
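
    The record's "regularized multivariate classifier" on high-dimensional evoked responses can be illustrated with a ridge-regularized one-vs-rest linear classifier. This is a generic sketch on synthetic data, not the authors' method or data; trial counts, dimensionality, and the penalty are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy stand-in for the real data: trials of high-dimensional evoked
# responses, each belonging to one of 4 "vocalization" classes.
n_per_class, n_dim, n_class = 30, 200, 4
means = rng.normal(0.0, 1.0, (n_class, n_dim))
X = np.vstack([m + rng.normal(0.0, 2.0, (n_per_class, n_dim)) for m in means])
y = np.repeat(np.arange(n_class), n_per_class)

# Ridge-regularized one-vs-rest linear classifier: regression onto one-hot
# targets with an L2 penalty keeps the weight estimates stable when the
# dimensionality exceeds the number of trials.
lam = 10.0
Y = np.eye(n_class)[y]                           # one-hot targets
Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # append a bias column
W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ Y)

pred = np.argmax(Xb @ W, axis=1)
acc = np.mean(pred == y)
print(f"training accuracy = {acc:.2f}")
```

    The L2 penalty is what makes classification feasible at all in this regime: with more features than trials, the unregularized normal equations are singular.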

  1. Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation

    NASA Astrophysics Data System (ADS)

    Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward

    1988-08-01

    A background simulation code developed at Aerodyne Research, Inc., called AERIE is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat-warning sensors (for both aircraft and missiles). The code is a first-principles model that can be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included, with allowance for wind-driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high-frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for simulating the various scene components are described, along with various "engineering level" approximations incorporated to reduce the computational complexity of the simulation.

  2. pyblocxs: Bayesian Low-Counts X-ray Spectral Analysis in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, A.; Kashyap, V.; Refsdal, B.; van Dyk, D.; Connors, A.; Park, T.

    2011-07-01

    Typical X-ray spectra have low counts and should be modeled using the Poisson distribution. However, the χ2 statistic is often applied instead, under the assumption that the data follow a Gaussian distribution. Various weightings of the statistic or binnings of the data are performed to overcome the low-count issues. However, such modifications introduce biases and/or a loss of information. Standard modeling packages such as XSPEC and Sherpa provide the Poisson likelihood and allow computation of rudimentary MCMC chains, but so far do not allow for setting up a full Bayesian model. We have implemented a sophisticated Bayesian MCMC-based algorithm to carry out spectral fitting of low-count sources in the Sherpa environment. The code is a Python extension to Sherpa and allows fitting a predefined Sherpa model to high-energy X-ray spectral data and other generic data. We present the algorithm and discuss several issues related to the implementation, including flexible definition of priors and allowing for variations in the calibration information.
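
    The core idea, sampling a posterior built on the Poisson likelihood rather than χ2, can be illustrated outside Sherpa. This sketch fits a toy power-law spectrum with a random-walk Metropolis-Hastings sampler; the energy grid, parameters, priors, and proposal scales are all invented for the example and this is not the pyblocxs implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated low-count spectrum: Poisson counts from a toy power law
# (illustrative energy grid and parameters, no instrument response).
energies = np.linspace(0.5, 7.0, 40)          # keV
true_amp, true_gamma = 30.0, 1.7

def model(amp, gamma):
    return amp * energies ** (-gamma)

counts = rng.poisson(model(true_amp, true_gamma))

def log_post(amp, gamma):
    """Poisson log-likelihood plus flat priors on a bounded range."""
    if not (0.0 < amp < 100.0 and 0.0 < gamma < 5.0):
        return -np.inf
    mu = model(amp, gamma)
    return np.sum(counts * np.log(mu) - mu)

# Random-walk Metropolis-Hastings over (amplitude, photon index).
chain = np.empty((5000, 2))
theta = np.array([10.0, 1.0])
lp = log_post(*theta)
for i in range(chain.shape[0]):
    prop = theta + rng.normal(scale=[1.0, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    chain[i] = theta

amp_est, gamma_est = chain[1000:].mean(axis=0)  # discard burn-in
print(f"posterior means: amp ≈ {amp_est:.1f}, gamma ≈ {gamma_est:.2f}")
```

    No Gaussian approximation or binning is needed: the Poisson likelihood is evaluated directly on the raw counts, which is the record's motivation for a fully Bayesian treatment.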

  3. Highly-Damped Spectral Acceleration as a Ground Motion Intensity Measure for Estimating Collapse Vulnerability of Buildings

    NASA Astrophysics Data System (ADS)

    Buyco, K.; Heaton, T. H.

    2016-12-01

    Current U.S. seismic code and performance-based design recommendations quantify ground motion intensity using 5%-damped spectral acceleration when estimating the collapse vulnerability of buildings. This intensity measure works well for predicting inter-story drift due to moderate shaking, but other measures have been shown to be better for estimating collapse risk. We propose using highly-damped (>10%) spectral acceleration to assess collapse vulnerability. As damping is increased, the spectral acceleration at a given period T begins to behave like a weighted average of the corresponding lightly-damped (i.e., 5%) spectrum over a range of periods, with the weights for periods longer than T increasing as damping increases. Using high damping is physically intuitive for two reasons. Firstly, ductile buildings dissipate a large amount of hysteretic energy before collapse and thus behave more like highly-damped systems. Secondly, heavily damaged buildings experience period-lengthening, giving further credence to the weighted-averaging property of highly-damped spectral acceleration. To determine the optimal damping value(s) for this ground motion intensity measure, we conduct incremental dynamic analysis for a suite of ground motions on several different mid-rise steel buildings and select the damping value yielding the lowest dispersion of intensity at the collapse threshold. Spectral acceleration calculated with damping as high as 70% has been shown to be a better indicator of collapse than that with 5% damping.
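
    The intensity measure itself is straightforward to compute: integrate a linear single-degree-of-freedom oscillator through the record at the chosen damping ratio and take the peak response. A minimal sketch using explicit central-difference integration on a synthetic noise burst (not a real accelerogram; the period and record are assumptions):

```python
import numpy as np

def spectral_accel(ag, dt, period, zeta):
    """Peak pseudo-spectral acceleration Sa = w^2 * max|u| of a linear SDOF
    oscillator u'' + 2*zeta*w*u' + w^2*u = -ag(t), integrated with the
    explicit central-difference scheme (stable here since dt << period)."""
    w = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * zeta * w, w * w
    a0 = m / dt**2 + c / (2.0 * dt)
    a1 = k - 2.0 * m / dt**2
    a2 = m / dt**2 - c / (2.0 * dt)
    u_prev = u = 0.0
    umax = 0.0
    for p in -ag:                              # effective force per step
        u_next = (p - a1 * u - a2 * u_prev) / a0
        u_prev, u = u, u_next
        umax = max(umax, abs(u))
    return w * w * umax

# Toy "ground motion": a decaying noise burst (not a real record).
rng = np.random.default_rng(3)
dt = 0.005
ag = rng.standard_normal(4000) * np.exp(-np.linspace(0.0, 4.0, 4000))

sa_05 = spectral_accel(ag, dt, period=1.0, zeta=0.05)   # code-standard 5%
sa_70 = spectral_accel(ag, dt, period=1.0, zeta=0.70)   # highly damped
print(f"Sa(1 s, 5%) = {sa_05:.3f}, Sa(1 s, 70%) = {sa_70:.3f}")
```

    The highly-damped ordinate is smaller and smoother as a function of period, which is what gives it the weighted-averaging behavior the record describes.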

  4. RIO: a new computational framework for accurate initial data of binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-06-01

    We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high-resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment and the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.

  5. Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    Spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that, in addition to dust grains (e.g., silicates), frozen molecular species (astrophysical ices such as H2O, CO, CO2, CH3OH) are also present in circumstellar environments. In this work we present a study of the modeling of low- and high-mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by radiation (UV, cosmic rays) coming from stars in the formation process. This is important for characterizing the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) related to the low-mass protostar Elias29 and the high-mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al. 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images at different wavelengths of the studied objects. The functionality of this code is based on the Monte Carlo methodology in addition to Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as references for comparison with the modeled data. The optical constants in the infrared, used as input in the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). 
We show from this study that some absorption bands in the infrared, observed in the spectra of Elias29 and W33A, can arise after the ices around the protostars are processed by radiation coming from the central object. In addition, we were also able to compare the observational data for these two objects with those obtained in the modeling. The authors would like to thank the agencies FAPESP (JP#2009/18304-0 and PHD#2013/07657-5).

  6. Wide spectral-range imaging spectroscopy of photonic crystal microbeads for multiplex biomolecular assay applications

    NASA Astrophysics Data System (ADS)

    Li, Jianping

    2014-05-01

    Suspension assay using optically color-encoded microbeads is a novel way to increase the reaction speed and multiplexing of biomolecular detection and analysis. To boost the detection speed, a hyperspectral imaging (HSI) system is of great interest for quickly decoding the color codes of the microcarriers. An imaging Fourier transform spectrometer (IFTS) is a potential candidate for this task due to its advantages in HSI measurement. However, conventional IFTS is only popular in IR spectral bands because it is easier to track the scanning mirror position at longer wavelengths so that the fundamental Nyquist criterion can be satisfied when sampling the interferograms; sampling mechanisms for shorter-wavelength IFTS used to be very sophisticated, high-cost and bulky. In order to overcome this handicap and make better use of its advantages for HSI applications, a new wide-spectral-range IFTS platform is proposed based on an optical beam-folding position-tracking technique. This simple technique has successfully extended the spectral range of an IFTS to cover 350-1000 nm. Test results prove that the system has achieved good spectral and spatial resolving performance with instrumentation flexibility. Accurate and fast measurement results on novel colloidal photonic crystal microbeads also demonstrate its practical potential for high-throughput and multiplex suspension molecular assays.
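
    The interferogram-to-spectrum step and the Nyquist criterion mentioned in this record can be sketched with a two-line toy source (the wavenumbers, amplitudes, and sampling step are invented; this is not the instrument's processing chain):

```python
import numpy as np

# Two monochromatic lines: wavelengths 0.8 µm and 0.5 µm, i.e. wavenumbers
# 1.25 and 2.0 cycles/µm, with different intensities (values illustrative).
sigma = np.array([1.25, 2.0])                 # cycles per µm
amp = np.array([0.6, 1.0])

# The interferogram is a sum of cosines in optical path difference x.
# Nyquist: the sampling step must satisfy dx < 1/(2*sigma_max) = 0.25 µm.
dx = 0.05                                     # µm, well inside the criterion
x = np.arange(4000) * dx
igram = (amp[:, None] * np.cos(2 * np.pi * sigma[:, None] * x)).sum(axis=0)

# Fourier transforming the sampled interferogram recovers the spectrum.
spec = np.abs(np.fft.rfft(igram))
wavenumber = np.fft.rfftfreq(x.size, dx)      # cycles per µm
peaks = np.sort(wavenumber[np.argsort(spec)[-2:]])
print("recovered line positions:", peaks)     # ≈ [1.25, 2.0] cycles/µm
```

    Undersampling (dx above 0.25 µm here) would alias the shorter-wavelength line to a false position, which is why accurate mirror-position tracking at short wavelengths is the record's central engineering problem.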

  7. FSFE: Fake Spectra Flux Extractor

    NASA Astrophysics Data System (ADS)

    Bird, Simeon

    2017-10-01

    The fake spectra flux extractor generates simulated quasar absorption spectra from a particle- or adaptive-mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra, if the simulation includes metals. A Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus preserving substantially more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.
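
    The per-particle deposition idea can be sketched as follows. All particle properties and the opacity scaling are invented toy numbers; this is an illustration of the approach, not the FSFE code itself:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sight-line: each "particle" near the line of sight has a peculiar
# velocity, a column-density weight, and a thermal width (all illustrative).
n_part = 50
v_part = rng.normal(0.0, 60.0, n_part)        # line-of-sight velocities, km/s
weight = rng.uniform(0.5, 1.5, n_part)        # relative column densities
b = 15.0                                      # thermal b-parameter, km/s

v_grid = np.linspace(-250.0, 250.0, 1024)     # spectral pixels, km/s

# Per-particle deposition: every particle contributes its own Gaussian
# optical-depth profile centred on its velocity, so small-scale velocity
# structure survives instead of being averaged into density pixels first.
tau = np.zeros_like(v_grid)
for v0, w in zip(v_part, weight):
    tau += w * np.exp(-0.5 * ((v_grid - v0) / b) ** 2)

flux = np.exp(-0.05 * tau)                    # arbitrary opacity scaling
print(f"mean transmitted flux = {flux.mean():.3f}")
```

    Binning densities first and then computing one profile per pixel would blur together particles whose velocities differ by less than a pixel width; depositing per particle is what preserves that structure.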

  8. Label swapper device for spectral amplitude coded optical packet networks monolithically integrated on InP.

    PubMed

    Muñoz, P; García-Olcina, R; Habib, C; Chen, L R; Leijtens, X J M; de Vries, T; Robbins, D; Capmany, J

    2011-07-04

    In this paper the design, fabrication and experimental characterization of a spectral amplitude coded (SAC) optical label swapper monolithically integrated on indium phosphide (InP) is presented. The device has a footprint of 4.8×1.5 mm² and is able to perform the label swapping operations required in SAC at a speed of 155 Mbps. The device was manufactured in InP using a multiple-purpose generic integration scheme. Compared to previous SAC label swapper demonstrations using discrete component assembly, this label swapper chip operates two orders of magnitude faster.

  9. Adaptive coding of MSS imagery. [Multi Spectral band Scanners

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.

    1977-01-01

    A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.
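
    Of the techniques listed, DPCM is the simplest to sketch. Below is a non-adaptive, previous-sample DPCM codec with a uniform residual quantizer, evaluated by SNR as in the record; the scan line is synthetic and the quantizer step is an assumption, so this only illustrates the principle:

```python
import numpy as np

def dpcm_codec(signal, step):
    """Previous-sample DPCM with a uniform residual quantizer, run in a
    closed loop so the encoder prediction matches the decoder exactly."""
    recon = np.empty(signal.size)
    pred = 0.0
    for i, x in enumerate(signal):
        q = np.round((x - pred) / step)    # transmitted code word
        recon[i] = pred + q * step         # decoder reconstruction
        pred = recon[i]                    # next-sample prediction
    return recon

rng = np.random.default_rng(5)
# Toy scan line: a smooth radiance drift plus texture (a stand-in for one
# line of one multispectral band; values are illustrative).
line = 100.0 + np.cumsum(rng.normal(0.0, 2.0, 512))

recon = dpcm_codec(line, step=1.0)
snr_db = 10.0 * np.log10(np.sum(line**2) / np.sum((line - recon) ** 2))
print(f"reconstruction SNR = {snr_db:.1f} dB")
```

    Because the quantizer sits inside the prediction loop, the reconstruction error is bounded by half the quantizer step per sample and does not accumulate; adaptive variants adjust the step to the local signal statistics.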

  10. X-Ray, EUV, UV and Optical Emissivities of Astrophysical Plasmas

    NASA Technical Reports Server (NTRS)

    Raymond, John C.; West, Donald (Technical Monitor)

    2000-01-01

    This grant primarily covered the development of the thermal X-ray emission model code called APEC, which is meant to replace the Raymond and Smith (1977) code. The new code contains far more spectral lines and a great deal of updated atomic data. The code is now available (http://hea-www.harvard.edu/APEC), though new atomic data is still being added, particularly at longer wavelengths. While initial development of the code was funded by this grant, current work is carried on by N. Brickhouse, R. Smith and D. Liedahl under separate funding. Over the last five years, the grant has provided salary support for N. Brickhouse, R. Smith, a summer student (L. McAllister), an SAO predoctoral fellow (A. Vasquez), and visits by T. Kallman, D. Liedahl, P. Ghavamian, J.M. Laming, J. Li, P. Okeke, and M. Martos. In addition to the code development, the grant supported investigations into X-ray and UV spectral diagnostics as applied to shock waves in the ISM, accreting black holes and white dwarfs, and stellar coronae. Many of these efforts are continuing. Closely related work on the shock waves and coronal mass ejections in the solar corona has grown out of the efforts supported by the grant.

  11. Star clusters: age, metallicity and extinction from integrated spectra

    NASA Astrophysics Data System (ADS)

    González Delgado, Rosa M.; Cid Fernandes, Roberto

    2010-01-01

    Integrated optical spectra of star clusters in the Magellanic Clouds and a few Galactic globular clusters are fitted using high-resolution spectral models for single stellar populations. The goal is to estimate the age, metallicity and extinction of the clusters, and evaluate the degeneracies among these parameters. Several sets of evolutionary models that were computed with recent high-spectral-resolution stellar libraries (MILES, GRANADA, STELIB), are used as inputs to the starlight code to perform the fits. The comparison of the results derived from this method and previous estimates available in the literature allow us to evaluate the pros and cons of each set of models to determine star cluster properties. In addition, we quantify the uncertainties associated with the age, metallicity and extinction determinations resulting from variance in the ingredients for the analysis.
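
    The core of such fits, expressing an observed spectrum as a nonnegative combination of population templates, can be sketched with nonnegative least squares. The templates below are invented shapes, not MILES/GRANADA/STELIB models, and extinction and kinematics are omitted:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)

# Three toy "SSP templates" (stand-ins for a real model library; the
# spectral shapes are purely illustrative).
wave = np.linspace(3500.0, 7000.0, 300)
young = 1.0 + np.exp(-(wave - 4000.0) ** 2 / 2.0e5)   # blue bump
inter = 0.5 + wave / 7000.0                           # linear ramp
old = 0.2 + (wave / 7000.0) ** 2                      # red, curved
templates = np.column_stack([young, inter, old])

# Synthetic "observed" cluster spectrum: a known mix plus noise.
true_w = np.array([0.2, 0.0, 0.8])
obs = templates @ true_w + rng.normal(0.0, 0.01, wave.size)

# Nonnegative least squares recovers the light fractions, the core of
# population-synthesis fitting codes.
w, resid = nnls(templates, obs)
print("recovered weights:", np.round(w, 2))
```

    The nonnegativity constraint is essential: population weights are light fractions, and unconstrained least squares would happily return unphysical negative contributions from near-degenerate templates.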

  12. VHF command system study. [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems

    NASA Technical Reports Server (NTRS)

    Gee, T. H.; Geist, J. M.

    1973-01-01

    Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.

  13. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-sideband unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
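
    The OFDM mechanics underlying the comparison can be sketched with a noiseless QPSK-OFDM round trip through a short multipath channel. The LDPC code and turbulence model are omitted, and the sub-carrier count, cyclic-prefix length, and channel taps are assumptions for the illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

n_sub, cp = 64, 16                            # sub-carriers, cyclic prefix
bits = rng.integers(0, 2, 2 * n_sub)          # 2 bits per QPSK symbol

# QPSK mapping: bit pairs -> unit-energy complex symbols.
sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(sym) * np.sqrt(n_sub)        # one OFDM symbol
tx = np.concatenate([tx[-cp:], tx])           # prepend cyclic prefix

# Toy channel: a two-tap multipath response shorter than the CP, so the
# cyclic prefix turns linear convolution into circular convolution.
h = np.array([1.0, 0.4j])
rx = np.convolve(tx, h)[: tx.size]

rx = rx[cp:]                                  # strip the cyclic prefix
Y = np.fft.fft(rx) / np.sqrt(n_sub)
H = np.fft.fft(h, n_sub)                      # per-carrier channel gains
eq = Y / H                                    # one-tap equalization

bits_hat = np.empty_like(bits)
bits_hat[0::2] = (eq.real < 0).astype(int)
bits_hat[1::2] = (eq.imag < 0).astype(int)
print("bit errors:", int(np.sum(bits != bits_hat)))
```

    The one-tap-per-carrier equalization is what makes OFDM attractive on dispersive or scintillating channels, and carrying 2 bits per symbol per carrier (QPSK) rather than 1 (OOK) is the source of the spectral-efficiency advantage the record quantifies.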

  14. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE PAGES

    Makwana, K. D.; Zhdankin, V.; Li, H.; ...

    2015-04-10

    We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution, as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
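
    Estimating such spectral slopes from simulation output reduces to a power-law fit in log-log space, sketched here on synthetic data with the record's fluid-code slope of -1.3 built in (the wavenumber range and scatter are invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic inertial-range spectrum: E(k) ~ k^(-1.3) with mild scatter.
k = np.logspace(0, 2, 40)
E = k ** -1.3 * np.exp(rng.normal(0.0, 0.05, k.size))

# Least-squares fit of the spectral slope in log-log coordinates.
slope, intercept = np.polyfit(np.log10(k), np.log10(E), 1)
print(f"fitted slope = {slope:.2f}")
```

    In practice the fit range must be restricted to the inertial range, excluding the driving scale at low k and the dissipation range at high k, or the recovered index is biased.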

  16. Analysis of unresolved transition arrays in XUV spectral region from highly charged lead ions produced by subnanosecond laser pulse

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Higashiguchi, Takeshi; Li, Bowen; Arai, Goki; Hara, Hiroyuki; Kondo, Yoshiki; Miyazaki, Takanori; Dinh, Thanh-Hung; O'Reilly, Fergal; Sokell, Emma; O'Sullivan, Gerry

    2017-02-01

    Soft x-ray and extreme ultraviolet (XUV) spectra from lead (Pb, Z=82) laser-produced plasmas (LPPs) were measured in the 1.0-7.0 nm wavelength region employing a 150-ps, 1064-nm Nd:YAG laser with focused power densities in the range from 3.1×10^13 W/cm² to 1.4×10^14 W/cm². The flexible atomic code (FAC) and Cowan's suite of atomic structure codes were applied to compute and explain the radiation properties of the observed lead spectra. The most prominent structure in the spectra is a broad double peak, which is produced by Δn=0, n=4-4 and Δn=1, n=4-5 transition arrays emitted from highly charged lead ions. The emission characteristics of the Δn=1, n=4-5 transitions were investigated with the unresolved transition arrays (UTA) model. Numerous new spectral features generated by Δn=1, n=4-5 transitions in ions from Pb21+ to Pb45+ are discerned with the aid of the results from the present computations as well as consideration of previous theoretical predictions and experimental data.

  17. A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere

    NASA Astrophysics Data System (ADS)

    Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael

    2017-05-01

    We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.

  18. Statistical Investigation of Supersonic Downflows in the Transition Region above Sunspots

    NASA Astrophysics Data System (ADS)

    Samanta, Tanmoy; Tian, Hui; Prasad Choudhary, Debi

    2018-06-01

    Downflows at supersonic speeds have been observed in the transition region (TR) above sunspots for more than three decades. These downflows are often seen in different TR spectral lines above sunspots. We have performed a statistical investigation of these downflows using a large sample that was missing previously. The Interface Region Imaging Spectrograph (IRIS) has provided a wealth of observational data of sunspots at high spatial and spectral resolutions in the past few years. We have identified 60 data sets obtained with IRIS raster scans. Using an automated code, we identified the locations of strong downflows within these sunspots. We found that around 80% of our sample shows supersonic downflows in the Si IV 1403 Å line. These downflows mostly appear in the penumbral regions, though some of them are found in the umbrae. We also found that almost half of these downflows show signatures in chromospheric lines. Furthermore, a detailed spectral analysis was performed by selecting a small spectral window containing the O IV 1400/1401 Å and Si IV 1403 Å lines. Six Gaussian functions were simultaneously fitted to these three spectral lines and their satellite lines associated with the supersonic downflows. We calculated the intensity, Doppler velocity, and line width for these lines. Using the O IV 1400/1401 Å line ratio, we find that the downflow components are around one order of magnitude less dense than the regular components. Results from our statistical analysis suggest that these downflows may originate from the corona and that they are independent of the background TR plasma.
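The line-fitting procedure described (a rest component plus a redshifted satellite for each line, with the Doppler velocity derived from the centroid shift) can be illustrated with a minimal two-Gaussian model. The rest wavelength and parameterization below are illustrative assumptions; the full analysis fits all six Gaussians across the three lines simultaneously with a nonlinear least-squares routine such as scipy.optimize.curve_fit.

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def gaussian(wl, amp, mu, sigma):
    """Single Gaussian emission-line component."""
    return amp * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def two_component_profile(wl, params):
    """Rest line plus one redshifted satellite component -- a minimal
    two-Gaussian version of the six-Gaussian model in the abstract."""
    a1, mu1, s1, a2, mu2, s2 = params
    return gaussian(wl, a1, mu1, s1) + gaussian(wl, a2, mu2, s2)

def doppler_velocity(mu_obs, mu_rest):
    """Line-of-sight velocity (km/s) from the fitted centroid shift;
    positive values correspond to redshifted downflows."""
    return C_KMS * (mu_obs - mu_rest) / mu_rest
```

A satellite component fitted ~0.4 Å redward of Si IV 1403 Å would correspond to a downflow of roughly 90 km/s, well above the TR sound speed.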

  19. In-Scene-Based Atmospheric Correction of Uncalibrated VISible-SWIR (VIS-SWIR) Hyper- and Multispectral Imagery

    DTIC Science & Technology

    2008-01-01

    resolution, it is very likely that near-zero reflectance values exist in each spectral channel, corresponding to the minimum data values in the scene ... radiometrically uncalibrated data. Quite good agreement was previously demonstrated for the retrieved pixel spectral reflectances between QUAC and the physics ... precluding the use of physics-based codes to retrieve surface reflectance. The ability to retrieve absolute spectral reflectances from such sensors

  20. Diagnosis of the GLAS climate model's stationary planetary waves using a linearized steady state model

    NASA Technical Reports Server (NTRS)

    Youngblut, C.

    1984-01-01

    Orography and geographically fixed heat sources which force a zonally asymmetric motion field are examined. An extensive space-time spectral analysis of the GLAS climate model (D130) response and observations are compared. An updated version of the model (D150) showed a remarkable improvement in the simulation of the standing waves. The main differences in the model code are an improved boundary layer flux computation and a more realistic specification of the global boundary conditions.

  1. JURASSIC Retrieval Processing

    NASA Astrophysics Data System (ADS)

    Blank, J.; Ungermann, J.; Guggenmoser, T.; Kaufmann, M.; Riese, M.

    2012-04-01

    The Gimballed Limb Observer for Radiance Imaging in the Atmosphere (GLORIA) is an aircraft-based infrared limb sounder. This presentation will give an overview of the retrieval techniques used for the analysis of data produced by the GLORIA instrument. For data processing, the JUelich RApid Spectral SImulation Code 2 (JURASSIC2) was developed. It consists of a set of programs to retrieve atmospheric profiles from GLORIA measurements. The GLORIA Michelson interferometer can run with a wide range of parameters. In the dynamics mode, spectra are generated with a medium spectral and a very high temporal and spatial resolution. Each sample can contain thousands of spectral lines for each contributing trace gas. In the JURASSIC retrieval code this is handled by a radiative transport model based on the Emissivity Growth Approximation. Deciding which samples should be included in the retrieval is a non-trivial task and requires specific domain knowledge. To ease this problem we developed an automatic selection program based on analysis of the Shannon information content. By taking into account data for all relevant trace gases and instrument effects, optimal integrated spectral windows are computed. This includes considerations of the cross-influence of trace gases, which has non-obvious consequences for the contribution of spectral samples. We developed methods to assess the influence of spectral windows on the retrieval. While we cannot exhaustively search the whole range of possible spectral sample combinations, it is possible to optimize the information content using a genetic algorithm. The GLORIA instrument is mounted with a viewing direction perpendicular to the flight direction. A gimbal frame makes it possible to tilt the instrument 45° in both directions. By flying on a circular path, it is possible to generate images of an area of interest from a wide range of angles. These can be analyzed in a 3D-tomographic fashion, which yields superior spatial resolution along the line of sight. Limb instruments usually have a resolution of several hundred kilometers; in studies we have shown that a resolution of 35 km in all horizontal directions can be achieved. Even when only linear flight patterns can be realized, resolutions of ≈70 km can be obtained. This technique can be used to observe features of the Upper Troposphere Lower Stratosphere (UTLS), where important mixing processes take place. Tropopause folds in particular are difficult to image, as their main features need to lie along the line of flight when using the common 1D approach.

  2. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.

  3. Reproducibility of peripapillary retinal nerve fiber layer thickness with spectral domain cirrus high-definition optical coherence tomography in normal eyes.

    PubMed

    Hong, Samin; Kim, Chan Yun; Lee, Won Seok; Seong, Gong Je

    2010-01-01

    To assess the reproducibility of the new spectral domain Cirrus high-definition optical coherence tomography (HD-OCT; Carl Zeiss Meditec, Dublin, CA, USA) for analysis of peripapillary retinal nerve fiber layer (RNFL) thickness in healthy eyes. Thirty healthy Korean volunteers were enrolled. Three optic disc cube 200 x 200 Cirrus HD-OCT scans were taken on the same day in discontinuous sessions by the same operator without using the repeat scan function. The reproducibility of the calculated RNFL thickness and probability code were determined by the intraclass correlation coefficient (ICC), coefficient of variation (CV), test-retest variability, and Fleiss' generalized kappa (kappa). Thirty-six eyes were analyzed. For average RNFL thickness, the ICC was 0.970, CV was 2.38%, and test-retest variability was 4.5 microm. For all quadrants except the nasal, ICCs were 0.972 or higher and CVs were 4.26% or less. Overall test-retest variability ranged from 5.8 to 8.1 microm. The kappa value of probability codes for average RNFL thickness was 0.690. The kappa values of quadrants and clock-hour sectors were lower in the nasal areas than in other areas. The reproducibility of Cirrus HD-OCT to analyze peripapillary RNFL thickness in healthy eyes was excellent compared with the previous reports for time domain Stratus OCT. For the calculated RNFL thickness and probability code, variability was relatively higher in the nasal area, and more careful analyses are needed.
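The reproducibility statistics reported here (ICC, CV, test-retest variability) can all be computed from a subjects × repeats matrix of RNFL measurements. The sketch below assumes a one-way random-effects ICC(1,1) and the conventional 1.96·√2·Sw repeatability coefficient; the paper's exact definitions may differ.

```python
import numpy as np

def reproducibility_stats(scans):
    """Reproducibility metrics for repeated measurements.
    scans: array of shape (n_subjects, n_repeats).
    Returns a one-way random-effects ICC(1,1), the pooled
    within-subject coefficient of variation (%), and a
    test-retest variability estimate (1.96 * sqrt(2) * Sw).
    Sketch only; definitions in the literature vary."""
    scans = np.asarray(scans, dtype=float)
    n, k = scans.shape
    grand = scans.mean()
    subj_means = scans.mean(axis=1)
    # One-way ANOVA mean squares: between-subject and within-subject.
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scans - subj_means[:, None]) ** 2) / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    sw = np.sqrt(ms_within)           # within-subject standard deviation
    cv = 100.0 * sw / grand           # coefficient of variation, percent
    trt = 1.96 * np.sqrt(2.0) * sw    # 95% repeatability coefficient
    return icc, cv, trt
```

With large between-subject spread and small scan-to-scan noise, the ICC approaches 1 and the CV stays low, matching the pattern reported for average RNFL thickness.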

  4. The feasibility of well-logging measurements of arsenic levels using neutron-activation analysis

    USGS Publications Warehouse

    Oden, C.P.; Schweitzer, J.S.; McDowell, G.M.

    2006-01-01

    Arsenic is an extremely toxic metal, which poses a significant problem in many mining environments. Arsenic contamination is also a major problem in ground and surface waters. A feasibility study was conducted to determine if neutron-activation analysis is a practical method of measuring in situ arsenic levels. The response of hypothetical well-logging tools to arsenic was simulated using a readily available Monte Carlo simulation code (MCNP). Simulations were made for probes with both hyperpure germanium (HPGe) and bismuth germanate (BGO) detectors using accelerator and isotopic neutron sources. Both sources produce similar results; however, the BGO detector is much more susceptible to spectral interference than the HPGe detector. Spectral interference from copper can preclude low-level arsenic measurements when using the BGO detector. Results show that a borehole probe could be built that would measure arsenic concentrations of 100 ppm by weight to an uncertainty of 50 ppm in about 15 min. © 2006 Elsevier Ltd. All rights reserved.

  5. Multi-stage decoding of multi-level modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  6. Spectral amplification models for response spectrum addressing the directivity effect

    NASA Astrophysics Data System (ADS)

    Moghimi, Saed; Akkar, Sinan

    2017-04-01

    Ground motions with forward directivity effects are known for their significantly large spectral ordinates at medium-to-long periods. These large spectral ordinates stem from the impulsive character of forward directivity ground motions. The quantification of these spectral amplifications requires the identification of the major seismological parameters that play a role in their generation. After running a suite of probabilistic seismic hazard analyses, Moghimi and Akkar (2016) showed that fault slip rate, fault characteristic magnitude, fault-site geometry, and mean annual exceedance rate are important parameters that determine the level of spectral amplification due to directivity. These parameters are considered in developing two separate spectral amplification equations in this study. The proposed equations rely on the Shahi and Baker (SHB11; 2011) and Chiou and Spudich (CHS13; Spudich et al., 2013) narrow-band forward directivity models. The presented equations focus only on the estimation of the maximum spectral amplifications, which occur at the ends of the fault segments. This way we eliminate the fault-site parameter in our equations for simplification. The proposed equations show different trends due to differences in the narrow-band directivity models of SHB11 and CHS13. The equations given in this study can form a basis for describing forward directivity effects in seismic design codes. REFERENCES Shahi, S., Baker, J. W. (2011), "An Empirically Calibrated Framework for Including the Effects of Near-Fault Directivity in Probabilistic Seismic Hazard Analysis", Bulletin of the Seismological Society of America, 101(2): 742-755. Spudich, P., Watson-Lamprey, J., Somerville, P., Bayless, J., Shahi, S. K., Baker, J. W., Rowshandel, B., and Chiou, B. (2013), "Final Report of the NGA-West2 Directivity Working Group", PEER Report 2013/09. Moghimi, S., Akkar, S. (2016), "Implications of Forward Directivity Effects on Design Ground Motions", Seismological Society of America Annual Meeting, 2016, Reno, Nevada, 87:2B, p. 464

  7. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Palmer, J. M. (Principal Investigator)

    1985-01-01

    The results of analyses of Thematic Mapper (TM) images acquired on July 8 and October 28, 1984, and of a check of the calibration of the 1.22-m integrating sphere at Santa Barbara Research Center (SBRC) are described. The results obtained from the in-flight calibration attempts disagree with the pre-flight calibrations for bands 2 and 4. Considerable effort was expended in an attempt to explain the disagreement. The difficult point to explain is that the difference between the radiances predicted by the radiative transfer code (the code radiances) and the radiances predicted by the preflight calibration (the pre-flight radiances) fluctuate with spectral band. Because the spectral quantities measured at White Sands show little change with spectral band, these fluctuations are not anticipated. Analyses of other targets at White Sands such as clouds, cloud shadows, and water surfaces tend to support the pre-flight and internal calibrator calibrations. The source of the disagreement has not been identified. It could be due to: (1) a computational error in the data reduction; (2) an incorrect assumption in the input to the radiative transfer code; or (3) incorrect operation of the field equipment.

  8. Coded aperture coherent scatter spectral imaging for assessment of breast cancers: an ex-vivo demonstration

    NASA Astrophysics Data System (ADS)

    Spencer, James R.; Carter, Joshua E.; Leung, Crystal K.; McCall, Shannon J.; Greenberg, Joel A.; Kapadia, Anuj J.

    2017-03-01

    A Coded Aperture Coherent Scatter Spectral Imaging (CACSSI) system was developed in our group to differentiate cancer and healthy tissue in the breast. The utility of the experimental system was previously demonstrated using anthropomorphic breast phantoms and breast biopsy specimens. Here we demonstrate CACSSI utility in identifying tumor margins in real time using breast lumpectomy specimens. Fresh lumpectomy specimens were obtained from Surgical Pathology with the suspected cancerous area designated on the specimen. The specimens were scanned using CACSSI to obtain spectral scatter signatures at multiple locations within the tumor and surrounding tissue. The spectral reconstructions were matched with literature form-factors to classify the tissue as cancerous or non-cancerous. The findings were then compared against pathology reports to confirm the presence and location of the tumor. The system was found to be capable of consistently differentiating cancerous and healthy regions in the breast with spatial resolution of 5 mm. Tissue classification results from the scanned specimens could be correlated with pathology results. We now aim to develop CACSSI as a clinical imaging tool to aid breast cancer assessment and other diagnostic purposes.

  9. MIRACAL: A mission radiation calculation program for analysis of lunar and interplanetary missions

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Striepe, Scott A.; Simonsen, Lisa C.

    1992-01-01

    A computational procedure and database are developed for manned space exploration missions, for which estimates are made of the energetic particle fluences encountered and the resulting dose equivalent incurred. The database includes the following options: a statistical or continuum model for ordinary solar proton events, selection of up to six large proton flare spectra, and galactic cosmic ray fluxes for elemental nuclei of charge numbers 1 through 92. The program requires input of trajectory definition information and specification of optional parameters, which include the desired spectral data and nominal shield thickness. The procedure may be implemented as an independent program or as a subroutine in trajectory codes. This code should be most useful in mission optimization and selection studies for which radiation exposure is of special importance.

  10. Comparative modelling of the spectra of cool giants

    NASA Astrophysics Data System (ADS)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; Plez, B.; Short, C. I.; Wahlgren, G. M.; Worley, C.; Aringer, B.; Bladh, S.; de Laverny, P.; Goswami, A.; Mora, A.; Norris, R. P.; Recio-Blanco, A.; Scholz, M.; Thévenin, F.; Tsuji, T.; Kordopatis, G.; Montesinos, B.; Wing, R. F.

    2012-11-01

    Context. Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims: We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods: Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results: We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions: Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are. 
Based on observations obtained at the Bernard Lyot Telescope (TBL, Pic du Midi, France) of the Midi-Pyrénées Observatory, which is operated by the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France. Tables 6-11 are only available in electronic form at http://www.aanda.org. The spectra of stars 1 to 4 used in the experiment presented here are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/547/A108

  11. Information theoretic analysis of proprioceptive encoding during finger flexion in the monkey sensorimotor system.

    PubMed

    Witham, Claire L; Baker, Stuart N

    2015-01-01

    There is considerable debate over whether the brain codes information using neural firing rate or the fine-grained structure of spike timing. We investigated this issue in spike discharge recorded from single units in the sensorimotor cortex, deep cerebellar nuclei, and dorsal root ganglia in macaque monkeys trained to perform a finger flexion task. The task required flexion to four different displacements against two opposing torques; the eight possible conditions were randomly interleaved. We used information theory to assess coding of task condition in spike rate, discharge irregularity, and spectral power in the 15- to 25-Hz band during the period of steady holding. All three measures coded task information in all areas tested. Information coding was most often independent between irregularity and 15-25 Hz power (60% of units), moderately redundant between spike rate and irregularity (56% of units redundant), and highly redundant between spike rate and power (93%). Most simultaneously recorded unit pairs coded using the same measure independently (86%). Knowledge of two measures often provided extra information about task, compared with knowledge of only one alone. We conclude that sensorimotor systems use both rate and temporal codes to represent information about a finger movement task. As well as offering insights into neural coding, this work suggests that incorporating spike irregularity into algorithms used for brain-machine interfaces could improve decoding accuracy. Copyright © 2015 the American Physiological Society.
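The information measure used in analyses like this one, the information a scalar quantity (spike rate, irregularity, or band power) carries about a discrete task condition, can be illustrated with a simple plug-in mutual-information estimator. This is a generic sketch with an assumed binning scheme, not the authors' method; careful neurophysiological analyses also apply bias corrections that are omitted here.

```python
import numpy as np

def mutual_information(rates, conditions, n_bins=8):
    """Plug-in estimate (in bits) of I(rate; condition) from a
    discretized joint histogram.  Minimal sketch, no bias correction."""
    edges = np.histogram_bin_edges(rates, bins=n_bins)
    r = np.digitize(rates, edges[1:-1])        # bin index 0..n_bins-1
    conds = np.unique(conditions)
    joint = np.zeros((n_bins, conds.size))
    for j, c in enumerate(conds):
        idx, cnt = np.unique(r[conditions == c], return_counts=True)
        joint[idx, j] = cnt
    p = joint / joint.sum()
    pr = p.sum(axis=1, keepdims=True)          # marginal over conditions
    pc = p.sum(axis=0, keepdims=True)          # marginal over rate bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pr @ pc)[nz])))
```

For two equiprobable conditions the estimate is bounded by 1 bit; a rate that perfectly separates the conditions approaches that bound, while an independent rate gives a value near zero (up to the small positive bias of the plug-in estimator).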

  12. Differential Cross Section Kinematics for 3-dimensional Transport Codes

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Dick, Frank

    2008-01-01

    In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral, and angular distributions in both the lab (spacecraft) and center-of-momentum frames, for collisions involving 2-, 3-, and n-body final states.

  13. Highly accurate calculation of rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Ansorg, M.; Kleinwächter, A.; Meinel, R.

    2002-01-01

    A new spectral code for constructing general-relativistic models of rapidly rotating stars with an unprecedented accuracy is presented. As a first application, we reexamine uniformly rotating homogeneous stars and compare our results with those obtained by several previous codes. Moreover, representative relativistic examples corresponding to highly flattened rotating bodies are given.

  14. Investigation of fast ion pressure effects in ASDEX Upgrade by spectral MSE measurements

    NASA Astrophysics Data System (ADS)

    Reimer, René; Dinklage, Andreas; Wolf, Robert; Dunne, Mike; Geiger, Benedikt; Hobirk, Jörg; Reich, Matthias; ASDEX Upgrade Team; McCarthy, Patrick J.

    2017-04-01

    High precision measurements of fast ion effects on the magnetic equilibrium in the ASDEX Upgrade tokamak have been conducted in a high-power (10 MW) neutral-beam injection discharge. An improved analysis of the spectral motional Stark effect data based on forward modeling, including the Zeeman effect, fine structure, and a non-statistical sub-level distribution, revealed changes on the order of 1% in |B|. The results were found to be consistent with results from the equilibrium solver CLISTE. The measurements allowed us to derive the fast ion pressure fraction to be Δp_FI/p_mhd ≈ 10%, and variations of the fast ion pressure are consistent with calculations of the transport code TRANSP. The results advance the understanding of fast ion confinement and magneto-hydrodynamic stability in the presence of fast ions.

  15. LAMOST DR1: Stellar Parameters and Chemical Abundances with SP_Ace

    NASA Astrophysics Data System (ADS)

    Boeche, C.; Smith, M. C.; Grebel, E. K.; Zhong, J.; Hou, J. L.; Chen, L.; Stello, D.

    2018-04-01

    We present a new analysis of the LAMOST DR1 survey spectral database performed with the code SP_Ace, which provides the derived stellar parameters T_eff, log g, [Fe/H], and [α/H] for 1,097,231 stellar objects. We tested the reliability of our results by comparing them to reference results from high spectral resolution surveys. The expected errors can be summarized as ∼120 K in T_eff, ∼0.2 in log g, ∼0.15 dex in [Fe/H], and ∼0.1 dex in [α/Fe] for spectra with S/N > 40, with some differences between dwarf and giant stars. SP_Ace provides error estimations consistent with the discrepancies observed between derived and reference parameters. Some systematic errors are identified and discussed. The resulting catalog is publicly available at the LAMOST and CDS websites.

  16. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
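The ITQ alternation described above (fix the rotation and binarize, then fix the codes and solve an orthogonal Procrustes problem) can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the authors' released implementation; the PCA projection, iteration count, and random orthogonal initialization are assumptions.

```python
import numpy as np

def itq(X, n_bits=8, n_iter=20, seed=0):
    """Iterative quantization (ITQ) sketch: learn a rotation R that
    reduces the quantization loss ||B - V R||_F of mapping the
    zero-centered, PCA-projected data V onto binary codes B."""
    rng = np.random.default_rng(seed)
    # Zero-center and project onto the top n_bits PCA directions.
    Xc = X - X.mean(axis=0)
    _, _, Wt = np.linalg.svd(Xc, full_matrices=False)
    V = Xc @ Wt[:n_bits].T
    # Random orthogonal initialization of the rotation.
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R, update binary codes
        U, _, Vt = np.linalg.svd(B.T @ V)  # fix B: orthogonal Procrustes
        R = (U @ Vt).T
    return np.sign(V @ R), R
```

Each Procrustes step takes the SVD of BᵀV and recombines its orthogonal factors, which is the closed-form minimizer of ‖B − VR‖_F over rotations for the current codes.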

  17. Development of a digital-micromirror-device-based multishot snapshot spectral imaging system.

    PubMed

    Wu, Yuehao; Mirza, Iftekhar O; Arce, Gonzalo R; Prather, Dennis W

    2011-07-15

    We report on the development of a digital-micromirror-device (DMD)-based multishot snapshot spectral imaging (DMD-SSI) system as an alternative to current piezostage-based multishot coded aperture snapshot spectral imager (CASSI) systems. In this system, a DMD is used to implement compressive sensing (CS) measurement patterns for reconstructing the spatial/spectral information of an imaging scene. Based on the CS measurement results, we demonstrated the concurrent reconstruction of 24 spectral images. The DMD-SSI system is versatile in nature as it can be used to implement independent CS measurement patterns in addition to spatially shifted patterns that piezostage-based systems can offer. © 2011 Optical Society of America

  18. Analysis of X-ray and EUV spectra of solar active regions

    NASA Technical Reports Server (NTRS)

    Strong, K. T.; Acton, L. W.

    1979-01-01

    Data acquired by two flights of an array of six Bragg crystal spectrometers on an Aerobee rocket to obtain high spatial and spectral resolution observations of various coronal features at soft X-ray wavelengths (9-23A) were analyzed. The various aspects of the analysis of the X-ray data are described. These observations were coordinated with observations from the experiments on the Apollo Telescope Mount and the various data sets were related to one another. The Appendices contain the published results, abstracts of papers, computer code descriptions and preprints of papers, all produced as a result of this research project.

  19. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech.

    PubMed

    Dilley, Laura C; Wieland, Elizabeth A; Gamache, Jessica L; McAuley, J Devin; Redford, Melissa A

    2013-02-01

    As children mature, changes in voice spectral characteristics co-vary with changes in speech, language, and behavior. In this study, spectral characteristics were manipulated to alter the perceived ages of talkers' voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Speech was modified by lowering formants and fundamental frequency, for 5-year-old children's utterances, or raising them, for adult caregivers' utterances. Next, participants differing in awareness of the manipulation (Experiment 1A) or amount of speech-language training (Experiment 1B) made judgments of prosodic, segmental, and talker attributes. Experiment 2 investigated the effects of spectral modification on intelligibility. Finally, in Experiment 3, trained analysts used formal prosody coding to assess prosodic characteristics of spectrally modified and unmodified speech. Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work.

  20. Free Carrier Induced Spectral Shift for GaAs Filled Metallic Hole Arrays

    DTIC Science & Technology

    2012-03-13

    Bahae, G. I. Stegeman, K. Al-hemyari, J. S. Aitchison, and C. N. Ironside, "Limitation due to three-photon absorption on the useful spectral range... Free carrier induced spectral shift for GaAs filled metallic hole arrays Jingyu Zhang 1,2,*, Bin Xiang 3, Mansoor Sheik-Bahae 4, and S. R. J... OCIS codes: (310.6628) Subwavelength structures; (190.4350) Nonlinear optics at surfaces References and links 1. J. M. Luther, P. K. I. Jain, T. Ewers

  1. A System for Compressive Spectral and Polarization Imaging at Short Wave Infrared (SWIR) Wavelengths

    DTIC Science & Technology

    2017-10-18

    2016). H. Rueda, H. Arguello and G. R. Arce, "DMD-based implementation of patterned optical filter arrays for compressive spectral imaging", Journal... (3) a set of optical filters which allow one to discriminate spectrally the coded and sheared... system that includes objective lens, spatial light modulator, dispersive element, optical filters

  2. Space Weathering of Silicate Asteroids: An Observational Investigation

    NASA Astrophysics Data System (ADS)

    MacLennan, Eric M.; Emery, Joshua; Lindsay, Sean S.

    2017-10-01

    Solar wind exposure and micrometeoroid bombardment are known to cause mineralogical changes in the upper few microns of silicate grains (by forming amorphous “composition” rims with embedded nano-phase Fe0). These processes, jointly called space weathering (SW), affect the light-scattering properties and subsequently the geometric albedo and spectral parameters (spectral slope and band depth). Earth’s Moon exhibits the well known “lunar-style” of SW: albedo decrease, spectral slope increase, and absorption band suppression. However, space mission images of (243) Ida and (433) Eros suggest that different SW “styles” exist among the silicate-bearing (olivine and pyroxene) S-complex asteroids, which exhibit diagnostic absorption features near 1 & 2 μm. While Eros generally shows only albedo differences between younger and older locations, Ida’s surface only shows changes in spectral slope and band depth. It is not clear if these SW styles are unique to Ida and Eros or if they can be observed throughout the entire asteroid population.We hypothesize that the SW styles seen on Eros and Ida also exist on other asteroid surfaces. Additionally, we hypothesize that increased solar wind exposure, smaller regolith particles, higher olivine abundance, and older asteroid surfaces will increase the observed degree of SW. Our dataset includes publicly available Visible (0.4-0.8 μm) and Near Infrared (~0.7-2.5 μm) reflectance spectra of silicate-bearing asteroids (those with 1 & 2 μm bands) from the PDS and the SMASS, S3OS2 and MIT-UH-IRTF spectral surveys. We have also conducted a spectral survey with the IRTF/SpeX targeting 52 silicate asteroids for which we have constraints for regolith grain sizes from interpretation of thermal-IR data. The relevant band parameters to SW and to interpreting mineralogical properties are calculated using the band analysis code, SARA. Geometric albedos are calculated using thermal-IR data from WISE/NEOWISE. 
Using these derived parameters, we search for potential SW styles among different spectral classes and for correlations with the factors listed above. Analysis of a subset of S-types suggests that heliocentric distance correlates with spectral slope and band depth, but not albedo.
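
    The band parameters used in such surveys (spectral slope and continuum-removed band depth) follow standard definitions. As an illustration only, here is a minimal Python sketch for a synthetic 1 µm absorption feature; the function and spectrum are invented stand-ins, not the SARA code:

```python
import numpy as np

def band_parameters(wl, refl, lo, hi):
    """Continuum-removed band depth, band center, and spectral slope
    over the window [lo, hi] (wavelengths in microns)."""
    sel = (wl >= lo) & (wl <= hi)
    w, r = wl[sel], refl[sel]
    # straight-line continuum between the window endpoints
    cont = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    removed = r / cont
    i_min = int(np.argmin(removed))
    depth = 1.0 - removed[i_min]             # fractional band depth
    slope = (r[-1] - r[0]) / (w[-1] - w[0])  # reflectance per micron
    return depth, w[i_min], slope

# synthetic S-type-like spectrum: red-sloped continuum, Gaussian 1-micron band
wl = np.linspace(0.7, 1.6, 200)
refl = (0.30 + 0.05 * (wl - 0.7)) * (1.0 - 0.2 * np.exp(-((wl - 1.0) / 0.1) ** 2))
depth, center, slope = band_parameters(wl, refl, 0.7, 1.6)
print(round(depth, 2), round(center, 2))
```

    The straight-line continuum makes the band depth insensitive to the overall slope, which is why depth and slope can be tracked as independent SW indicators.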

  3. Nmrglue: an open source Python package for the analysis of multidimensional NMR data.

    PubMed

    Helmus, Jonathan J; Jaroniec, Christopher P

    2013-04-01

    Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license.

  5. In-flight radiometric calibration of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

    NASA Technical Reports Server (NTRS)

    Conel, James E.; Green, Robert O.; Alley, Ronald E.; Bruegge, Carol J.; Carrere, Veronique; Margolis, Jack S.; Vane, Gregg; Chrien, Thomas G.; Slater, Philip N.; Biggard, Stuart F.

    1988-01-01

    A reflectance-based method was used to provide an analysis of the in-flight radiometric performance of AVIRIS. Field spectral reflectance measurements of the surface and extinction measurements of the atmosphere using solar radiation were used as input to atmospheric radiative transfer calculations. Five separate codes were used in the analysis. Four include multiple scattering, and the computed radiances from these for flight conditions were in good agreement. Code-generated radiances were compared with AVIRIS-predicted radiances based on two laboratory calibrations (pre- and post-season of flight) for a uniform highly reflecting natural dry lake target. For one spectrometer (C), the pre- and post-season calibration factors were found to give identical results, and to be in agreement with the atmospheric models that include multiple scattering. This positive result validates the field and laboratory calibration technique. Results for the other spectrometers (A, B and D) were widely at variance with the models no matter which calibration factors were used. Potential causes of these discrepancies are discussed.

  6. Keno-Nr a Monte Carlo Code Simulating the Californium -252-SOURCE-DRIVEN Noise Analysis Experimental Method for Determining Subcriticality

    NASA Astrophysics Data System (ADS)

    Ficaro, Edward Patrick

    The ²⁵²Cf-source-driven noise analysis (CSDNA) requires the measurement of the cross power spectral density (CPSD) G23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G12(ω) and G13(ω) between the neutron detectors and an ionization chamber 1 containing ²⁵²Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G12*(ω)G13(ω) / [G11(ω)G23(ω)], using a point kinetic model formulation which is independent of the detector's properties and a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. 
Poor agreement was seen between simulated and measured results using composite ⁶Li-glass-plastic scintillators at large subcriticalities for the tank of uranyl nitrate. It is believed that the response of these detectors is not well known and is incorrectly modeled in KENO-NR. In addition to these tests, several benchmark calculations were also performed to provide insight into the properties of the point kinetic formulation.
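
    The CPSDs at the heart of the CSDNA method can be estimated by averaging windowed FFT cross-products over segments. A minimal numpy sketch with two synthetic detector signals sharing a common source term (illustrative only; not the KENO-NR or experimental processing chain):

```python
import numpy as np

def cpsd(x, y, nseg=256):
    """Averaged cross power spectral density G_xy via segmented, windowed FFTs."""
    nwin = len(x) // nseg
    win = np.hanning(nseg)
    acc = np.zeros(nseg // 2 + 1, dtype=complex)
    for k in range(nwin):
        xs = np.fft.rfft(win * x[k * nseg:(k + 1) * nseg])
        ys = np.fft.rfft(win * y[k * nseg:(k + 1) * nseg])
        acc += np.conj(xs) * ys
    return acc / nwin

rng = np.random.default_rng(0)
s = rng.standard_normal(256 * 64)              # shared "source" fluctuation
d2 = s + 0.5 * rng.standard_normal(s.size)     # detector 2: source + local noise
d3 = s + 0.5 * rng.standard_normal(s.size)     # detector 3: source + local noise
G23 = cpsd(d2, d3)
G22 = cpsd(d2, d2).real
ratio = float(np.abs(G23[1:]).mean() / G22[1:].mean())
print(round(ratio, 2))  # the correlated fraction of detector 2's power
```

    Uncorrelated detector noise averages out of the cross-spectrum but not the auto-spectrum, which is what makes CPSD ratios robust observables.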

  7. Precision Stellar Characterization of FGKM Stars using an Empirical Spectral Library

    NASA Astrophysics Data System (ADS)

    Yee, Samuel W.; Petigura, Erik A.; von Braun, Kaspar

    2017-02-01

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R ≈ 60,000), high signal-to-noise ratio (S/N ≈ 150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ~M5-F1 (T_eff ≈ 3000-7000 K, R_⋆ ≈ 0.1-16 R_⊙). We also present “Empirical SpecMatch” (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100 K in effective temperature (T_eff), 15% in stellar radius (R_⋆), and 0.09 dex in metallicity ([Fe/H]). Because the code relies on empirical spectra it performs particularly well for stars ~K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70 K in T_eff, 10% in R_⋆, and 0.12 dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available.
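
    The matching strategy described (compare an unknown spectrum against every library spectrum, then adopt parameters from the closest matches) can be sketched in a few lines. The library, spectra, and single "parameter" below are toy stand-ins, not the actual SpecMatch-Emp implementation:

```python
import numpy as np

def best_matches(target, library, params, k=3):
    """Rank library spectra by chi-squared distance to the target and
    average the stellar parameters of the top-k matches."""
    chi2 = ((library - target) ** 2).sum(axis=1)
    order = np.argsort(chi2)[:k]
    return order, params[order].mean(axis=0)

rng = np.random.default_rng(1)
wl = np.linspace(0.0, 1.0, 300)
depths = np.linspace(0.1, 0.9, 50)                 # toy "parameter" grid
library = 1.0 - depths[:, None] * np.exp(-((wl - 0.5) / 0.05) ** 2)
params = depths[:, None]                           # stand-in for (Teff, R, [Fe/H])
target = 1.0 - 0.42 * np.exp(-((wl - 0.5) / 0.05) ** 2) \
         + 0.01 * rng.standard_normal(wl.size)     # noisy unknown spectrum
idx, est = best_matches(target, library, params)
print(round(float(est[0]), 2))  # close to the true 0.42
```

    Averaging the top-k matches, rather than taking only the single best, smooths over noise and the finite sampling of the library's parameter grid.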

  8. SSME Condition Monitoring Using Neural Networks and Plume Spectral Signatures

    NASA Technical Reports Server (NTRS)

    Hopkins, Randall; Benzing, Daniel

    1996-01-01

    For a variety of reasons, condition monitoring of the Space Shuttle Main Engine (SSME) has become an important concern for both ground tests and in-flight operation. The complexities of the SSME suggest that active, real-time condition monitoring should be performed to avoid large-scale or catastrophic failure of the engine. In 1986, the SSME became the subject of a plume emission spectroscopy project at NASA's Marshall Space Flight Center (MSFC). Since then, plume emission spectroscopy has recorded many nominal tests and the qualitative spectral features of the SSME plume are now well established. Significant discoveries made with both wide-band and narrow-band plume emission spectroscopy systems led MSFC to develop the Optical Plume Anomaly Detection (OPAD) system. The OPAD system is designed to provide condition monitoring of the SSME during ground-level testing. The operational health of the engine is achieved through the acquisition of spectrally resolved plume emissions and the subsequent identification of abnormal emission levels in the plume indicative of engine erosion or component failure. Eventually, OPAD, or a derivative of the technology, could find its way on to an actual space vehicle and provide in-flight engine condition monitoring. This technology step, however, will require miniaturized hardware capable of processing plume spectral data in real-time. An objective of OPAD condition monitoring is to determine how much of an element is present in the SSME plume. The basic premise is that by knowing the element and its concentration, this could be related back to the health of components within the engine. For example, an abnormal amount of silver in the plume might signify increased wear or deterioration of a particular bearing in the engine. Once an anomaly is identified, the engine could be shut down before catastrophic failure occurs. 
Currently, element concentrations in the plume are determined iteratively with the help of a non-linear computer code called SPECTRA, developed at the USAF Arnold Engineering Development Center. Ostensibly, the code produces intensity versus wavelength plots (i.e., spectra) when inputs such as element concentrations, reaction temperature, and reaction pressure are provided. However, in order to provide a higher-level analysis, element concentration is not specified explicitly as an input. Instead, two quantum variables, number density and broadening parameter, are used. Past experience with OPAD data analysis has revealed that the region of primary interest in any SSME plume spectrum lies in the wavelength band of 3300 Å to 4330 Å. Experience has also revealed that some elements, such as iron, cobalt and nickel, cause multiple peaks over the chosen wavelength range, whereas other elements (magnesium, for example) have a few, relatively isolated peaks in the chosen wavelength range. Iteration with SPECTRA as a part of OPAD data analysis is an extremely labor-intensive task and not one to be performed by hand. What is really needed is the "inverse" of the computer code, but the mathematical model for the inverse mapping is tenuous at best. However, building generalized models based upon known input/output mappings while ignoring details of the governing physical model is possible using neural networks. Thus the objective of the research project described herein was to quickly and accurately predict combustion temperature and element concentrations (i.e., number density and broadening parameter) from a given spectrum using a neural network. In other words, a neural network had to be developed that would provide a generalized "inverse" of the computer code SPECTRA.
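
    The inverse-mapping idea (train a network on forward-model input/output pairs, then run it from spectrum back to parameters) can be illustrated with a deliberately tiny example. The forward model, network size, and single parameter below are hypothetical stand-ins, not SPECTRA or the OPAD network:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy forward model standing in for SPECTRA: one parameter t in [0, 1]
# (think "temperature") sets the width of a single emission feature
wl = np.linspace(0.0, 1.0, 40)
def forward(t):
    return np.exp(-((wl - 0.5) ** 2) / (0.02 + 0.1 * t))

T = rng.uniform(0.0, 1.0, 500)
X = np.array([forward(t) for t in T])   # spectra are the network inputs
y = T[:, None]                          # parameters are the targets

# one-hidden-layer MLP with manual backprop (mean-squared error)
W1 = rng.normal(0.0, 0.5, (40, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1));  b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt((err ** 2).mean()))
pred = (np.tanh(forward(0.37) @ W1 + b1) @ W2 + b2).item()
print(round(rmse, 3), round(pred, 2))
```

    The network never sees the physics, only (parameter, spectrum) pairs generated by the forward code, which is exactly what makes the approach attractive when the analytic inverse is intractable.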

  9. Herschel observations of extraordinary sources: Analysis of the HIFI 1.2 THz wide spectral survey toward orion KL. I. method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crockett, Nathan R.; Bergin, Edwin A.; Neill, Justin L.

    2014-06-01

    We present a comprehensive analysis of a broadband spectral line survey of the Orion Kleinmann-Low nebula (Orion KL), one of the most chemically rich regions in the Galaxy, using the HIFI instrument on board the Herschel Space Observatory. This survey spans a frequency range from 480 to 1907 GHz at a resolution of 1.1 MHz. These observations thus encompass the largest spectral coverage ever obtained toward this high-mass star-forming region in the submillimeter with high spectral resolution and include frequencies >1 THz, where the Earth's atmosphere prevents observations from the ground. In all, we detect emission from 39 molecules (79 isotopologues). Combining this data set with ground-based millimeter spectroscopy obtained with the IRAM 30 m telescope, we model the molecular emission from the millimeter to the far-IR using the XCLASS program, which assumes local thermodynamic equilibrium (LTE). Several molecules are also modeled with the MADEX non-LTE code. Because of the wide frequency coverage, our models are constrained by transitions over an unprecedented range in excitation energy. A reduced χ² analysis indicates that models for most species reproduce the observed emission well. In particular, most complex organics are well fit by LTE, implying gas densities are high (>10⁶ cm⁻³) and excitation temperatures and column densities are well constrained. Molecular abundances are computed using H₂ column densities also derived from the HIFI survey. The distribution of rotation temperatures, T_rot, for molecules detected toward the hot core is significantly wider than the compact ridge, plateau, and extended ridge T_rot distributions, indicating the hot core has the most complex thermal structure.
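
    The LTE assumption underlying such fits ties level populations to a single excitation temperature through the Boltzmann distribution, and T_rot can be read off a rotation diagram. A short sketch with invented level energies, degeneracies, and column density (not XCLASS itself):

```python
import numpy as np

k_B = 0.695  # Boltzmann constant in cm^-1 K^-1 (spectroscopic units)

def boltzmann_population(E_u, g_u, T):
    """Fractional LTE level populations, N_u/N = g_u exp(-E_u/kT) / Q."""
    w = g_u * np.exp(-E_u / (k_B * T))
    return w / w.sum()

# invented level energies/degeneracies and a 150 K excitation temperature
E_u = np.array([10.0, 50.0, 120.0, 250.0, 400.0])      # cm^-1
g_u = 2.0 * np.arange(1, 6) + 1.0
T_true = 150.0
N_u = 1.0e15 * boltzmann_population(E_u, g_u, T_true)  # "observed" columns

# rotation diagram: ln(N_u/g_u) vs E_u is linear with slope -1/(k_B * T_rot)
slope, _ = np.polyfit(E_u, np.log(N_u / g_u), 1)
T_rot = -1.0 / (k_B * slope)
print(round(T_rot, 1))  # recovers 150.0
```

    Real fits invert observed line intensities for N_u first; the wide energy coverage noted in the abstract is valuable precisely because it lengthens the lever arm of this linear fit.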

  10. Theoretical study on electronic excitation spectra: A matrix form of numerical algorithm for spectral shift

    NASA Astrophysics Data System (ADS)

    Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan

    2017-07-01

    In this work, a matrix form of numerical algorithm for spectral shift is presented based on the novel nonequilibrium solvation model that is established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute spectral shift numerically. Here, the spectral shifts of absorption spectra for several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA) were investigated in different solvents with various polarities. The computed spectral shifts can explain the available experimental findings reasonably. Discussions were made on the contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to spectral shift.

  11. Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry

    NASA Technical Reports Server (NTRS)

    Brandis, A. M.; Cruden, B. A.

    2017-01-01

    Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant for Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots that have been selected as benchmark data are the ones in closest agreement with a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges and the mean non-equilibrium spectral radiance (so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time of arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, are provided.
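
    The stated selection criterion (shots in closest agreement with a line of best fit through all results) is straightforward to express. A toy sketch with synthetic velocity/radiance pairs, not the actual EAST data:

```python
import numpy as np

rng = np.random.default_rng(3)
velocity = rng.uniform(8.0, 15.5, 40)                        # shock velocity, km/s
radiance = 2.0 * velocity - 10.0 + rng.normal(0.0, 1.5, 40)  # toy trend + shot scatter

coeffs = np.polyfit(velocity, radiance, 1)                   # line of best fit
residual = np.abs(radiance - np.polyval(coeffs, velocity))
benchmark = np.argsort(residual)[:5]                         # shots closest to the trend
print(velocity[benchmark].round(1))
```

    In practice the residual ranking would then be filtered by the qualitative criteria the paper mentions (test time, convergence to equilibrium) before a shot is declared a benchmark.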

  12. Validation of High Speed Earth Atmospheric Entry Radiative Heating from 9.5 to 15.5 km/s

    NASA Technical Reports Server (NTRS)

    Brandis, A. M.; Johnston, C. O.; Cruden, B. A.; Prabhu, D. K.

    2016-01-01

    This paper presents an overview of the analysis and measurements of equilibrium radiation obtained in the NASA Ames Research Center's Electric Arc Shock Tube (EAST) facility as a part of recent testing aimed at reaching shock velocities up to 15.5 km/s. The goal of these experiments was to measure the level of radiation encountered during high speed Earth entry conditions, such as would be relevant for an asteroid, inter-planetary or lunar return mission. These experiments provide the first spectrally and spatially resolved data for high speed Earth entry and cover conditions ranging from 9.5 to 15.5 km/s at 13.3 and 26.6 Pa (0.1 and 0.2 Torr). The present analysis endeavors to provide a validation of shock tube radiation measurements and simulations at high speed conditions. A comprehensive comparison between the spectrally resolved absolute equilibrium radiance measured in EAST and the predictive tools, NEQAIR and HARA, is presented. In order to provide a more accurate representation of the agreement between the experimental and simulation results, the integrated value of radiance has been compared across four spectral regions (VUV, UV/Vis, Vis/NIR and IR) as a function of velocity. Results have generally shown excellent agreement between the two codes and EAST data for the Vis through IR spectral regions, however, discrepancies have been identified in the VUV and parts of the UV spectral regions. As a result of the analysis presented in this paper, an updated parametric uncertainty for high speed radiation in air has been evaluated to be [9.0%, -6.3%]. Furthermore, due to the nature of the radiating environment at these high shock speeds, initial calculations aimed at modeling phenomena that become more significant with increasing shock speed have been performed. These phenomena include analyzing the radiating species emitting ahead of the shock and the increased significance of radiative cooling mechanisms.
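
    Comparing integrated radiance over defined wavelength windows, as done here for the VUV, UV/Vis, Vis/NIR and IR regions, reduces to integrating the spectrum over each band. A sketch with an arbitrary synthetic spectrum; the band edges are chosen for illustration, not the exact EAST definitions:

```python
import numpy as np

def integrate_band(wl, rad, lo, hi):
    """Trapezoidal integral of spectral radiance over the window [lo, hi] nm."""
    sel = (wl >= lo) & (wl <= hi)
    w, r = wl[sel], rad[sel]
    return float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(w)))

wl = np.linspace(100.0, 1600.0, 3001)        # wavelength grid, nm
rad = np.exp(-((wl - 400.0) / 300.0) ** 2)   # arbitrary smooth test spectrum
bands = {"VUV": (100, 200), "UV/Vis": (200, 700),
         "Vis/NIR": (700, 1100), "IR": (1100, 1600)}
integrated = {name: integrate_band(wl, rad, lo, hi)
              for name, (lo, hi) in bands.items()}
print({k: round(v, 1) for k, v in integrated.items()})
```

    Band-integrated values compress each shot's spectrum to a handful of numbers, which is what makes velocity trends and code-to-experiment comparisons tractable across many shots.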

  13. DCOMP Award Lecture (Metropolis): A 3D Spectral Anelastic Hydrodynamic Code for Shearing, Stratified Flows

    NASA Astrophysics Data System (ADS)

    Barranco, Joseph

    2006-03-01

    We have developed a three-dimensional (3D) spectral hydrodynamic code to study vortex dynamics in rotating, shearing, stratified systems (e.g., the atmosphere of gas giant planets, protoplanetary disks around newly forming protostars). The time-independent background state is stably stratified in the vertical direction and has a unidirectional linear shear flow aligned with one horizontal axis. Superposed on this background state is an unsteady, subsonic flow that is evolved with the Euler equations subject to the anelastic approximation to filter acoustic phenomena. A Fourier-Fourier basis in a set of quasi-Lagrangian coordinates that advect with the background shear is used for spectral expansions in the two horizontal directions. For the vertical direction, two different sets of basis functions have been implemented: (1) Chebyshev polynomials on a truncated, finite domain, and (2) rational Chebyshev functions on an infinite domain. Use of this latter set is equivalent to transforming the infinite domain to a finite one with a cotangent mapping, and using cosine and sine expansions in the mapped coordinate. The nonlinear advection terms are time integrated explicitly, whereas the Coriolis force, buoyancy terms, and pressure/enthalpy gradient are integrated semi-implicitly. We show that internal gravity waves can be damped by adding new terms to the Euler equations. The code exhibits excellent parallel performance with the Message Passing Interface (MPI). As a demonstration of the code, we simulate vortex dynamics in protoplanetary disks and the Kelvin-Helmholtz instability in the dusty midplanes of protoplanetary disks.

  14. A 3D spectral anelastic hydrodynamic code for shearing, stratified flows

    NASA Astrophysics Data System (ADS)

    Barranco, Joseph A.; Marcus, Philip S.

    2006-11-01

    We have developed a three-dimensional (3D) spectral hydrodynamic code to study vortex dynamics in rotating, shearing, stratified systems (e.g., the atmosphere of gas giant planets, protoplanetary disks around newly forming protostars). The time-independent background state is stably stratified in the vertical direction and has a unidirectional linear shear flow aligned with one horizontal axis. Superposed on this background state is an unsteady, subsonic flow that is evolved with the Euler equations subject to the anelastic approximation to filter acoustic phenomena. A Fourier-Fourier basis in a set of quasi-Lagrangian coordinates that advect with the background shear is used for spectral expansions in the two horizontal directions. For the vertical direction, two different sets of basis functions have been implemented: (1) Chebyshev polynomials on a truncated, finite domain, and (2) rational Chebyshev functions on an infinite domain. Use of this latter set is equivalent to transforming the infinite domain to a finite one with a cotangent mapping, and using cosine and sine expansions in the mapped coordinate. The nonlinear advection terms are time-integrated explicitly, the pressure/enthalpy terms are integrated semi-implicitly, and the Coriolis force and buoyancy terms are treated semi-analytically. We show that internal gravity waves can be damped by adding new terms to the Euler equations. The code exhibits excellent parallel performance with the message passing interface (MPI). As a demonstration of the code, we simulate the merger of two 3D vortices in the midplane of a protoplanetary disk.
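
    The essential ingredient of such spectral methods, exact differentiation in wavenumber space, takes only a few lines for the periodic (Fourier) directions. A minimal demonstration, not the actual code's Chebyshev machinery:

```python
import numpy as np

def spectral_derivative(f, L):
    """Differentiate a periodic function sampled on [0, L) via the FFT."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)  # i * wavenumber for each mode
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
err = float(np.abs(spectral_derivative(np.sin(3 * x), L) - 3 * np.cos(3 * x)).max())
print(err)  # machine precision for band-limited data
```

    This "spectral accuracy" (errors at machine precision for resolved modes, exponential convergence otherwise) is what motivates Fourier and Chebyshev bases over finite differences in smooth-flow simulations.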

  15. seismo-live: Training in Computational Seismology using Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Igel, H.; Krischer, L.; van Driel, M.; Tape, C.

    2016-12-01

    Practical training in computational methodologies is still underrepresented in Earth science curricula despite the increasing use of sometimes highly sophisticated simulation technologies in research projects. At the same time, well-engineered community codes make it easy to produce simulation-based results, with the attendant danger that the inherent traps of numerical solutions are not well understood. It is our belief that training with highly simplified numerical solutions (here, to the equations describing elastic wave propagation) with carefully chosen elementary ingredients of simulation technologies (e.g., finite-differencing, function interpolation, spectral derivatives, numerical integration) could substantially improve this situation. For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any downloads or local software installations. The increasingly popular Jupyter notebooks allow combining markup language, graphics, and equations with interactive, executable Python code. We demonstrate the potential with training notebooks for the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume method, and the discontinuous Galerkin method. The platform already includes general Python training, an introduction to the ObsPy library for seismology, as well as seismic data processing and noise analysis. Submission of Jupyter notebooks for general seismology is encouraged. The platform can be used for complementary teaching in Earth Science courses on compute-intensive research areas.
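
    A "highly simplified numerical solution" of the kind such notebooks teach is a second-order finite-difference scheme for the 1D wave equation; the grid sizes and CFL number below are chosen for illustration:

```python
import numpy as np

# u_tt = c^2 u_xx on [0, 1] with the standard three-point, second-order stencil
nx, nt = 200, 400
dx, c = 1.0 / nx, 1.0
dt = 0.4 * dx / c                        # CFL number 0.4 < 1: stable
x = np.linspace(0.0, 1.0, nx)
u = np.exp(-((x - 0.5) / 0.05) ** 2)     # Gaussian pulse, zero initial velocity
u_prev = u.copy()
for _ in range(nt):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]    # interior Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
peak = float(np.abs(u).max())
print(round(peak, 2))  # pulse splits into two roughly half-amplitude waves
```

    Raising the CFL number above 1 makes the same loop blow up, which is exactly the kind of "inherent trap" hands-on training is meant to expose.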

  16. Re-evaluation and updating of the seismic hazard of Lebanon

    NASA Astrophysics Data System (ADS)

    Huijer, Carla; Harajli, Mohamed; Sadek, Salah

    2016-01-01

    This paper presents the results of a study undertaken to evaluate the implications of the newly mapped offshore Mount Lebanon Thrust (MLT) fault system on the seismic hazard of Lebanon and the current seismic zoning and design parameters used by the local engineering community. This re-evaluation is critical, given that the MLT is located at close proximity to the major cities and economic centers of the country. The updated seismic hazard was assessed using probabilistic methods of analysis. The potential sources of seismic activities that affect Lebanon were integrated along with any/all newly established characteristics within an updated database which includes the newly mapped fault system. The earthquake recurrence relationships of these sources were developed from instrumental seismology data, historical records, and earlier studies undertaken to evaluate the seismic hazard of neighboring countries. Maps of peak ground acceleration contours, based on 10 % probability of exceedance in 50 years (as per Uniform Building Code (UBC) 1997), as well as 0.2 and 1 s peak spectral acceleration contours, based on 2 % probability of exceedance in 50 years (as per International Building Code (IBC) 2012), were also developed. Finally, spectral charts for the main coastal cities of Beirut, Tripoli, Jounieh, Byblos, Saida, and Tyre are provided for use by designers.
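
    Under the usual Poisson assumption for earthquake occurrence, the two exceedance criteria quoted above map directly to mean return periods; a quick check:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period T from exceedance probability p over t years,
    assuming Poisson earthquake occurrences: p = 1 - exp(-t/T)."""
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))  # ~475 yr (UBC 1997 design level)
print(round(return_period(0.02, 50)))  # ~2475 yr (IBC 2012 design level)
```

    The move from UBC 1997 to IBC 2012 ground motions is thus a shift from roughly 475-year to roughly 2475-year hazard, before the IBC's additional scaling factors are applied.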

  17. Quantitative analysis of vacuum-ultraviolet radiation from nanosecond laser-zinc interaction

    NASA Astrophysics Data System (ADS)

    Parchamy, Homaira; Szilagyi, John; Masnavi, Majid; Richardson, Martin

    2018-07-01

    The paper reports measurements of the vacuum-ultraviolet spectral irradiances of a flat zinc target over a wavelength region of 124-164 nm generated by 10 and 60 ns duration low-intensities, 5 ×109 - 3 ×1010 W cm-2, 1.06 μm wavelength laser pulses. Maximum radiation conversion efficiencies of 2.5%/2πsr and 0.8%/2πsr were measured for 60 and 10 ns laser pulses at the intensities of 5 ×109 and 1.4 ×1010 W cm-2, respectively. Atomic structure calculations using a relativistic configuration-interaction, flexible atomic code and a developed non-local thermodynamic equilibrium population kinetics model in comparison to the experimental spectra detected by the Seya-Namioka type monochromator reveal the strong broadband experimental emission originates mainly from 3d94p-3d94s, 3d94d-3d94p and 3d84p-3d84s, 3d84d-3d84p unresolved-transition arrays of double and triple ionized zinc, respectively. Two-dimensional radiation-hydrodynamics code is used to investigate time-space plasma evolution and spectral radiation of a 10 ns full-width-at-half-maximum Gaussian laser pulse-zinc interaction.

  18. Implementation and Testing of Turbulence Models for the F18-HARV Simulation

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1998-01-01

    This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
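
    One common implementation route for Dryden turbulence, shaping white noise with a first-order lag, can be sketched as follows. The airspeed, scale length, and intensity values are arbitrary, and gain conventions vary between references, so the gain here is simply chosen to make the stationary standard deviation equal sigma; this is an illustration, not any of the report's three implementations:

```python
import numpy as np

# White noise through a first-order lag gives the Dryden longitudinal
# spectral shape 1 / (1 + (L*omega/V)^2).
V, L, sigma = 100.0, 500.0, 2.0   # airspeed, scale length, intensity (hypothetical)
dt, n = 0.01, 200000
a = V / L                         # filter break frequency, rad/s
rng = np.random.default_rng(4)
dW = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments
u = np.zeros(n)
for k in range(n - 1):
    # Euler step of the shaping filter du = -a*u dt + sigma*sqrt(2a) dW
    u[k + 1] = u[k] - a * u[k] * dt + sigma * np.sqrt(2.0 * a) * dW[k]
print(float(u.std()))  # should be close to sigma
```

    Comparing the sample power spectral density of such a sequence against the analytic Dryden form is the same check the report performs on its three implementations.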

  19. Code for the calculation of the instrumental profile: preliminary results. (Spanish Title: Código para el cálculo del perfil instrumental: resultados preliminares)

    NASA Astrophysics Data System (ADS)

    Pintado, O. I.; Santillán, L.; Marquetti, M. E.

    All images obtained with a telescope are distorted by the instrument. This distortion is known as the instrumental profile or instrumental broadening. The resulting deformations in the spectra can introduce large errors in the determination of different parameters, especially those that depend on spectral line shapes, such as chemical abundances, winds, microturbulence, etc. To correct this distortion, in some cases the spectral lines are convolved with a Gaussian function, and in others the lines are broadened by a fixed value. Some codes used to calculate synthetic spectra, such as SYNTHE, include these corrections. We present results obtained for the REOSC and EBASIM spectrographs at CASLEO.

  20. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. 
The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
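
    The one-step least-squares adjustment can be illustrated with a toy generalized-least-squares update. This is not the STAYSL PNNL algorithm itself; the group structure, cross sections, and covariances below are invented for illustration, and real applications use many energy groups and full (correlated) covariance matrices:

    ```python
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def matvec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def inv2(M):
        # inverse of a 2x2 matrix
        (a, b), (c, d) = M
        det = a * d - b * c
        return [[d / det, -b / det], [-c / det, a / det]]

    # toy problem: 3 energy groups, 2 activation reactions (all numbers illustrative)
    phi0 = [1.0, 2.0, 0.5]                          # prior group fluxes
    M = [[0.04, 0, 0], [0, 0.16, 0], [0, 0, 0.01]]  # prior flux covariance (uncorrelated)
    S = [[1.0, 0.2, 0.0],                           # group cross sections, reaction 1
         [0.0, 0.5, 2.0]]                           # group cross sections, reaction 2
    a_meas = [1.5, 2.1]                             # measured saturated activation rates
    V_a = [[0.01, 0.0], [0.0, 0.01]]                # measurement covariance

    St = [list(col) for col in zip(*S)]
    K = matmul(matmul(S, M), St)                    # S M S^T
    W = [[K[i][j] + V_a[i][j] for j in range(2)] for i in range(2)]
    gain = matmul(matmul(M, St), inv2(W))           # least-squares gain, 3x2
    resid = [am - sp for am, sp in zip(a_meas, matvec(S, phi0))]
    phi_adj = [p + g for p, g in zip(phi0, matvec(gain, resid))]
    ```

    The adjusted fluxes pull the predicted activation rates toward the measurements by an amount weighted by the prior and measurement covariances, which is the essence of the STAY'SL-family adjustment.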

  1. Pixel Statistical Analysis of Diabetic vs. Non-diabetic Foot-Sole Spectral Terahertz Reflection Images

    NASA Astrophysics Data System (ADS)

    Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.

    2018-03-01

    In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red yellow green) images where pixels are assigned one of the three colors in order to easily identify areas in risk of ulceration. We also present the statistics of the number of pixels with each color as a potential quantitative indicator for diabetic foot-syndrome deterioration.
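
    The RYG color coding described above amounts to thresholding each hydration pixel and tallying the colors. A minimal sketch follows; the two cut-off values are hypothetical, not the thresholds used in the article:

    ```python
    from collections import Counter

    # hypothetical hydration thresholds (fraction of water content); the
    # article's actual cut-offs are not reproduced here
    GREEN_MIN, YELLOW_MIN = 0.60, 0.45

    def ryg_classify(hydration_map):
        """Assign each pixel 'G' (healthy), 'Y' (at risk), or 'R' (high risk)
        by its hydration value, and return per-color pixel counts."""
        labels = [['G' if h >= GREEN_MIN else 'Y' if h >= YELLOW_MIN else 'R'
                   for h in row] for row in hydration_map]
        counts = Counter(label for row in labels for label in row)
        return labels, counts

    # toy 2x3 hydration map
    hydration = [[0.70, 0.62, 0.50],
                 [0.48, 0.40, 0.30]]
    labels, counts = ryg_classify(hydration)
    ```

    The per-color counts in `counts` play the role of the quantitative deterioration indicator proposed in the article.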

  2. Fast-ion D(alpha) measurements and simulations in DIII-D

    NASA Astrophysics Data System (ADS)

    Luo, Yadong

    The fast-ion Dalpha diagnostic measures the Doppler-shifted Dalpha light emitted by neutralized fast ions. For a favorable viewing geometry, the bright interference from beam neutrals, halo neutrals, and edge neutrals spans a small wavelength range around the Dalpha rest wavelength and is blocked by a vertical bar at the exit focal plane of the spectrometer. Background subtraction and fitting techniques eliminate various contaminants in the spectrum. Fast-ion data are acquired with a time resolution of ˜1 ms, spatial resolution of ˜5 cm, and energy resolution of ˜10 keV. A weighted Monte Carlo simulation code models the fast-ion Dalpha spectra based on the fast-ion distribution function from other sources. In quiet plasmas, the measured spectral shape is in excellent agreement with the simulation, and the absolute magnitude is in reasonable agreement. The fast-ion Dalpha signal has the expected dependencies on plasma and neutral beam parameters. The neutral particle diagnostic and neutron diagnostic corroborate the fast-ion Dalpha measurements. The relative spatial profile is in agreement with the simulated profile based on the fast-ion distribution function from the TRANSP analysis code. During ion cyclotron heating, fast ions with high perpendicular energy are accelerated, while those with low perpendicular energy are barely affected. The spatial profile is compared with the simulated profiles based on the fast-ion distribution functions from the CQL Fokker-Planck code. In discharges with Alfven instabilities, both the spatial profile and the spectral shape suggest that fast ions are redistributed. The flattened fast-ion Dalpha profile is in agreement with the fast-ion pressure profile.
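
    The wavelength shift that separates fast-ion light from the bright rest-wavelength emission follows from the standard non-relativistic Doppler formula. The sketch below is a generic illustration of that relation, not the diagnostic's weighted Monte Carlo forward model; the 80 keV example energy is merely representative of neutral-beam ions:

    ```python
    import math

    C = 2.998e8            # speed of light, m/s
    M_D = 3.344e-27        # deuterium mass, kg
    E_CHARGE = 1.602e-19   # J per eV
    LAMBDA_DALPHA = 656.1  # D-alpha rest wavelength, nm

    def doppler_shifted_wavelength(energy_keV, cos_theta):
        """Non-relativistic Doppler-shifted wavelength of D-alpha light from a
        neutralized fast ion of the given energy, where cos_theta is the cosine
        of the angle between the ion velocity and the viewing sightline."""
        v = math.sqrt(2.0 * energy_keV * 1e3 * E_CHARGE / M_D)
        return LAMBDA_DALPHA * (1.0 + (v / C) * cos_theta)

    # an 80 keV deuteron moving directly along the sightline
    shift = doppler_shifted_wavelength(80.0, 1.0) - LAMBDA_DALPHA
    ```

    Shifts of several nanometres place beam-energy fast-ion light well outside the narrow blocked band around the rest wavelength.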

  3. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
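
    The preprocessing step described above (center-surround filtering followed by rectification into parallel ON and OFF channels) can be sketched compactly. A 3x3 center-minus-surround kernel is used here as a simple stand-in for the filter in the study, applied to a toy single-channel "image":

    ```python
    def center_surround(img):
        """3x3 center-surround filter (center minus mean of 8 neighbours),
        followed by half-wave rectification into parallel ON and OFF channels."""
        h, w = len(img), len(img[0])
        on = [[0.0] * w for _ in range(h)]
        off = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                surround = sum(img[y + dy][x + dx]
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                               if (dy, dx) != (0, 0)) / 8.0
                resp = img[y][x] - surround
                on[y][x] = max(resp, 0.0)    # ON channel: positive contrast
                off[y][x] = max(-resp, 0.0)  # OFF channel: negative contrast
        return on, off

    # a bright spot on a dark background drives the ON channel at the spot
    img = [[0, 0, 0, 0],
           [0, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    on, off = center_surround(img)
    ```

    In the study, ICA is then run on such rectified ON/OFF cone-channel signals rather than on the raw cone values.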

  4. Aeras: A next generation global atmosphere model

    DOE PAGES

    Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...

    2015-06-01

    Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.

  5. The torsional barriers of two equivalent methyl internal rotations in 2,5-dimethylfuran investigated by microwave spectroscopy

    NASA Astrophysics Data System (ADS)

    Van, Vinh; Bruckhuisen, Jonas; Stahl, Wolfgang; Ilyushin, Vadim; Nguyen, Ha Vinh Lam

    2018-01-01

    The microwave spectrum of 2,5-dimethylfuran was recorded using two pulsed molecular jet Fourier transform microwave spectrometers which cover the frequency range from 2 to 40 GHz. The internal rotations of two equivalent methyl tops with a barrier height of approximately 439.15 cm⁻¹ introduce torsional splittings of all rotational transitions in the spectrum. For the spectral analysis, two different computer programs were applied and compared, the PAM-C2v-2tops code based on the principal axis method which treats several torsional states simultaneously, and the XIAM code based on the combined axis method, yielding accurate molecular parameters. The experimental work was supplemented by quantum chemical calculations. Two-dimensional potential energy surfaces depending on the torsional angles of both methyl groups were calculated and parametrized.

  6. Hyperspectral IASI L1C Data Compression.

    PubMed

    García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan

    2017-06-16

    The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecasting and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 gigabytes per day). Efficient compression techniques are required for both transmission and storage of such huge data volumes. This study reviews the performance of several state-of-the-art coding standards and techniques for IASI L1C data compression. The discussion embraces lossless, near-lossless, and lossy compression. Several spectral transforms, essential to achieve improved coding performance given the high spectral redundancy inherent to IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each of IASI-A and IASI-B from July 2013 to June 2014). Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.
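
    The benefit of spectral decorrelation before lossless coding can be shown with a toy experiment. First-order differencing is used here as a trivial stand-in for the spectral transforms the survey discusses, and zlib as a stand-in for the coding standards; the synthetic "spectrum" is invented:

    ```python
    import math
    import zlib

    # synthetic smooth "spectrum" with high inter-channel redundancy
    spectrum = [int(1000 + 200 * math.sin(i / 40.0)) for i in range(4096)]

    def to_bytes(vals):
        """Pack 16-bit signed samples big-endian."""
        return b"".join(v.to_bytes(2, "big", signed=True) for v in vals)

    raw = to_bytes(spectrum)
    # spectral decorrelation by first-order differencing: neighbouring channels
    # are highly correlated, so the residuals are small and highly compressible
    deltas = [spectrum[0]] + [b - a for a, b in zip(spectrum, spectrum[1:])]
    decorrelated = to_bytes(deltas)

    raw_size = len(zlib.compress(raw, 9))
    decorr_size = len(zlib.compress(decorrelated, 9))
    ```

    The decorrelated stream compresses markedly better than the raw one, and the original samples are exactly recoverable by cumulative summation, i.e. the pipeline stays lossless.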

  7. Viriato: a Fourier-Hermite spectral code for strongly magnetised fluid-kinetic plasma dynamics

    NASA Astrophysics Data System (ADS)

    Loureiro, Nuno; Dorland, William; Fazendeiro, Luis; Kanekar, Anjor; Mallet, Alfred; Zocco, Alessandro

    2015-11-01

    We report on the algorithms and numerical methods used in Viriato, a novel fluid-kinetic code that solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model equations [Zocco & Schekochihin, 2011] and (ii) the kinetic reduced MHD (KRMHD) equations [Schekochihin et al., 2009]. Two main applications of these equations are magnetised (Alfvénic) plasma turbulence and magnetic reconnection. Viriato uses operator splitting to separate the dynamics parallel and perpendicular to the ambient magnetic field (assumed strong). Along the magnetic field, Viriato allows for either a second-order accurate MacCormack method or, for higher accuracy, a spectral-like scheme. Perpendicular to the field Viriato is pseudo-spectral, and the time integration is performed by means of an iterative predictor-corrector scheme. In addition, a distinctive feature of Viriato is its spectral representation of the parallel velocity-space dependence, achieved by means of a Hermite representation of the perturbed distribution function. A series of linear and nonlinear benchmarks and tests are presented, with focus on 3D decaying kinetic turbulence. Work partially supported by Fundação para a Ciência e Tecnologia via Grants UID/FIS/50010/2013 and IF/00530/2013.

  8. Photoreceptor spectral sensitivities in terrestrial animals: adaptations for luminance and colour vision

    PubMed Central

    Osorio, D; Vorobyev, M

    2005-01-01

    This review outlines how eyes of terrestrial vertebrates and insects meet the competing requirements of coding both spatial and spectral information. There is no unique solution to this problem. Thus, mammals and honeybees use their long-wavelength receptors for both achromatic (luminance) and colour vision, whereas flies and birds probably use separate sets of photoreceptors for the two purposes. In particular, we look at spectral tuning and diversification among ‘long-wavelength’ receptors (sensitivity maxima at greater than 500 nm), which play a primary role in luminance vision. Data on spectral sensitivities and phylogeny of visual photopigments can be incorporated into theoretical models to suggest how eyes are adapted to coding natural stimuli. Models indicate, for example, that animal colour vision—involving five or fewer broadly tuned receptors—is well matched to most natural spectra. We can also predict that the particular objects of interest and signal-to-noise ratios will affect the optimal eye design. Nonetheless, it remains difficult to account for the adaptive significance of features such as co-expression of photopigments in single receptors, variation in spectral sensitivities of mammalian L-cone pigments and the diversification of long-wavelength receptors that has occurred in several terrestrial lineages. PMID:16096084

  9. Simulated Raman Spectral Analysis of Organic Molecules

    NASA Astrophysics Data System (ADS)

    Lu, Lu

    The advent of laser technology in the 1960s solved the main difficulty of Raman spectroscopy, resulting in simplified Raman spectroscopy instruments and also boosting the sensitivity of the technique. Today, Raman spectroscopy is commonly used in chemistry and biology. As vibrational information is specific to the chemical bonds, Raman spectroscopy provides fingerprints to identify the type of molecules in a sample. In this thesis, we simulate the Raman spectra of organic and inorganic materials with the General Atomic and Molecular Electronic Structure System (GAMESS) and Gaussian, two computational codes that perform several general chemistry calculations. We run these codes on our CPU-based high-performance cluster (HPC). Through the Message Passing Interface (MPI), a standardized and portable message-passing system that allows the codes to run in parallel, we are able to decrease the computation time and increase the sizes and capacities of the systems simulated by the codes. From our simulations, we will build a database that allows a search algorithm to quickly identify N-H and O-H bonds in different materials. Our ultimate goal is to analyze and identify the spectra of organic matter compositions from meteorites and compare these spectra with terrestrial biologically produced amino acids and residues.

  10. Performance Analysis of OCDMA Based on AND Detection in FTTH Access Network Using PIN & APD Photodiodes

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.

    2011-06-01

    To compare PIN photodetectors and avalanche photodiodes (APDs), a system using the double-weight (DW) code was evaluated as an optical spectrum CDMA implementation in an FTTH network with a point-to-multipoint (P2MP) architecture. The performance of the PIN and APD receivers was compared through simulation using OptiSystem software version 7. Two networks were designed: one using a PIN photodetector and the second an APD, each simulated with and without an erbium-doped fiber amplifier (EDFA). The APD was found to outperform the PIN photodetector in all simulation results. Wavelength conversion used a Mach-Zehnder interferometer (MZI) converter. We also study a detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs encode and decode the spectral amplitude coding, namely the double-weight (DW) code, in optical code-division multiple access (OCDMA). Performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.

  11. Analysis Tools for the Ion Cyclotron Emission Diagnostic on DIII-D

    NASA Astrophysics Data System (ADS)

    Del Castillo, C. A.; Thome, K. E.; Pinsker, R. I.; Meneghini, O.; Pace, D. C.

    2017-10-01

    Ion cyclotron emission (ICE) waves are excited by suprathermal particles such as neutral beam particles and fusion products. An ICE diagnostic is under consideration for use at ITER, where it could provide an important passive measurement of fast-ion locations and losses, which are otherwise difficult to determine. Simple ICE data analysis codes had previously been developed, but more sophisticated codes are required to facilitate data analysis. Several terabytes of ICE data were collected on DIII-D during the 2015-2017 campaign. The ICE diagnostic consists of antenna straps and dedicated magnetic probes that are both digitized at 200 MHz. A suite of Python spectral analysis tools within the OMFIT framework is under development to perform the memory-intensive analysis of this data. A fast and optimized analysis allows ready access to data visualizations as spectrograms and as plots of both frequency and time cuts of the data. A database of processed ICE data is being constructed to understand the relationship between the frequency and intensity of ICE and a variety of experimental parameters including neutral beam power and geometry, local and global plasma parameters, magnetic fields, and many others. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER54698.
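
    The spectrogram step at the heart of such tools is a short-time Fourier transform. The OMFIT suite itself is not reproduced here; the sketch below is a generic, naive-DFT illustration with invented parameters, locating the dominant frequency bin of a test tone in one analysis frame:

    ```python
    import cmath
    import math

    def dft_mag(frame):
        """Magnitudes of the DFT of one real frame (naive O(N^2), fine for a sketch)."""
        n = len(frame)
        return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) for k in range(n // 2)]

    def spectrogram(signal, frame_len=64):
        """Split the signal into non-overlapping frames and DFT each one."""
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, frame_len)]
        return [dft_mag(f) for f in frames]

    fs = 1000.0                       # sample rate (arbitrary units)
    tone = [math.sin(2 * math.pi * 125.0 * t / fs) for t in range(512)]
    spec = spectrogram(tone)
    peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
    peak_freq = peak_bin * fs / 64    # bin spacing = fs / frame_len
    ```

    Production tools use FFTs, windowing, and overlapping frames, but the frequency-versus-time structure they visualize is exactly this array of per-frame spectra.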

  12. Stark broadening parameters and transition probabilities of persistent lines of Tl II

    NASA Astrophysics Data System (ADS)

    de Andrés-García, I.; Colón, C.; Fernández-Martínez, F.

    2018-05-01

    The presence of singly ionized thallium in the stellar atmosphere of the chemically peculiar star χ Lupi was reported by Leckrone et al. in 1999 by analysis of its stellar spectrum obtained with the Goddard High Resolution Spectrograph (GHRS) on board the Hubble Space Telescope. Atomic data about the spectral line of 1307.50 Å and about the hyperfine components of the spectral lines of 1321.71 Å and 1908.64 Å were taken from different sources and used to analyse the isotopic abundance of thallium II in the star χ Lupi. From their results the authors concluded that the photosphere of the star presents an anomalous isotopic composition of Tl II. A study of the atomic parameters of Tl II and of the broadening by the Stark effect of its spectral lines (and therefore of the possible overlaps of these lines) can help to clarify the conclusions about the spectral abundance of Tl II in different stars. In this paper we present calculated values of the atomic transition probabilities and Stark broadening parameters for 49 spectral lines of Tl II obtained by using the Cowan code including core polarization effects and the Griem semiempirical approach. Theoretical values of radiative lifetimes for 11 levels (eight with experimental values in the bibliography) are calculated and compared with the experimental values in order to test the quality of our results. Theoretical trends of the Stark width and shift parameters versus the temperature for spectral lines of astrophysical interest are displayed. Trends of our calculated Stark width for the isoelectronic sequence Tl II-Pb III-Bi IV are also displayed.

  13. Peptide library synthesis on spectrally encoded beads for multiplexed protein/peptide bioassays

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy Q.; Brower, Kara; Harink, Björn; Baxter, Brian; Thorn, Kurt S.; Fordyce, Polly M.

    2017-02-01

    Protein-peptide interactions are essential for cellular responses. Despite their importance, these interactions remain largely uncharacterized due to experimental challenges associated with their measurement. Current techniques (e.g. surface plasmon resonance, fluorescence polarization, and isothermal calorimetry) either require large amounts of purified material or direct fluorescent labeling, making high-throughput measurements laborious and expensive. In this report, we present a new technology for measuring antibody-peptide interactions in vitro that leverages spectrally encoded beads for biological multiplexing. Specific peptide sequences are synthesized directly on encoded beads with a 1:1 relationship between peptide sequence and embedded code, thereby making it possible to track many peptide sequences throughout the course of an experiment within a single small volume. We demonstrate the potential of these bead-bound peptide libraries by: (1) creating a set of 46 peptides composed of 3 commonly used epitope tags (myc, FLAG, and HA) and single amino-acid scanning mutants; (2) incubating with a mixture of fluorescently-labeled anti-myc, anti-FLAG, and anti-HA antibodies; and (3) imaging these bead-bound libraries to simultaneously identify the embedded spectral code (and thus the sequence of the associated peptide) and quantify the amount of each antibody bound. To our knowledge, these data demonstrate the first customized peptide library synthesized directly on spectrally encoded beads. While the implementation demonstrated here involves a high-affinity antibody-peptide interaction with a small code space, we believe this platform can be broadly applied to a wide range of peptide screening applications, with the capability to multiplex libraries of hundreds to thousands of peptides in a single assay.

  14. Interrelating meteorite and asteroid spectra at UV-Vis-NIR wavelengths using novel multiple-scattering methods

    NASA Astrophysics Data System (ADS)

    Martikainen, Julia; Penttilä, Antti; Gritsevich, Maria; Muinonen, Karri

    2017-10-01

    Asteroids have remained mostly the same for the past 4.5 billion years, and provide us information on the origin, evolution and current state of the Solar System. Asteroids and meteorites can be linked by matching their respective reflectance spectra. This is difficult, because spectral features depend strongly on the surface properties, and meteorite surfaces are free of the regolith dust present on asteroids. Furthermore, asteroid surfaces experience space weathering, which affects their spectral features. We present a novel simulation framework for assessing the spectral properties of meteorites and asteroids and matching their reflectance spectra. The simulations are carried out by utilizing a light-scattering code that takes inhomogeneous waves into account and simulates light scattering by Gaussian-random-sphere particles large compared to the wavelength of the incident light. The code uses incoherent input and computes phase matrices by utilizing incoherent scattering matrices. Reflectance spectra are modeled by combining olivine, pyroxene, and iron, the most common materials that dominate the spectral features of asteroids and meteorites. Space weathering is taken into account by adding nanoiron into the modeled asteroid spectrum. The complex refractive indices needed for the simulations are obtained from existing databases, or derived using an optimization that utilizes our ray-optics code and the measured spectrum of the material. We demonstrate our approach by applying it to the reflectance spectrum of (4) Vesta and the reflectance spectrum of the Johnstown meteorite measured with the University of Helsinki integrating-sphere UV-Vis-NIR spectrometer. Acknowledgments: The research is funded by the ERC Advanced Grant No. 320773 (SAEMPL).

  15. VizieR Online Data Catalog: LAMOST-Kepler MKCLASS spectral classification (Gray+, 2016)

    NASA Astrophysics Data System (ADS)

    Gray, R. O.; Corbally, C. J.; De Cat, P.; Fu, J. N.; Ren, A. B.; Shi, J. R.; Luo, A. L.; Zhang, H. T.; Wu, Y.; Cao, Z.; Li, G.; Zhang, Y.; Hou, Y.; Wang, Y.

    2016-07-01

    The data for the LAMOST-Kepler project are supplied by the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST, also known as the Guo Shou Jing Telescope). This unique astronomical instrument is located at the Xinglong observatory in China, and combines a large aperture (4 m) telescope with a 5° circular field of view (Wang et al. 1996ApOpt..35.5155W). Our role in this project is to supply accurate two-dimensional spectral types for the observed targets. The large number of spectra obtained for this project (101086) makes traditional visual classification techniques impractical, so we have utilized the MKCLASS code to perform these classifications. The MKCLASS code (Gray & Corbally 2014AJ....147...80G, v1.07 http://www.appstate.edu/~grayro/mkclass/), an expert system designed to classify blue-violet spectra on the MK Classification system, was employed to produce the spectral classifications reported in this paper. MKCLASS was designed to reproduce the steps skilled human classifiers employ in the classification process. (2 data files).

  16. Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.

    PubMed

    McGregor, T J; Spence, D J; Coutts, D W

    2008-01-01

    We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.

  17. Advanced Spectral Modeling Development

    DTIC Science & Technology

    1992-09-14

    above, the AFGL line-by-line code already possesses many of the attributes desired of a generally applicable transmittance/radiance simulation code, it...transmittance calculations, (b) perform generalized multiple scattering calculations, (c) calculate both heating and dissociative fluxes, (d) provide...This report is subdivided into task specific subsections. The following section describes our general approach to address these technical issues (Section

  18. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking, however, its stability is shown to be unacceptably poorer than existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. 
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
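
    The Linear Prediction model reviewed in the thesis is commonly fitted by the autocorrelation method with the Levinson-Durbin recursion. The sketch below is a generic textbook implementation of that standard approach (not the thesis's own algorithm), recovering the coefficients of a synthetic second-order autoregressive "speech" signal:

    ```python
    import random

    def autocorr(x, max_lag):
        """Unnormalized autocorrelation at lags 0..max_lag."""
        return [sum(x[t] * x[t - lag] for t in range(lag, len(x)))
                for lag in range(max_lag + 1)]

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for LPC coefficients a[1..order]
        in x[t] ~ sum_k a[k] * x[t-k] via the Levinson-Durbin recursion."""
        a = [0.0] * (order + 1)
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
            k = acc / err                      # reflection coefficient
            new_a = a[:]
            new_a[i] = k
            for j in range(1, i):
                new_a[j] = a[j] - k * a[i - j]
            a = new_a
            err *= (1.0 - k * k)               # residual prediction-error energy
        return a[1:], err

    # synthesize a stable AR(2) process x[t] = 1.3*x[t-1] - 0.6*x[t-2] + noise
    rng = random.Random(1)
    x = [0.0, 0.0]
    for _ in range(20000):
        x.append(1.3 * x[-1] - 0.6 * x[-2] + rng.gauss(0.0, 1.0))
    coeffs, err = levinson_durbin(autocorr(x, 2), 2)
    ```

    The reflection coefficients produced along the way also give the stability check mentioned in the thesis: the model is stable exactly when every `k` has magnitude below one.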

  19. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech

    PubMed Central

    Dilley, Laura C.; Wieland, Elizabeth A.; Gamache, Jessica L.; McAuley, J. Devin; Redford, Melissa A.

    2013-01-01

    Purpose As children mature, changes in voice spectral characteristics covary with changes in speech, language, and behavior. Spectral characteristics were manipulated to alter the perceived ages of talkers’ voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Method Speech was modified by lowering formants and fundamental frequency, for 5-year-old children’s utterances, or raising them, for adult caregivers’ utterances. Next, participants differing in awareness of the manipulation (Exp. 1a) or amount of speech-language training (Exp. 1b) made judgments of prosodic, segmental, and talker attributes. Exp. 2 investigated the effects of spectral modification on intelligibility. Finally, in Exp. 3 trained analysts used formal prosody coding to assess prosodic characteristics of spectrally-modified and unmodified speech. Results Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Conclusions Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work. PMID:23275414

  20. Digital signal processing of the phonocardiogram: review of the most recent advancements.

    PubMed

    Durand, L G; Pibarot, P

    1995-01-01

    The objective of the present paper is to provide a detailed review of the most recent developments in instrumentation and signal processing of digital phonocardiography and heart auscultation. After a short introduction, the paper presents a brief history of heart auscultation and phonocardiography, which is followed by a summary of the basic theories and controversies regarding the genesis of the heart sounds. The application of spectral analysis and the potential of new time-frequency representations and cardiac acoustic mapping to resolve the controversies and better understand the genesis and transmission of heart sounds and murmurs within the heart-thorax acoustic system are reviewed. The most recent developments in the application of linear predictive coding, spectral analysis, time-frequency representation techniques, and pattern recognition for the detection and follow-up of native and prosthetic valve degeneration and dysfunction are also presented in detail. New areas of research and clinical applications and areas of potential future developments are then highlighted. The final section is a discussion about a multidegree of freedom theory on the origin of the heart sounds and murmurs, which is completed by the authors' conclusion.

  1. Design and construction of an Offner spectrometer based on geometrical analysis of ring fields.

    PubMed

    Kim, Seo Hyun; Kong, Hong Jin; Lee, Jong Ung; Lee, Jun Ho; Lee, Jai Hoon

    2014-08-01

    A method to obtain an aberration-corrected Offner spectrometer without ray obstruction is proposed. A new, more efficient spectrometer optics design is suggested in order to increase its spectral resolution. The derivation of a new ring equation to eliminate ray obstruction is based on geometrical analysis of the ring fields for various numerical apertures. The analytical design applying this equation was demonstrated using the optical design software Code V in order to manufacture a spectrometer working at wavelengths of 900-1700 nm. The simulation results show that the new concept offers an analytical initial design requiring the least computation time. The simulated spectrometer exhibited a modulation transfer function over 80% at the Nyquist frequency, root-mean-square spot diameters under 8.6 μm, and a spectral resolution of 3.2 nm. The final design and realization of a high-resolution Offner spectrometer were demonstrated based on the simulation results. The equation and analytical design procedure shown here can be applied to most Offner systems regardless of the wavelength range.

  2. Audio-vocal responses of vocal fundamental frequency and formant during sustained vowel vocalizations in different noises.

    PubMed

    Lee, Shao-Hsuan; Hsiao, Tzu-Yu; Lee, Guo-She

    2015-06-01

    Sustained vocalizations of the vowels [a] and [i] and the syllable [mə] were collected from twenty normal-hearing individuals. During vocalization, five audio-vocal feedback conditions were introduced separately to the speakers: no masking, wearing supra-aural headphones only, speech-noise masking, high-pass noise masking, and broad-band-noise masking. Power spectral analysis of the vocal fundamental frequency (F0) was used to evaluate modulations of F0, and linear predictive coding was used to acquire the first two formants. The results showed that while the formant frequencies were not significantly shifted, low-frequency modulations (<3 Hz) of F0 significantly increased with reduced audio-vocal feedback across speech sounds and were significantly correlated with speakers' auditory awareness of their own voices. For sustained speech production, motor control of F0 may depend on a feedback mechanism, while articulation should rely more on a feedforward mechanism. Power spectral analysis of F0 might be applied to evaluate audio-vocal control in various hearing and neurological disorders in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
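    The low-frequency (<3 Hz) F0-modulation measure described in this abstract can be approximated with a plain periodogram. The 100 Hz contour sampling rate, the cutoff default, and the synthetic contours in the test are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def low_freq_modulation_power(f0, fs=100.0, cutoff=3.0):
    """Fraction of F0-contour modulation power below `cutoff` Hz.

    f0: fundamental-frequency contour in Hz, sampled at fs Hz. The mean
    is removed first so only modulations around the baseline count.
    """
    f0 = np.asarray(f0, dtype=float)
    x = f0 - f0.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2          # periodogram (unnormalized)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = spec[1:].sum()                      # exclude the DC bin
    low = spec[(freqs > 0) & (freqs < cutoff)].sum()
    return low / total
```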

  3. Timing the warm absorber in NGC4051

    NASA Astrophysics Data System (ADS)

    Silva, C.; Uttley, P.; Costantini, E.

    2015-07-01

    In this work we have combined spectral and timing analysis in the characterization of highly ionized outflows in Seyfert galaxies, the so-called warm absorbers. Here, we present our results on the extensive ˜600ks of XMM-Newton archival observations of the bright and highly variable Seyfert 1 galaxy NGC4051, whose spectrum has revealed a complex multi-component wind. Working simultaneously with RGS and PN data, we have performed a detailed analysis using a time-dependent photoionization code in combination with spectral and Fourier timing techniques. This method allows us to study in detail the response of the gas due to variations in the ionizing flux of the central source. As a result, we will show the contribution of the recombining gas to the time delays of the most highly absorbed energy bands relative to the continuum (Silva, Uttley & Costantini in prep.), which is also vital information for interpreting the continuum lags associated with propagation and reverberation effects in the inner emitting regions. Furthermore, we will illustrate how this powerful method can be applied to other sources and warm-absorber configurations, allowing for a wide range of studies.

  4. Analysis Of Irtf Spex Near-infrared Observations Of Uranus: Aerosol Optical Properties And Latitudinally Variable Methane

    NASA Astrophysics Data System (ADS)

    Tice, Dane; Irwin, P. G. J.; Fletcher, L. N.; Teanby, N. A.; Hurley, J.; Orton, G. S.; Davis, G. R.

    2012-10-01

    We present results from the analysis of near-infrared spectra of Uranus observed in August 2009 with the SpeX spectrograph at the NASA Infrared Telescope Facility (IRTF). Spectra range from 0.8 to 1.8 μm at a spatial resolution of 0.5” and a spectral resolution of R = 1,200. These data are particularly well-suited to characterizing the optical properties of aerosols in the Uranian stratosphere and upper troposphere. This is in part due to their coverage shortward of 1.0 μm, where methane absorption, which dominates the features in the Uranian near-infrared spectrum, weakens slightly. Another particularly useful aspect of the data is its specific, highly spectrally resolved (R > 4,000) coverage of the collision-induced hydrogen quadrupole absorption band at 825 nm, enabling us to differentiate between methane abundance and cloud opacity. An optimal-estimation retrieval code, NEMESIS, is used to analyze the spectra, and atmospheric models are developed that show good agreement with the data over the full spectral range analyzed. Aerosol single-scattering albedos that reveal a strong wavelength dependence will be discussed. Additionally, an analysis of latitudinal methane variability is undertaken using two methods. First, a reflectance study from locations along the central meridian is performed. The spectra from these locations are centered around 825 nm, where the collision-induced absorption feature of hydrogen is used to distinguish between latitudinal changes in the spectrum due to aerosol opacity and those due to methane variability. Second, high-resolution retrievals from the 0.8-0.9 μm portion of the spectrum, at spectral resolutions between R = 4,000 and 4,500, are used to make the same distinction. Both methods will be compared and discussed, as will their indications supporting a methane enrichment in the equatorial region of the planet.

  5. VizieR Online Data Catalog: A library of high-S/N optical spectra of FGKM stars (Yee+, 2017)

    NASA Astrophysics Data System (ADS)

    Yee, S. W.; Petigura, E. A.; von Braun, K.

    2017-09-01

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R~60000), high signal-to-noise ratio (S/N~150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ~M5-F1 (Teff~3000-7000K, R*~0.1-16R{Sun}). We also present "Empirical SpecMatch" (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100K in effective temperature (Teff), 15% in stellar radius (R*), and 0.09dex in metallicity ([Fe/H]). Because the code relies on empirical spectra it performs particularly well for stars ~K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70K in Teff, 10% in R*, and 0.12dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available. (2 data files).
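    SpecMatch-Emp itself is more elaborate (it shifts spectra onto a common scale, weights residuals, and interpolates between neighbors), but the core idea of matching an unknown spectrum against an empirical library reduces to a chi-square nearest neighbor. The Gaussian toy "spectra" and Teff values below are fabricated purely for illustration.

```python
import numpy as np

def match_spectrum(unknown, library_spectra, library_params):
    """Return (index, params) of the library spectrum that minimizes the
    summed squared residuals against the unknown spectrum. All spectra are
    assumed continuum-normalized on a common wavelength grid."""
    chi2 = np.array([np.sum((unknown - s) ** 2) for s in library_spectra])
    best = int(np.argmin(chi2))
    return best, library_params[best]

# toy library: one absorption line whose depth deepens toward cooler Teff
wav = np.linspace(5000.0, 5010.0, 200)

def toy_spectrum(depth):
    return 1.0 - depth * np.exp(-0.5 * ((wav - 5005.0) / 0.5) ** 2)

library = [toy_spectrum(d) for d in (0.2, 0.5, 0.8)]
params = [{"Teff": 6500}, {"Teff": 5500}, {"Teff": 4500}]
```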

  6. Actuator line simulations of a Joukowsky and Tjæreborg rotor using spectral element and finite volume methods

    NASA Astrophysics Data System (ADS)

    Kleusberg, E.; Sarmast, S.; Schlatter, P.; Ivanell, S.; Henningson, D. S.

    2016-09-01

    The wake structure behind a wind turbine, generated by the spectral element code Nek5000, is compared with that from the finite volume code EllipSys3D. The wind turbine blades are modeled using the actuator line method. We conduct the comparison on two different setups. One is based on an idealized rotor approximation with constant circulation imposed along the blades, corresponding to Glauert's optimal operating condition, and the other is the Tjæreborg wind turbine. The focus lies on analyzing the differences in the wake structures entailed by the different codes and corresponding setups. The comparisons show good agreement for the defining parameters of the wake, such as the wake expansion, helix pitch, and circulation of the helical vortices. Differences can be related to the lower numerical dissipation in Nek5000 and to the domain differences at the rotor center. At comparable resolution, Nek5000 yields more accurate results. It is observed that in the spectral element method the helical vortices, both at the tip and root of the actuator lines, retain their initial swirl velocity distribution for a longer distance in the near wake. This results in lower vortex core growth and larger maximum vorticity along the wake. Additionally, it is observed that the breakdown process of the spiral tip vortices differs significantly between the two methods, with vortex merging occurring immediately after the onset of instability in the finite volume code, while Nek5000 simulations exhibit a 2-3 radii period of vortex pairing before merging.

  7. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols

    DTIC Science & Technology

    2010-08-01

    arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don...sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding...amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems

  8. GALARIO: a GPU accelerated library for analysing radio interferometer observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2018-06-01

    We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and easy to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be used directly in C or C++. The versatility and the speed of GALARIO open new analysis pathways that would otherwise be prohibitively time consuming, e.g. fitting high-resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
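    The core operation GALARIO accelerates, turning a model image into visibilities, can be sketched with NumPy FFTs. This is a simplification, not GALARIO's API: the real library interpolates in the uv-plane and handles phase centers, whereas this sketch samples the nearest grid point; the pixel size and source used in the test are invented for illustration.

```python
import numpy as np

def synthetic_visibilities(image, dxy, u, v):
    """FFT a 2D sky model and sample it at interferometric (u, v) points.

    image: square 2D model (flux per pixel), dxy: pixel size in radians,
    u, v: baseline coordinates in wavelengths. Sampling is nearest grid
    point; production codes use bilinear or spline interpolation.
    """
    n = image.shape[0]
    vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image)))
    uv_cell = 1.0 / (n * dxy)                    # spacing of the uv grid
    iu = np.round(np.asarray(u) / uv_cell).astype(int) + n // 2
    iv = np.round(np.asarray(v) / uv_cell).astype(int) + n // 2
    return vis_grid[iv, iu]
```

    A handy sanity check is that a point source at the phase center yields flat visibilities equal to its total flux.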

  9. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
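    CUGEL's actual scheme (discrete lines handled directly, continuum handled iteratively) is not reproduced here, but the generic idea of unfolding a measured distribution through a detector response matrix can be sketched with a multiplicative, ML-EM-style iteration. The 4-bin response matrix in the test is invented for illustration.

```python
import numpy as np

def unfold(measured, response, iters=5000):
    """Multiplicative iterative unfolding: recover the source spectrum s
    from measured = response @ s, where column j of `response` is the
    detector response to a monoenergetic line in bin j. The update keeps
    s non-negative and leaves exact solutions fixed."""
    R = np.asarray(response, dtype=float)
    m = np.asarray(measured, dtype=float)
    s = np.full(R.shape[1], m.sum() / R.shape[1])   # flat initial guess
    colsum = R.sum(axis=0)
    for _ in range(iters):
        est = R @ s
        est = np.where(est > 0, est, 1e-12)          # guard divide-by-zero
        s = s * (R.T @ (m / est)) / colsum
    return s
```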

  10. The Born-again Planetary Nebula A78: An X-Ray Twin of A30

    NASA Astrophysics Data System (ADS)

    Toalá, J. A.; Guerrero, M. A.; Todt, H.; Hamann, W.-R.; Chu, Y.-H.; Gruendl, R. A.; Schönberner, D.; Oskinova, L. M.; Marquez-Lugo, R. A.; Fang, X.; Ramos-Larios, G.

    2015-01-01

    We present the XMM-Newton discovery of X-ray emission from the planetary nebula (PN) A78, the second born-again PN detected in X-rays apart from A30. These two PNe share similar spectral and morphological characteristics: they harbor diffuse soft X-ray emission associated with the interaction between the H-poor ejecta and the current fast stellar wind and a point-like source at the position of the central star (CSPN). We present the spectral analysis of the CSPN, using for the first time an NLTE code for expanding atmospheres that takes line blanketing into account for the UV and optical spectra. The wind abundances are used for the X-ray spectral analysis of the CSPN and the diffuse emission. The X-ray emission from the CSPN in A78 can be modeled by a single C VI emission line, while the X-ray emission from its diffuse component is better described by an optically thin plasma emission model with a temperature of kT = 0.088 keV (T ≈ 1.0 × 106 K). We estimate X-ray luminosities in the 0.2-2.0 keV energy band of L X, CSPN = (1.2 ± 0.3) × 1031 erg s-1 and L X, DIFF = (9.2 ± 2.3) × 1030 erg s-1 for the CSPN and diffuse components, respectively.

  11. Congenital amusia: a cognitive disorder limited to resolved harmonics and with no peripheral basis.

    PubMed

    Cousineau, Marion; Oxenham, Andrew J; Peretz, Isabelle

    2015-01-01

    Pitch plays a fundamental role in audition, from speech and music perception to auditory scene analysis. Congenital amusia is a neurogenetic disorder that appears to affect primarily pitch and melody perception. Pitch is normally conveyed by the spectro-temporal fine structure of low harmonics, but some pitch information is available in the temporal envelope produced by the interactions of higher harmonics. Using 10 amusic subjects and 10 matched controls, we tested the hypothesis that amusics suffer exclusively from impaired processing of spectro-temporal fine structure. We also tested whether the inability of amusics to process acoustic temporal fine structure extends beyond pitch by measuring sensitivity to interaural time differences, which also rely on temporal fine structure. Further tests were carried out on basic intensity and spectral resolution. As expected, pitch perception based on spectro-temporal fine structure was impaired in amusics; however, no significant deficits were observed in amusics' ability to perceive the pitch conveyed via temporal-envelope cues. Sensitivity to interaural time differences was also not significantly different between the amusic and control groups, ruling out deficits in the peripheral coding of temporal fine structure. Finally, no significant differences in intensity or spectral resolution were found between the amusic and control groups. The results demonstrate a pitch-specific deficit in fine spectro-temporal information processing in amusia that seems unrelated to temporal or spectral coding in the auditory periphery. These results are consistent with the view that there are distinct mechanisms dedicated to processing resolved and unresolved harmonics in the general population, the former being altered in congenital amusia while the latter is spared. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Technical Note: spektr 3.0-A computational tool for x-ray spectrum modeling and analysis.

    PubMed

    Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H

    2016-08-01

    A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS by default (with TASMIP as an option) in 1 keV energy bins over beam energies of 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides the best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from those assumed in TASMICS or TASMIP, and to account for factors such as anode angle. The median percent difference in photon counts between a TASMICS and a TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage differences arising in the low- and high-energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported close agreement between measured and calculated spectra, with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open-source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
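    The added-filtration matching described above rests on Beer-Lambert attenuation of the binned spectrum. A minimal sketch follows; the attenuation coefficients in the test are placeholders, not NIST data, and the function is generic rather than spektr's implementation.

```python
import numpy as np

def add_filtration(spectrum, mu_mm, thickness_mm):
    """Attenuate a binned x-ray spectrum through a filter:
    N_out(E) = N_in(E) * exp(-mu(E) * t). `mu_mm` holds the filter's
    linear attenuation coefficient per energy bin (1/mm); real values
    would come from NIST XCOM-type tables."""
    spectrum = np.asarray(spectrum, dtype=float)
    mu_mm = np.asarray(mu_mm, dtype=float)
    return spectrum * np.exp(-mu_mm * thickness_mm)
```

    Because mu falls with energy, adding filtration preferentially removes low-energy photons (beam hardening), which is what the optimization exploits when matching measured tube output.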

  13. Compressive Optical Imaging Systems - Theory, Devices and Implementation

    DTIC Science & Technology

    2009-04-01

    Radon projections of the object distribution. However, more complex coding strategies have long been applied in imaging [5] and spectroscopy [6, 7]...the bottom right is yellow-green, and the bottom left is yellow-orange. Note that the broad spectral ranges have made the spectral patterns very... [the remainder of this excerpt is unrecoverable figure residue from panels of measured spectra]

  14. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  15. Reply on Comment on "High resolution coherence analysis between planetary and climate oscillations" by S. Holm

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2018-07-01

    Holm (ASR, 2018) claims that Scafetta (ASR 57, 2121-2135, 2016) is "irreproducible" because I would have left "undocumented" the values of two parameters (a reduced-rank index p and a regularization term δ) that he claimed are required by the Magnitude Squared Coherence Canonical Correlation Analysis (MSC-CCA). Yet my analysis did not require these two parameters. In fact: (1) using the MSC-CCA reduced-rank option neither changes the result nor was needed, since Scafetta (2016) statistically evaluated the significance of the coherence spectral peaks; (2) the analysis algorithm neither contains nor needed the regularization term δ. Herein I show that Holm could not replicate Scafetta (2016) because he used different analysis algorithms. In fact, although Holm claimed to be using MSC-CCA, for his Figs. 2-4 he used a MatLab code labeled "gcs_cca_1D.m" (see paragraph 2 of his Section 3), which Holm also modified, that implements a different methodology known as the Generalized Coherence Spectrum using the Canonical Correlation Analysis (GCS-CCA). This code is herein demonstrated to be unreliable under specific statistical circumstances such as those required to replicate Scafetta (2016). In contrast, the MSC-CCA method is stable and reliable. Moreover, Holm also failed to replicate my result in his Fig. 5, where he used the basic Welch MSC algorithm and erroneously equated it to MSC-CCA. Herein I clarify step-by-step how to proceed with the correct analysis, and I fully confirm the 95% significance of my results. I add data and codes to easily replicate my results.
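    The Welch-type magnitude-squared coherence at issue in this exchange can be computed in a few lines. This is the plain Welch MSC, not the canonical-correlation variants (MSC-CCA, GCS-CCA) the reply debates; the segment length and test signals are illustrative assumptions.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Welch magnitude-squared coherence between x and y.

    Cross- and auto-spectra are averaged over non-overlapping Hann-windowed
    segments; with a single segment the estimate is identically 1, so the
    series must span several segments.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    nbins = nperseg // 2 + 1
    Pxx = np.zeros(nbins)
    Pyy = np.zeros(nbins)
    Pxy = np.zeros(nbins, dtype=complex)
    for k in range(nseg):
        xs = x[k * nperseg:(k + 1) * nperseg] * win
        ys = y[k * nperseg:(k + 1) * nperseg] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx += np.abs(X) ** 2
        Pyy += np.abs(Y) ** 2
        Pxy += X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg)   # in cycles per sample
    return freqs, np.abs(Pxy) ** 2 / (Pxx * Pyy)
```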

  16. Optical network security using unipolar Walsh code

    NASA Astrophysics Data System (ADS)

    Sikder, Somali; Sarkar, Madhumita; Ghosh, Shila

    2018-04-01

    Optical code-division multiple-access (OCDMA) is considered a good technique for providing optical-layer security. Many research works have been published on enhancing optical network security through optical signal processing. This paper demonstrates the design of an AWG (arrayed waveguide grating) router-based optical network for spectral-amplitude-coding (SAC) OCDMA with Walsh codes, yielding a reconfigurable network codec that changes signature codes to guard against eavesdropping. We propose a code reconfiguration scheme that improves network access confidentiality by changing the signature codes through cyclic rotations. Each OCDMA network user is assigned a unique signature code to transmit information, and at the receiving end each receiver correlates its own signature pattern a(n) with the received pattern s(n). A signal arriving at its proper destination satisfies s(n) = a(n).
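    A minimal sketch of the signature-code machinery described above: Sylvester-Hadamard construction of unipolar Walsh codes, cyclic-rotation reconfiguration, and receiver correlation. The code length and user indices are illustrative; this is not the paper's AWG-router design.

```python
import numpy as np

def walsh_codes(order):
    """Unipolar Walsh codes of length 2**order via the Sylvester-Hadamard
    construction (+1/-1 entries mapped to 1/0)."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return (H > 0).astype(int)

def correlate(a, s):
    """Receiver-side correlation of signature a(n) with received s(n)."""
    return int(np.dot(a, s))

codes = walsh_codes(3)                  # eight length-8 unipolar codes
user = codes[3]
rotated = np.roll(user, 2)              # reconfigured signature via cyclic rotation
```

    For length-N unipolar Walsh codes (excluding the all-ones row), the in-phase autocorrelation is N/2 while the cross-correlation between distinct codes is N/4, which is what lets the intended receiver discriminate its own signature.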

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-in-lambda width. It is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.

  18. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead, for both 16-APSK and 64-APSK. The field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
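    The geometric shaping in the paper optimizes ring placement against the GMI; a plain ring-based APSK constellation generator shows the underlying structure. The 4+12 layout and the radius ratio below are DVB-S2-style illustrative values, not the paper's optimized shaping.

```python
import numpy as np

def apsk(ring_sizes, ring_radii):
    """APSK constellation: ring_sizes[k] equally spaced points on a circle
    of radius ring_radii[k], normalized to unit average symbol power."""
    pts = np.concatenate([
        r * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)
        for n, r in zip(ring_sizes, ring_radii)
    ])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # unit mean power

c16 = apsk([4, 12], [1.0, 2.57])   # 4+12 APSK with a DVB-S2-style radius ratio
```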

  19. Fast emission spectroscopy for monitoring condensed carbon in detonation products of oxygen-deficient high explosives

    NASA Astrophysics Data System (ADS)

    Poeuf, Sandra; Baudin, Gerard; Genetier, Marc; Lefrançois, Alexandre; Cinnayya, Ashwin; Laurent, Jacquet

    2017-06-01

    A new thermochemical code, SIAME, dedicated to the study of high explosives, is currently being developed. New experimental data on the expansion of detonation products are required to validate the code, with a particular focus on solid carbon products. Two different high explosive formulations are used: a melt-cast one (RDX/TNT 60/40 % wt.) and a pressed one (HMX/Viton 96/4 % wt.). The experimental setup allows the expansion of the products at pressures below 1 GPa in an inert medium (vacuum, helium, nitrogen, or PMMA). The results of fast dynamic emission spectroscopy measurements used to monitor the carbon detonation products are reported. Two spectral signatures are identified: the first is associated with ionized gases and the second with carbon thermal radiation. The experimental spectral lines are compared with simulated spectra. The trajectory of the shock wave front is continuously recorded with a high-frequency interferometer. Comparisons with numerical simulations using the hydrodynamic code Ouranos have been performed. These two measurements, using the different inert media, represent one step forward in the validation of the detonation-products equation of state implemented in the SIAME code.

  20. SPIN: An Inversion Code for the Photospheric Spectral Line

    NASA Astrophysics Data System (ADS)

    Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan

    2017-08-01

    Inversion codes are the most useful tools for inferring the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code adopts Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE) and employs a modified Levenberg-Marquardt algorithm for fitting. We describe the details and use of the SPIN code to invert spectro-polarimetric data. We also present the tests performed to validate the inversion code by comparing its results with those of other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code after its application to Hinode/SP data have been compared with the inverted results from other inversion codes.

  1. Impact of differences in the solar irradiance spectrum on surface reflectance retrieval with different radiative transfer codes

    NASA Technical Reports Server (NTRS)

    Staenz, K.; Williams, D. J.; Fedosejevs, G.; Teillet, P. M.

    1995-01-01

    Surface reflectance retrieval from imaging spectrometer data as acquired with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has become important for quantitative analysis. In order to calculate surface reflectance from remotely measured radiance, radiative transfer codes such as 5S and MODTRAN2 play an increasing role for removal of scattering and absorption effects of the atmosphere. Accurate knowledge of the exo-atmospheric solar irradiance (E(sub 0)) spectrum at the spectral resolution of the sensor is important for this purpose. The present study investigates the impact of differences in the solar irradiance function, as implemented in a modified version of 5S (M5S), 6S, and MODTRAN2, and as proposed by Green and Gao, on the surface reflectance retrieved from AVIRIS data. Reflectance measured in situ is used as a basis of comparison.

  2. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signaling enables better spectral efficiency at the expense of increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption for the interference, used when implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression for the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  3. Precision Stellar Characterization of FGKM Stars using an Empirical Spectral Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Samuel W.; Petigura, Erik A.; Von Braun, Kaspar, E-mail: syee@caltech.edu

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R ≈ 60,000), high signal-to-noise ratio (S/N ≈ 150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ∼M5–F1 (T_eff ≈ 3000–7000 K, R_⋆ ≈ 0.1–16 R_⊙). We also present "Empirical SpecMatch" (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100 K in effective temperature (T_eff), 15% in stellar radius (R_⋆), and 0.09 dex in metallicity ([Fe/H]). Because the code relies on empirical spectra, it performs particularly well for stars ∼K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70 K in T_eff, 10% in R_⋆, and 0.12 dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available.
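The matching step SpecMatch-Emp performs — comparing an unknown spectrum against every library spectrum and combining the parameters of the closest matches — can be illustrated in miniature. This is a sketch, not the published code: the function name, the plain chi-square metric, and the inverse-chi-square weighting are all assumptions.

```python
import numpy as np

def match_to_library(spectrum, library_spectra, library_params, n_best=3):
    """Estimate stellar parameters by chi-square matching against a library.

    spectrum        : 1-D flux array on the common wavelength scale
    library_spectra : (N, M) array, one row per library star
    library_params  : (N, P) array of known parameters (e.g. Teff, R*, [Fe/H])
    Returns a weighted average of the parameters of the n_best closest stars.
    """
    chi2 = np.sum((library_spectra - spectrum) ** 2, axis=1)
    best = np.argsort(chi2)[:n_best]
    weights = 1.0 / (chi2[best] + 1e-12)   # closer matches weigh more
    weights /= weights.sum()
    return weights @ library_params[best]
```

In the real tool the comparison is done in overlapping wavelength segments after shifting spectra to a common scale; the single global chi-square here is the simplest stand-in for that idea.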

  4. Synthetic Scene Generation of the Stennis V and V Target Range for the Calibration of Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Cao, Chang-Yong; Blonski, Slawomir; Ryan, Robert; Gasser, Jerry; Zanoni, Vicki

    1999-01-01

    The verification and validation (V&V) target range developed at Stennis Space Center is a useful test site for the calibration of remote sensing systems. In this paper, we present a simple algorithm for generating synthetic radiance scenes or digital models of this target range. The radiation propagation for the target in the solar reflective and thermal infrared spectral regions is modeled using the atmospheric radiative transfer code MODTRAN 4. The at-sensor, in-band radiance and spectral radiance for a given sensor at a given altitude are predicted. Software is developed to generate scenes with different spatial and spectral resolutions using the simulated at-sensor radiance values. The radiometric accuracy of the simulation is evaluated by comparing simulated radiance values with those acquired by AVIRIS. The results show that, in general, there is a good match between AVIRIS-measured and MODTRAN-predicted radiance values for the target, although some anomalies exist. Synthetic scenes provide a cost-effective way for in-flight validation of the spatial and radiometric accuracy of the data. Other applications include mission planning, sensor simulation, and trade-off analysis in sensor design.

  5. Accuracy assessment and characterization of x-ray coded aperture coherent scatter spectral imaging for breast cancer classification

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2017-01-01

    Although transmission-based x-ray imaging is the most commonly used imaging approach for breast cancer detection, it exhibits false negative rates higher than 15%. To improve cancer detection accuracy, x-ray coherent scatter computed tomography (CSCT) has been explored to potentially detect cancer with greater consistency. However, the 10-min scan duration of CSCT limits its possible clinical applications. The coded aperture coherent scatter spectral imaging (CACSSI) technique has been shown to reduce scan time by enabling single-angle imaging while providing high detection accuracy. Here, we use Monte Carlo simulations to test analytical optimization studies of the CACSSI technique, specifically for detecting cancer in ex vivo breast samples. An anthropomorphic breast tissue phantom was modeled, a CACSSI imaging system was virtually simulated to image the phantom, a diagnostic voxel classification algorithm was applied to all reconstructed voxels in the phantom, and receiver operating characteristic analysis of the voxel classification was used to evaluate and characterize the imaging system for a range of parameters that had been optimized in a prior analytical study. The results indicate that CACSSI is able to identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) in tissue samples with a cancerous-voxel identification area under the curve of 0.94 through a scan lasting less than 10 s per slice. These results show that coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue within ex vivo samples. Furthermore, the results indicate potential CACSSI imaging system configurations for implementation in subsequent imaging development studies. PMID:28331884
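The area under the ROC curve used to score the voxel classification reduces to the Mann-Whitney rank identity: AUC equals the probability that a randomly chosen positive voxel scores higher than a randomly chosen negative one. A minimal sketch of that computation (a generic helper, not part of the CACSSI code):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.94, as reported above, means 94% of cancerous/healthy voxel pairs are ranked correctly by the classifier score.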

  6. Science Goals to Requirements

    NASA Technical Reports Server (NTRS)

    Reuter, Dennis

    2015-01-01

    The presentation will be given at the 26th Annual Thermal Fluids Analysis Workshop (TFAWS 2015) hosted by the Goddard Space Flight Center (GSFC) Thermal Engineering Branch (Code 545): This short course will present the science goals for a variety of types of imaging and spectral measurements, the thermal requirements that these goals impose on the instruments designed to obtain the measurements, and some of the types of trades that can be made among instrument subsystems to ensure the required performance is maintained. Examples of thermal system evolution from initial concept to final implementation will be given for several actual systems.

  7. 16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.

    PubMed

    Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2012-05-21

    We transmit 160 × 100 G PDM RZ-16QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.

  8. Integrated Color Coding and Monochrome Multi-Spectral Fusion

    DTIC Science & Technology

    1999-01-01

    Suppl. pg. s1002. [Katz 1987] Katz et al., "Application of Spectral Filtering to Missile Detection Using Staring Sensors at MWIR Wavelengths," Proceedings of the IRIS Conf. on Targets, Backgrounds, and Discrimination, Feb. 1987. [Morrone 1989] Morrone, M.C., "Discrimination of spatial phase in…," (April) 1990, Orlando, FL. [Subramaniam 1997] Subramaniam and Biederman, "Effect of Contrast Reversal on Object Recognition," ARVO 1997, Investigative …

  9. Multigigahertz range-Doppler correlative processing in crystals

    NASA Astrophysics Data System (ADS)

    Harris, Todd L.; Babbitt, Wm. R.; Merkel, Kristian D.; Mohan, R. Krishna; Cole, Zachary; Olson, Andy

    2004-06-01

    Spectral-spatial holographic crystals have the unique ability to resolve fine spectral features (down to kilohertz) in an optical waveform over a broad bandwidth (over 10 gigahertz). This ability allows these crystals to record the spectral interference between spread-spectrum waveforms that are temporally separated by up to several microseconds. Such crystals can be used for performing radar range-Doppler processing with fine temporal resolution. An added feature of these crystals is the long upper-state lifetime of the absorbing rare-earth ions, which allows the coherent integration of multiple recorded spectra, yielding integration gain and significant processing-gain enhancement for selected code sets, as well as high-resolution Doppler processing. Parallel processing of over 10,000 beams could be achieved with a crystal the size of a sugar cube. Spectral-spatial holographic processing and coherent integration of coded waveforms at rates up to 2.5 gigabits per second and lengths up to 2047 bits have previously been reported. In this paper, we present the first demonstration of Doppler processing with these crystals. Doppler resolution down to a few hundred Hz for broadband radar signals can be achieved. The processing can be performed directly on signals modulated onto IF carriers (up to several gigahertz) without having to mix the signals down to baseband and without having to employ broadband analog-to-digital conversion.

  10. Rainbow correlation imaging with macroscopic twin beam

    NASA Astrophysics Data System (ADS)

    Allevi, Alessia; Bondani, Maria

    2017-06-01

    We present the implementation of a correlation-imaging protocol that exploits both the spatial and spectral correlations of macroscopic twin-beam states generated by parametric downconversion. In particular, the spectral resolution of an imaging spectrometer coupled to an EMCCD camera is used in a proof-of-principle experiment to encrypt and decrypt a simple code to be transmitted between two parties. In order to optimize the trade-off between visibility and resolution, we provide the characterization of the correlation images as a function of the spatio-spectral properties of twin beams generated at different pump power values.

  11. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250 × 100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single-parity-check coded modulation, and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC-based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate, or modulation format in a dispersion-uncompensated system.

  12. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
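The forward-adaptive AR prediction at the heart of MFCELP can be shown in miniature: fit prediction coefficients to a block by least squares (in a codec these would be sent as side information), then encode only the prediction residual, which has much less energy than the raw samples. A simplified one-dimensional sketch, not the authors' 3-D macroblock implementation:

```python
import numpy as np

def ar_predict_block(block, order=2):
    """Forward-adaptive AR prediction over one block.

    Fits AR coefficients to the block itself by least squares (the
    'forward adaptive' step) and returns (coefficients, residual).
    The residual is what a CELP-style coder would go on to encode.
    """
    x = np.asarray(block, float).ravel()
    # Regression matrix of past samples: column k holds x[n-1-k].
    X = np.column_stack(
        [x[order - k - 1: len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coeffs
    return coeffs, residual
```

For a block that truly follows a low-order AR model, the residual is near zero, which is exactly why predicting first and coding the residual pays off.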

  13. The effect of stellar evolution uncertainties on the rest-frame ultraviolet stellar lines of C IV and He II in high-redshift Lyman-break galaxies

    NASA Astrophysics Data System (ADS)

    Eldridge, John J.; Stanway, Elizabeth R.

    2012-01-01

    Young, massive stars dominate the rest-frame ultraviolet (UV) spectra of star-forming galaxies. At high redshifts (z > 2), these rest-frame UV features are shifted into the observed-frame optical, and a combination of gravitational lensing, deep spectroscopy and spectral stacking analysis allows the stellar population characteristics of these sources to be investigated. We use our stellar population synthesis code Binary Population and Spectral Synthesis (BPASS) to fit two strong rest-frame UV spectral features in published Lyman-break galaxy spectra, taking into account the effects of binary evolution on the stellar spectrum. In particular, we consider the effects of quasi-homogeneous evolution (arising from the rotational mixing of rapidly rotating stars), metallicity and the relative abundance of carbon and oxygen on the observed strengths of the He II λ1640 Å and C IV λλ1548, 1551 Å spectral lines. We find that Lyman-break galaxy spectra at z ˜ 2-3 are best fitted with moderately sub-solar metallicities and a depleted carbon-to-oxygen ratio. We also find that the spectra of the lowest metallicity sources are best fitted by model spectra in which the He II emission line is boosted by including the effect of massive stars being spun up during binary mass transfer, so that these rapidly rotating stars experience quasi-homogeneous evolution.

  14. PyQuant: A Versatile Framework for Analysis of Quantitative Mass Spectrometry Data*

    PubMed Central

    Mitchell, Christopher J.; Kim, Min-Sik; Na, Chan Hyun; Pandey, Akhilesh

    2016-01-01

    Quantitative mass spectrometry data necessitates an analytical pipeline that captures the accuracy and comprehensiveness of the experiments. Currently, data analysis is often coupled to specific software packages, which restricts the analysis to a given workflow and precludes a more thorough characterization of the data by other complementary tools. To address this, we have developed PyQuant, a cross-platform mass spectrometry data quantification application that is compatible with existing frameworks and can be used as a stand-alone quantification tool. PyQuant supports most types of quantitative mass spectrometry data including SILAC, NeuCode, 15N, 13C, or 18O and chemical methods such as iTRAQ or TMT and provides the option of adding custom labeling strategies. In addition, PyQuant can perform specialized analyses such as quantifying isotopically labeled samples where the label has been metabolized into other amino acids and targeted quantification of selected ions independent of spectral assignment. PyQuant is capable of quantifying search results from popular proteomic frameworks such as MaxQuant, Proteome Discoverer, and the Trans-Proteomic Pipeline in addition to several standalone search engines. We have found that PyQuant routinely quantifies a greater proportion of spectral assignments, with increases ranging from 25% to 45% in this study. Finally, PyQuant is capable of complementing spectral assignments between replicates to quantify ions missed because of lack of MS/MS fragmentation or that were omitted because of issues such as spectral quality or false discovery rates. This results in an increase of biologically useful data available for interpretation. In summary, PyQuant is a flexible mass spectrometry data quantification platform that is capable of interfacing with a variety of existing formats and is highly customizable, which permits easy configuration for custom analysis. PMID:27231314

  15. HO-CHUNK: Radiation Transfer code

    NASA Astrophysics Data System (ADS)

    Whitney, Barbara A.; Wood, Kenneth; Bjorkman, J. E.; Cohen, Martin; Wolff, Michael J.

    2017-11-01

    HO-CHUNK calculates radiative equilibrium temperature solution, thermal and PAH/vsg emission, scattering and polarization in protostellar geometries. It is useful for computing spectral energy distributions (SEDs), polarization spectra, and images.

  16. Evaluation of seismic design spectrum based on UHS implementing fourth-generation seismic hazard maps of Canada

    NASA Astrophysics Data System (ADS)

    Ahmed, Ali; Hasan, Rafiq; Pekau, Oscar A.

    2016-12-01

    Two recent developments have come to the forefront with reference to updating the seismic design provisions of codes: (1) publication of new seismic hazard maps for Canada by the Geological Survey of Canada (GSC), and (2) emergence of a new spectral format outdating the conventional standardized spectral format. The fourth-generation seismic hazard maps are based on enriched seismic data, enhanced knowledge of regional seismicity, and improved seismic hazard modeling techniques. The new maps are therefore more accurate and need to be incorporated into the next edition of the Canadian Highway Bridge Design Code (CHBDC), as has already been done for its building counterpart, the National Building Code of Canada (NBCC). In fact, the code writers expressed this intention in the commentary of CHBDC 2006. During their updating processes, the NBCC and the AASHTO Guide Specifications for LRFD Seismic Bridge Design (American Association of State Highway and Transportation Officials, Washington, 2009) lowered the probability level from 10% to 2% and from 10% to 5%, respectively. This study investigates five sets of hazard maps developed by the GSC, corresponding to 2%, 5%, and 10% probabilities of exceedance in 50 years. To support sound statistical inference, 389 Canadian cities are selected. The study shows the implications of the new hazard maps for the design process (i.e., the extent of magnification or reduction of the design forces).

  17. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so that a given code contains all lower-rate encodings of the same image. Therefore, precise bit rate control is achievable, and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
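The spectral decorrelation step — applying the image-dependent KLT across bands before 2-D coding of each component — can be sketched as follows. This is a minimal illustration; the function name and array layout are assumptions, and the real coder would also quantize and entropy-code the components.

```python
import numpy as np

def klt_decorrelate(cube):
    """Decorrelate the spectral bands of an image cube with the KLT.

    cube : (bands, rows, cols) array.
    Returns (transformed cube, eigenvector matrix V). Each output
    'band' is a principal component; most of the energy concentrates
    in the first few, which is what makes the subsequent 2-D coding
    of each component efficient.
    """
    b, r, c = cube.shape
    X = cube.reshape(b, -1).astype(float)
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]          # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    V = eigvecs[:, ::-1]                    # strongest component first
    return (V.T @ Xc).reshape(b, r, c), V
```

Because the KLT basis is computed from the image itself (unlike a fixed transform such as the DCT), it is optimal for that image's spectral correlation, at the cost of transmitting V as side information.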

  18. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or even improving temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering methods usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.

  19. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
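The measurement model here is the standard compressive-sensing one: each single-pixel sample is the inner product of the image with one pseudorandom code, and the image is recovered by a sparse solver. The letter does not say which recovery algorithm is used; the sketch below uses orthogonal matching pursuit as one standard choice, with the PRBS spectrum shaper abstracted into a generic mask matrix. All names are illustrative.

```python
import numpy as np

def single_pixel_measure(image, masks):
    """Each measurement is the total energy seen by the single detector
    after the scene is coded by one pseudorandom mask (one row of `masks`)."""
    return masks @ np.ravel(image)

def omp_reconstruct(y, masks, k):
    """Greedy orthogonal matching pursuit: repeatedly pick the mask column
    most correlated with the residual, then least-squares fit on that
    support. Assumes the image is (approximately) k-sparse."""
    A = masks.astype(float)
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With far fewer measurements than pixels, recovery hinges on the masks being sufficiently incoherent, which is exactly the role the pseudorandom bit sequence plays in the optical implementation.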

  20. Analysis of longwave radiation for the Earth-atmosphere system

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Venuru, C. S.; Subramanian, S. V.

    1983-01-01

    Accurate radiative transfer models are used to determine the upwelling atmospheric radiance and net radiative flux in the entire longwave spectral range. The validity of the quasi-random band model is established by comparing the results of this model with those of line-by-line formulations and with available theoretical and experimental results. Existing radiative transfer models and computer codes are modified to include various surface and atmospheric effects (surface reflection, nonequilibrium radiation, and cloud effects). The program is used to evaluate the radiative flux in clear atmosphere, provide sensitivity analysis of upwelling radiance in the presence of clouds, and determine the effects of various climatological parameters on the upwelling radiation and anisotropic function. Homogeneous and nonhomogeneous gas emissivities can also be evaluated under different conditions.

  1. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.

  2. Flexible digital modulation and coding synthesis for satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John

    1991-01-01

    An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q²PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.
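As a small illustration of one of the programmable formats listed (uncoded QPSK), the bit-to-symbol mapping can be written in a few lines. The Gray-coded mapping convention below is an assumption for illustration, not necessarily the FTMC's.

```python
import numpy as np

def qpsk_modulate(bits):
    """Map bit pairs to unit-energy, Gray-coded QPSK symbols.

    Bit 0 of each pair selects the sign of the in-phase component,
    bit 1 the sign of the quadrature component (0 -> +1, 1 -> -1).
    """
    b = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]
    q = 1 - 2 * b[:, 1]
    return (i + 1j * q) / np.sqrt(2)
```

With Gray coding, adjacent constellation points differ in only one bit, so the most likely symbol error costs a single bit error.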

  3. MUFFSgenMC: An Open Source MUon Flexible Framework for Spectral GENeration for Monte Carlo Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzidakis, Stylianos; Greulich, Christopher

    A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
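Generating a Monte Carlo source term from a muon spectrum amounts to inverse-CDF sampling of the energy distribution. The sketch below uses a simple power-law placeholder spectrum; it is not the parameterization MUFFSgenMC actually implements, and all names are illustrative.

```python
import numpy as np

def sample_muon_energies(n, e_min=1.0, e_max=100.0, gamma=2.7, seed=0):
    """Draw n energies from a power-law spectrum dN/dE ∝ E^-gamma on
    [e_min, e_max] by inverse-CDF sampling (placeholder spectrum only).

    The CDF of E^-gamma integrates to (E^a - e_min^a) / (e_max^a - e_min^a)
    with a = 1 - gamma, which inverts in closed form.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a = 1.0 - gamma
    return (e_min ** a + u * (e_max ** a - e_min ** a)) ** (1.0 / a)
```

A generator like this would then write the sampled energies (plus directions and positions) into the source-card format of the target code, e.g. an MCNP SDEF or a GEANT4 particle gun macro.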

  4. Wavelet Spectral Finite Elements for Wave Propagation in Composite Plates with Damages - Years 3-4

    DTIC Science & Technology

    2014-05-23

    The objective of the proposed efforts: formulated a wavelet spectral element for healthy composite plates and used the formulation to study Lamb wave interactions with holes and through-thickness defects in thin metal plates. Grant number FA23861214005. Distribution Code A: approved for public release.

  5. Autoregressive Methods for Spectral Estimation from Interferograms.

    DTIC Science & Technology

    1986-09-19

    Autoregressive Methods for Spectral Estimation from Interferograms, Richards, E. N., et al., 19 September 1986. Performed under subcontract to the Center for Space Engineering, Utah State University, Logan, UT 84322-4140. Scientific Report No. 17; monitoring organization: Air Force Geophysics Laboratory (AFGL).

  6. The Virtual Observatory Service TheoSSA: Establishing a Database of Synthetic Stellar Flux Standards I. NLTE Spectral Analysis of the DA-Type White Dwarf G191-B2B

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Werner, K.; Bohlin, R.; Kruk, J. W.

    2013-01-01

    Hydrogen-rich, DA-type white dwarfs are particularly suited as primary standard stars for flux calibration. State-of-the-art NLTE models consider opacities of species up to trans-iron elements and provide reliable synthetic stellar-atmosphere spectra to compare with observations. Aims. We will establish a database of theoretical spectra of stellar flux standards that are easily accessible via a web interface. Methods. In the framework of the Virtual Observatory, the German Astrophysical Virtual Observatory developed the registered service TheoSSA. It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code. In the case of the DA white dwarf G191-B2B, we demonstrate that the model reproduces not only its overall continuum shape but also the numerous metal lines exhibited in its ultraviolet spectrum. Results. TheoSSA is in operation and presently contains a variety of SEDs for DA-type white dwarfs. It will be extended in the near future and can host SEDs of all primary and secondary flux standards. The spectral analysis of G191-B2B has shown that our hydrostatic models reproduce the observations best at Teff = 60,000 ± 2,000 K and log g = 7.60 ± 0.05. We newly identified Fe VI, Ni VI, and Zn IV lines. For the first time, we determined the photospheric zinc abundance, with a logarithmic mass fraction of -4.89 (7.5 × solar). The abundances of He (upper limit), C, N, O, Al, Si, P, S, Fe, Ni, Ge, and Sn were precisely determined. Upper abundance limits of about 10% solar were derived for Ti, Cr, Mn, and Co. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of all astronomical data, and cross-calibration between different instruments, can be based on the same models, with SEDs calculated by different model-atmosphere codes remaining easy to compare.

  7. SCPS: a fast implementation of a spectral method for detecting protein families on a genome-wide scale.

    PubMed

    Nepusz, Tamás; Sasidharan, Rajkumar; Paccanaro, Alberto

    2010-03-09

    An important problem in genomics is the automatic inference of groups of homologous proteins from pairwise sequence similarities. Several approaches have been proposed for this task which are "local" in the sense that they assign a protein to a cluster based only on the distances between that protein and the other proteins in the set. It was shown recently that global methods such as spectral clustering have better performance on a wide variety of datasets. However, currently available implementations of spectral clustering methods mostly consist of a few loosely coupled Matlab scripts that assume a fair amount of familiarity with Matlab programming and hence they are inaccessible for large parts of the research community. SCPS (Spectral Clustering of Protein Sequences) is an efficient and user-friendly implementation of a spectral method for inferring protein families. The method uses only pairwise sequence similarities, and is therefore practical when only sequence information is available. SCPS was tested on difficult sets of proteins whose relationships were extracted from the SCOP database, and its results were extensively compared with those obtained using other popular protein clustering algorithms such as TribeMCL, hierarchical clustering and connected component analysis. We show that SCPS is able to identify many of the family/superfamily relationships correctly and that the quality of the obtained clusters as indicated by their F-scores is consistently better than all the other methods we compared it with. We also demonstrate the scalability of SCPS by clustering the entire SCOP database (14,183 sequences) and the complete genome of the yeast Saccharomyces cerevisiae (6,690 sequences). 
Besides the spectral method, SCPS also implements connected component analysis and hierarchical clustering, it integrates TribeMCL, it provides different cluster quality tools, it can extract human-readable protein descriptions using GI numbers from NCBI, it interfaces with external tools such as BLAST and Cytoscape, and it can produce publication-quality graphical representations of the clusters obtained, thus constituting a comprehensive and effective tool for practical research in computational biology. Source code and precompiled executables for Windows, Linux and Mac OS X are freely available at http://www.paccanarolab.org/software/scps.
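    The spectral approach SCPS implements can be illustrated with a small numpy sketch (not SCPS's actual code): embed the items using the leading eigenvectors of the normalized similarity matrix, then run a simple k-means in that embedding. The toy similarity matrix below stands in for pairwise sequence similarities.

```python
import numpy as np

def spectral_clusters(S, k, n_iter=50):
    """Cluster items given a symmetric similarity matrix S: embed with the
    top eigenvectors of the normalized similarity matrix, then run a simple
    k-means (farthest-point initialization) in that embedding."""
    d_inv_sqrt = 1.0 / np.sqrt(S.sum(axis=1))
    L = S * np.outer(d_inv_sqrt, d_inv_sqrt)       # D^-1/2 S D^-1/2
    _, vecs = np.linalg.eigh(L)                    # eigenvalues ascending
    X = vecs[:, -k:]                               # top-k eigenvectors
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centers = [X[0]]                               # farthest-point init
    for _ in range(1, k):
        dist = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(n_iter):                        # plain k-means iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy similarity matrix with two obvious protein "families"
S = np.full((6, 6), 0.05)
S[:3, :3] = 1.0
S[3:, 3:] = 1.0
labels = spectral_clusters(S, 2)
```

On this block-structured input, the first three items land in one cluster and the last three in the other.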

  8. Multi-band morpho-Spectral Component Analysis Deblending Tool (MuSCADeT): Deblending colourful objects

    NASA Astrophysics Data System (ADS)

    Joseph, R.; Courbin, F.; Starck, J.-L.

    2016-05-01

    We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT, which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and of their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model-independent and automated approach to separating objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and we show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method can be used for any deblending task in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html

  9. SYNTHETIC HYDROGEN SPECTRA OF OSCILLATING PROMINENCE SLABS IMMERSED IN THE SOLAR CORONA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zapiór, M.; Heinzel, P.; Oliver, R.

    We study the behavior of H α and H β spectral lines and their spectral indicators in an oscillating solar prominence slab surrounded by the solar corona, using an MHD model combined with a 1D radiative transfer code, with the line of sight taken perpendicular to the slab. We calculate the time variation of the Doppler shift, half-width, and maximum intensity of the H α and H β spectral lines for different modes of oscillation. We find a non-sinusoidal time dependence of some spectral parameters. Because the H α and H β spectral indicators behave differently for different modes, owing to differing optical depths of formation and different plasma parameter variations in time and along the slab, they may be used for prominence seismology, especially to derive the internal velocity field in prominences.
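    As an illustration of how one spectral indicator, the Doppler shift, is extracted from a synthetic line profile, here is a minimal numpy sketch (the wavelength grid, line width, and imposed shift are invented for the example):

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def doppler_shift(wavelength, intensity, lambda0):
    """Line-of-sight velocity (km/s) from the intensity-weighted centroid
    of an emission-line profile, a common spectral indicator."""
    centroid = np.sum(wavelength * intensity) / np.sum(intensity)
    return C_KMS * (centroid - lambda0) / lambda0

# Synthetic H-alpha profile shifted redward by 0.1 Angstrom
lam = np.linspace(6560.0, 6566.0, 601)
lambda0 = 6562.8
profile = np.exp(-((lam - (lambda0 + 0.1)) / 0.4) ** 2)
v = doppler_shift(lam, profile, lambda0)   # about +4.6 km/s
```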

  10. SPECTRAL CLASSIFICATION AND PROPERTIES OF THE O Vz STARS IN THE GALACTIC O-STAR SPECTROSCOPIC SURVEY (GOSSS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arias, Julia I.; Barbá, Rodolfo H.; Sabín-Sanjulián, Carolina

    On the basis of the Galactic O Star Spectroscopic Survey (GOSSS), we present a detailed systematic investigation of the O Vz stars. The currently used spectral classification criteria are rediscussed, and the Vz phenomenon is recalibrated through the addition of a quantitative criterion based on the equivalent widths of the He i λ 4471, He ii λ 4542, and He ii λ 4686 spectral lines. The GOSSS O Vz and O V populations resulting from the newly adopted spectral classification criteria are comparatively analyzed. The locations of the O Vz stars are probed, showing a concentration of the most extreme cases toward the youngest star-forming regions. The occurrence of the Vz spectral peculiarity in a solar-metallicity environment, as predicted by the fastwind code, is also investigated, confirming the importance of taking into account several processes for the correct interpretation of the phenomenon.

  11. Intercomparison of Radiation Codes in Climate Models (ICRCCM) Infrared (Clear-Sky) Line-by Line Radiative Fluxes (DB1002)

    DOE Data Explorer

    Arking, A.; Ridgeway, B.; Clough, T.; Iacono, M.; Fomin, B.; Trotsenko, A.; Freidenreich, S.; Schwarzkopf, D.

    1994-01-01

    The intercomparison of Radiation Codes in Climate Models (ICRCCM) study was launched under the auspices of the World Meteorological Organization and with the support of the U.S. Department of Energy to document differences in results obtained with various radiation codes and radiation parameterizations in general circulation models (GCMs). ICRCCM produced benchmark, longwave, line-by-line (LBL) fluxes that may be compared against each other and against models of lower spectral resolution. During ICRCCM, infrared fluxes and cooling rates for several standard model atmospheres with varying concentrations of water vapor, carbon dioxide, and ozone were calculated with LBL methods at resolutions of 0.01 cm-1 or higher. For comparison with other models, values were summed for the IR spectrum and given at intervals of 5 or 10 cm-1. This archive contains fluxes for ICRCCM-prescribed clear-sky cases. Radiative flux and cooling-rate profiles are given for specified atmospheric profiles for temperature, water vapor, and ozone-mixing ratios. The archive contains 328 files, including spectral summaries, formatted data files, and a variety of programs (i.e., C-shell scripts, FORTRAN codes, and IDL programs) to read, reformat, and display data. Collectively, these files require approximately 59 MB of disk space.
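    The reduction from line-by-line resolution to the 5 or 10 cm^-1 comparison intervals described above is a simple binned sum; a minimal numpy sketch (with an invented flat spectrum for illustration) might look like:

```python
import numpy as np

def bin_spectrum(wavenumber, flux, width=5.0):
    """Sum high-resolution line-by-line fluxes into coarse spectral
    intervals (e.g. 5 cm^-1 wide), returning the left edge of each bin."""
    edges = np.arange(wavenumber.min(), wavenumber.max() + width, width)
    idx = np.digitize(wavenumber, edges) - 1
    binned = np.zeros(len(edges) - 1)
    np.add.at(binned, idx, flux)       # accumulate each sample into its bin
    return edges[:-1], binned

nu = np.arange(0.0, 20.0, 0.01)        # 0.01 cm^-1 resolution grid
flux = np.ones_like(nu)                # flat spectrum, for illustration only
left_edges, binned = bin_spectrum(nu, flux)
```

Each 5 cm^-1 bin then collects 500 of the 0.01 cm^-1 samples.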

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillion, D.

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed arbitrarily and the gray-scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-of-lambda width, although it is not necessary to do this. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic, with nice values and proper scientific notation. Typically, spectral lines are curved; their curvature can be corrected by identifying points on these lines and fitting their shapes by polynomials.

  13. Spectral Cauchy characteristic extraction of strain, news and gravitational radiation flux

    NASA Astrophysics Data System (ADS)

    Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey

    2016-11-01

    We present a new approach for the Cauchy-characteristic extraction (CCE) of gravitational radiation strain, news function, and the flux of the energy-momentum, supermomentum and angular momentum associated with the Bondi-Metzner-Sachs asymptotic symmetries. In CCE, a characteristic evolution code takes numerical data on an inner worldtube supplied by a Cauchy evolution code, and propagates it outwards to obtain the space-time metric in a neighborhood of null infinity. The metric is first determined in a scrambled form in terms of coordinates determined by the Cauchy formalism. In prior treatments, the waveform is first extracted from this metric and then transformed into an asymptotic inertial coordinate system. This procedure provides the physically proper description of the waveform and the radiated energy but it does not generalize to determine the flux of angular momentum or supermomentum. Here we formulate and implement a new approach which transforms the full metric into an asymptotic inertial frame and provides a uniform treatment of all the radiation fluxes associated with the asymptotic symmetries. Computations are performed and calibrated using the spectral Einstein code.

  14. Fitting the spectral energy distributions of galaxies with CIGALE : Code Investigating GALaxy Emission

    NASA Astrophysics Data System (ADS)

    Giovannoli, E.; Buat, V.

    2013-03-01

    We use the code CIGALE (Code Investigating GALaxy Emission; Burgarella et al. 2005; Noll et al. 2009), which provides physical information about galaxies by fitting their UV (ultraviolet)-to-IR (infrared) spectral energy distribution (SED). CIGALE is based on the use of a UV-optical stellar SED plus a dust IR-emitting component. We study a sample of 136 Luminous Infrared Galaxies (LIRGs) at z˜0.7 in the ECDF-S previously studied in Giovannoli et al. (2011). We focus on how well the empirical Dale & Helou (2002) templates reproduce the observed SEDs of the LIRGs. Fig. 1 shows the total infrared luminosity (L IR) provided by CIGALE using the 64 templates (x axis) and using 2 templates (y axis) representative of the whole sample. Despite the larger dispersion when only 1 or 2 Herschel data points are available, the agreement between both values is good, with Δ log L IR = 0.0013 ± 0.045 dex. We conclude that 2 IR SEDs alone can be used to determine the L IR of LIRGs at z˜0.7 in an SED-fitting procedure.
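    The core of deriving L IR from a template fit, scaling a template to observed photometry and integrating it, can be sketched as follows. This is a simplified single-band illustration, not CIGALE's actual fitting procedure; the grid, template, and observation are all invented.

```python
import numpy as np

def fit_lir(template_nu, template_lnu, obs_nu, obs_lnu):
    """Scale an IR template so it passes through one observed photometric
    point, then integrate the scaled template (trapezoidal rule) to get
    a total IR luminosity. Units and quantities are illustrative."""
    scale = obs_lnu / np.interp(obs_nu, template_nu, template_lnu)
    integral = np.sum(0.5 * (template_lnu[1:] + template_lnu[:-1])
                      * np.diff(template_nu))
    return scale * integral

nu = np.linspace(1.0, 10.0, 10)        # hypothetical frequency grid
template = np.ones(10)                 # flat template; integral = 9
lir = fit_lir(nu, template, 5.0, 2.0)  # observation sits 2x above template
```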

  15. Actinic Flux Calculations: A Model Sensitivity Study

    NASA Technical Reports Server (NTRS)

    Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    calculate direct and diffuse surface irradiance and actinic flux (downwelling (2π) and total (4π)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative-transfer flux calculations for a unit ETS (i.e., atmospheric transmittance), together with a numerical interpolation technique for the constituents' vertical profiles, is better than 1% for SZA less than 70° and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra, and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to the calculation of selected photolysis rate coefficients.

  16. How many records should be used in ASCE/SEI-7 ground motion scaling procedure?

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2012-01-01

    U.S. national building codes refer to the ASCE/SEI-7 provisions for selecting and scaling ground motions for use in nonlinear response history analysis of structures. Because the limiting values for the number of records in the ASCE/SEI-7 are based on engineering experience, this study examines the required number of records statistically, such that the scaled records provide accurate, efficient, and consistent estimates of “true” structural responses. Based on elastic–perfectly plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI-7 scaling procedure is applied to 480 sets of ground motions; the number of records in these sets varies from three to ten. As compared to benchmark responses, it is demonstrated that the ASCE/SEI-7 scaling procedure is conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides more accurate estimate of the responses. Selecting records based on their spectral shape and design spectral acceleration increases the accuracy and efficiency of the procedure.
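    A single-record illustration of spectrum scaling: the factor below lifts a record's response spectrum so it is nowhere below a target design spectrum over a period range. Note that the actual ASCE/SEI-7 rule constrains the mean spectrum of the selected record set, so this is only a simplified sketch with invented spectra.

```python
import numpy as np

def scale_factor(record_sa, target_sa):
    """Smallest factor that raises a record's response spectrum so it is
    nowhere below the target spectrum over the periods considered."""
    return float(np.max(target_sa / record_sa))

periods = np.linspace(0.2, 3.0, 15)    # period range of interest, seconds
target = 1.0 / periods                 # hypothetical design spectrum
record = 0.8 / periods                 # record spectrum, 20% low everywhere
f = scale_factor(record, target)
```

Here the record is uniformly 20% below the target, so the required factor is 1.25.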

  17. Instabilities and Turbulence Generation by Pick-Up Ion Distributions in the Outer Heliosheath

    NASA Astrophysics Data System (ADS)

    Weichman, K.; Roytershteyn, V.; Delzanno, G. L.; Pogorelov, N.

    2017-12-01

    Pick-up ions (PUIs) play a significant role in the dynamics of the heliosphere. One problem that has attracted significant attention is the stability of ring-like distributions of PUIs and the electromagnetic fluctuations that could be generated by PUI distributions. For example, PUI stability is relevant to theories attempting to identify the origins of the IBEX ribbon. PUIs have previously been investigated by linear stability analysis of model (e.g. Gaussian) rings and corresponding computer simulations. The majority of these simulations utilized particle-in-cell methods which suffer from accuracy limitations imposed by the statistical noise associated with representing the plasma by a relatively small number of computational particles. In this work, we utilize highly accurate spectral Vlasov simulations conducted using the fully kinetic implicit code SPS (Spectral Plasma Solver) to investigate the PUI distributions inferred from a global heliospheric model (Heerikhuisen et al., 2016). Results are compared with those obtained by hybrid and fully kinetic particle-in-cell methods.

  18. Global spectral irradiance variability and material discrimination at Boulder, Colorado.

    PubMed

    Pan, Zhihong; Healey, Glenn; Slater, David

    2003-03-01

    We analyze 7,258 global spectral irradiance functions over 0.4-2.2 microm that were acquired over a wide range of conditions at Boulder, Colorado, during the summer of 1997. We show that low-dimensional linear models can be used to capture the variability in these spectra over both the visible and the 0.4-2.2 microm spectral ranges. Using a linear model, we compare the Boulder data with the previous study of Judd et al. [J. Opt. Soc. Am. 54, 1031 (1964)] over the visible wavelengths. We also examine the agreement of the Boulder data with a spectral database generated by using the MODTRAN 4.0 radiative transfer code. We use a database of 223 minerals to consider the effect of the spectral variability in the global spectral irradiance functions on hyperspectral material identification. We show that the 223 minerals can be discriminated accurately over the variability in the Boulder data with subspace projection techniques.
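    A low-dimensional linear model of the kind described, a mean spectrum plus leading principal components, can be sketched with numpy. The synthetic "irradiance" spectra below are generated from two underlying factors, so a two-component model reconstructs them essentially exactly; real daylight spectra would leave a small residual.

```python
import numpy as np

def linear_model(spectra, n_basis):
    """Fit a low-dimensional linear model (mean spectrum plus leading
    principal components) to a set of measured spectra."""
    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, Vt[:n_basis]

def reconstruct(spectrum, mean, basis):
    """Project a spectrum onto the model basis and reconstruct it."""
    coeffs = basis @ (spectrum - mean)
    return mean + coeffs @ basis

# Synthetic "irradiance" spectra driven by two underlying factors
rng = np.random.default_rng(1)
wav = np.linspace(0.4, 2.2, 100)                 # wavelength grid, microns
a = rng.uniform(0.5, 1.5, (50, 1))
b = rng.uniform(0.0, 0.5, (50, 1))
spectra = a * np.exp(-wav) + b * wav

mean, basis = linear_model(spectra, 2)
err = np.abs(reconstruct(spectra[0], mean, basis) - spectra[0]).max()
```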

  19. Hyper-Spectral Synthesis of Active OB Stars Using GLaDoS

    NASA Astrophysics Data System (ADS)

    Hill, N. R.; Townsend, R. H. D.

    2016-11-01

    In recent years there has been considerable interest in using graphics processing units (GPUs) to perform scientific computations that have traditionally been handled by central processing units (CPUs). However, there is one area where the scientific potential of GPUs has been overlooked: computer graphics, the task they were originally designed for. Here we introduce GLaDoS, a hyper-spectral code which leverages the graphics capabilities of GPUs to synthesize spatially and spectrally resolved images of complex stellar systems. We demonstrate how GLaDoS can be applied to calculate observables for various classes of stars, including systems with inhomogeneous surface temperatures and contact binaries.

  20. Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations

    NASA Astrophysics Data System (ADS)

    Tritsis, A.; Yorke, H.; Tassis, K.

    2018-05-01

    We describe PyRaTE, a new non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced by all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and we present case studies using hydrochemical simulations. The code will be released for public use.
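    For a uniform medium the escape probability has the familiar closed form beta = (1 - e^-tau)/tau; a minimal sketch of this scalar kernel follows (PyRaTE additionally evaluates tau along many directions and geometries, which is not modeled here):

```python
import numpy as np

def escape_probability(tau):
    """Photon escape probability beta = (1 - exp(-tau)) / tau for a
    uniform medium: -> 1 when optically thin, ~ 1/tau when thick."""
    tau = np.asarray(tau, dtype=float)
    safe = np.clip(tau, 1e-12, None)            # avoid 0/0 at tau = 0
    return np.where(tau < 1e-12, 1.0, (1.0 - np.exp(-safe)) / safe)

beta_thin = float(escape_probability(0.0))      # optically thin limit -> 1
beta_thick = float(escape_probability(10.0))    # optically thick, ~ 0.1
```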

  1. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, of around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
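    The Walsh-Hadamard codes and the matched/mismatched correlation behavior the experiment exploits can be reproduced numerically; below is a small numpy sketch (the optical spectral-phase encoding and interferometric detection are of course not modeled):

```python
import numpy as np

def walsh_codes(order):
    """Rows of a 2**order Sylvester-type Hadamard matrix: mutually
    orthogonal bipolar (+1/-1) codes."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

def correlate(a, b):
    """Normalized correlation: 1.0 for matched codes, 0.0 for mismatched."""
    return abs(int(a @ b)) / len(a)

H = walsh_codes(3)                 # eight 8-bit codes, as in the experiment
match = correlate(H[3], H[3])      # -> 1.0
mismatch = correlate(H[3], H[5])   # -> 0.0 (orthogonal rows)
```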

  2. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work, an effort has been made to simultaneously estimate plasma parameters (electron density, electron temperature, ground-state atom density, ground-state ion density, and metastable-state density) from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition in the analysis. Under this condition, the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species in the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.
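    The least-squares idea, choosing the parameter value whose CR-model prediction best matches the observed line intensities, reduces in the one-parameter case to a simple grid search. In the sketch below the model line ratio is an invented monotonic function of temperature, not ADAS PEC data:

```python
import numpy as np

def fit_te(observed_ratio, te_grid, model_ratio):
    """Least-squares pick of the electron temperature whose modeled
    line-intensity ratio best matches the observed one."""
    return te_grid[np.argmin((model_ratio - observed_ratio) ** 2)]

te_grid = np.linspace(1.0, 20.0, 191)       # candidate temperatures, eV
model_ratio = 0.1 + 0.05 * te_grid          # invented monotonic ratio(Te)
te = fit_te(0.6, te_grid, model_ratio)      # observed ratio 0.6 -> Te = 10 eV
```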

  3. Labview Interface Concepts Used in NASA Scientific Investigations and Virtual Instruments

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Parker, Bradford H.; Rapchun, David A.; Jones, Hollis H.; Cao, Wei

    2001-01-01

    This article provides an overview of several software control applications developed for NASA using LabVIEW. The applications covered here include (1) an Ultrasonic Measurement System for nondestructive evaluation of advanced structural materials, (2) an X-ray Spectral Mapping System for characterizing the quality and uniformity of developing photon detector materials, (3) a Life Testing System for these same materials, and (4) the instrument panel for an aircraft-mounted Cloud Absorption Radiometer that measures the light scattered by clouds in multiple spectral bands. Many of the software interface concepts employed are explained. Panel layout and block diagram (code) strategies for each application are described. In particular, some of the more unique features of the applications' interfaces and source code are highlighted. This article assumes that the reader has a beginner-to-intermediate understanding of LabVIEW methods.

  4. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
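    The regularization-based linear inversion step described above can be sketched as Tikhonov-regularized least squares. The sensing matrix A below is random, standing in for the calibrated spatial-spectral response of the diffractive filter, and the measurement is noiseless for simplicity:

```python
import numpy as np

def invert_spectrum(A, y, lam=1e-6):
    """Tikhonov-regularized inversion x = (A^T A + lam I)^-1 A^T y,
    recovering spectral-band values x from coded sensor measurements y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))     # stand-in for the calibrated response
x_true = rng.random(16)               # "true" spectrum at one image point
y = A @ x_true                        # noiseless coded measurement
x_hat = invert_spectrum(A, y)
err = np.abs(x_hat - x_true).max()
```

With noise present, the regularization weight lam trades reconstruction bias against noise amplification.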

  5. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively.
    The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical-optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.

  6. Terascale spectral element algorithms and implementations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, P. F.; Tufo, H. M.

    1999-08-17

    We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.

  7. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance, are discussed. Error performance is analyzed for a memoryless additive channel based on various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, the difference in performance between suboptimum multistage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  8. Identification of spilled oils by NIR spectroscopy technology based on KPCA and LSSVM

    NASA Astrophysics Data System (ADS)

    Tan, Ailing; Bi, Weihong

    2011-08-01

    Oil spills on the sea surface are seen relatively often with the development of petroleum exploitation and transportation at sea. Oil spills are a great threat to the marine environment and the ecosystem, so oil pollution in the ocean has become an urgent topic in environmental protection. To develop oil spill accident treatment programs and track the source of spilled oils, a novel qualitative identification method combining Kernel Principal Component Analysis (KPCA) and Least Squares Support Vector Machine (LSSVM) is proposed. The proposed method uses a Fourier-transform NIR spectrophotometer to collect the NIR spectral data of simulated gasoline, diesel fuel and kerosene oil spill samples and applies pretreatments to the original spectra. We use the KPCA algorithm, an extension of Principal Component Analysis (PCA) based on kernel methods, to extract nonlinear features of the preprocessed spectra. Support Vector Machines (SVMs) are a powerful methodology for solving spectral classification tasks in chemometrics. LSSVMs are reformulations of standard SVMs that lead to solving a system of linear equations. An LSSVM multiclass classification model was therefore designed using the Error-Correcting Output Code (ECOC) method, which borrows the idea of error-correcting codes used for correcting bit errors in transmission channels. The most common and reliable approach to parameter selection is to decide on parameter ranges and then do a grid search over the parameter space to find the optimal model parameters. To test the proposed method, 375 spilled oil samples of unknown type were selected for study. The optimal model has the best identification capability, with an accuracy of 97.8%. Experimental results show that the proposed KPCA-plus-LSSVM qualitative analysis method for near-infrared spectroscopy gives good recognition results and could serve as a new method for rapid identification of spilled oils.
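    The KPCA feature-extraction step can be sketched with numpy alone (RBF kernel; the LSSVM classifier and ECOC coding are omitted, and the two tight point clusters below stand in for preprocessed spectra of two oil types):

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: center the kernel matrix, take its
    leading eigenvectors, and return the projections of the input data."""
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas                           # projected training data

# Two tight clusters standing in for two classes of preprocessed spectra
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 5.0)])
Z = kpca(X, 2)
```

The first kernel principal component separates the two clusters by sign, leaving a linearly separable representation for the downstream classifier.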

  9. Jet Noise Diagnostics Supporting Statistical Noise Prediction Methods

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2006-01-01

    The primary focus of my presentation is the development of the jet noise prediction code JeNo with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code, based upon the Lilley acoustic analogy. Our approach uses time-average 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity. NASA has been investing in development of statistical jet noise prediction tools because these seem to fit the middle ground that allows enough flexibility and fidelity for jet noise source diagnostics while having reasonable computational requirements. These tools rely on Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) solutions as input for computing far-field spectral directivity using an acoustic analogy. There are many ways acoustic analogies can be created, each with a series of assumptions and models, many often taken unknowingly. And the resulting prediction can be easily reverse-engineered by altering the models contained within. However, only an approach which is mathematically sound, with assumptions validated and modeled quantities checked against direct measurement will give consistently correct answers. Many quantities are modeled in acoustic analogies precisely because they have been impossible to measure or calculate, making this requirement a difficult task. The NASA team has spent considerable effort identifying all the assumptions and models used to take the Navier-Stokes equations to the point of a statistical calculation via an acoustic analogy very similar to that proposed by Lilley. Assumptions have been identified and experiments have been developed to test these assumptions. In some cases this has resulted in assumptions being changed. 
    Beginning with the CFD used as input to the acoustic analogy, models for turbulence closure used in RANS CFD codes have been explored and compared against measurements of mean and rms velocity statistics over a range of jet speeds and temperatures. Models for flow parameters used in the acoustic analogy, most notably the space-time correlations of velocity, have been compared against direct measurements and modified to better fit the observed data. These measurements have been extremely challenging for hot, high-speed jets, and represent a sizeable investment in instrumentation development. As an intermediate check that the analysis is predicting the physics intended, phased arrays have been employed to measure source distributions for a wide range of jet cases. And finally, careful far-field spectral directivity measurements have been taken for final validation of the prediction code. Examples of each of these experimental efforts will be presented. The main result of these efforts is a noise prediction code, named JeNo, which is in mid-development. JeNo is able to consistently predict spectral directivity, including aft-angle directivity, for subsonic cold jets of most geometries. Current development on JeNo is focused on extending its capability to hot jets, requiring inclusion of a previously neglected second source associated with thermal fluctuations. A secondary result of the intensive experimentation is the archiving of various flow statistics applicable to other acoustic analogies and to development of time-resolved prediction methods. These will be of lasting value as we look ahead at future challenges to the aeroacoustic experimentalist.

  10. Fundamentals of Free-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Moision, Bruce; Erkmen, Baris

    2012-01-01

    Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
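The tradeoff stated in (a), high photon efficiency at the cost of spectral efficiency, can be illustrated with a toy calculation for M-ary pulse-position modulation (PPM), a common intensity modulation for photon-counting receivers. This is a hedged sketch under idealized assumptions (noiseless, uncoded, one pulse per symbol), not a model from the presentation itself:

```python
import math

def ppm_efficiencies(M, photons_per_pulse=1.0):
    """Photon and spectral efficiency of idealized M-ary PPM with photon counting.

    Illustrative model only: one pulse occupies 1 of M time slots and carries
    log2(M) bits; noise, coding overhead, and detector losses are ignored.
    """
    bits = math.log2(M)
    photon_eff = bits / photons_per_pulse   # bits per detected photon
    spectral_eff = bits / M                 # bits per time slot (bandwidth unit)
    return photon_eff, spectral_eff

for M in (2, 16, 1024):
    pe, se = ppm_efficiencies(M)
    print(f"M={M:5d}: {pe:6.2f} bits/photon, {se:.4f} bits/slot")
```

Growing M drives photon efficiency up without bound while spectral efficiency collapses toward zero, which is the sub-optimality the abstract refers to.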

  11. ALDAS user's manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.

    1991-01-01

    The Acoustic Laboratory Data Acquisition System (ALDAS) is an inexpensive, transportable means to digitize and analyze data. The system is based on the Macintosh II family of computers, with internal analog-to-digital boards providing four channels of simultaneous data acquisition at rates up to 50,000 samples/sec. The ALDAS software package, written for use with rotorcraft acoustics, performs automatic acoustic calibration of channels, data display, two types of cycle averaging, and spectral amplitude analysis. The program can use data obtained from internal analog-to-digital conversion, or discrete external data imported in ASCII format. All aspects of ALDAS can be improved as new hardware becomes available and new features are introduced into the code.
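As a rough illustration of the spectral amplitude analysis such a package performs, the NumPy sketch below computes a single-sided amplitude spectrum at the 50,000 samples/sec rate quoted in the abstract. This is a generic sketch, not ALDAS code:

```python
import numpy as np

FS = 50_000  # ALDAS maximum sample rate, samples/sec

def amplitude_spectrum(x, fs=FS):
    """Single-sided amplitude spectrum of a real-valued signal."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) / n
    spec[1:-1] *= 2.0                       # fold in negative frequencies
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Test tone placed exactly on FFT bin 20 to avoid spectral leakage
f0 = FS * 20 / 1024                          # 976.5625 Hz
t = np.arange(1024) / FS
x = 2.0 * np.sin(2 * np.pi * f0 * t)
freqs, spec = amplitude_spectrum(x)
# The peak recovers the tone's amplitude (2.0) at its frequency
```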

  12. Optimization of compressive 4D-spatio-spectral snapshot imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method in the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. Then, these elemental images, with one-dimensional spectral information and different perspectives, are captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  13. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for a long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced by a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach numerically integrates the elastic spectral distributions with adequate resolution in the energy domain without affecting the run-time of the code, and it is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given, along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
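The truncation problem described above, a narrow spectral distribution slipping between the points of a coarse energy grid, can be reproduced with a toy numerical experiment. The Gaussian stand-in, energy range, and grid sizes below are illustrative assumptions, not quantities from HZETRN:

```python
import numpy as np

def narrow_spectrum(E, E0=100.0, width=0.05):
    """Toy stand-in for a narrow elastic spectral distribution (unit area)."""
    return np.exp(-0.5 * ((E - E0) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def trapz_on_grid(n_points, Emin=0.0, Emax=200.0):
    """Trapezoidal integral of the spectrum on a uniform energy grid."""
    E = np.linspace(Emin, Emax, n_points)
    f = narrow_spectrum(E)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(E)) / 2.0)

coarse = trapz_on_grid(100)       # ~2-unit spacing: the peak falls between points
fine = trapz_on_grid(2_000_000)   # resolves the 0.05-wide peak; integral is ~1
print(f"coarse grid: {coarse:.4f}   fine grid: {fine:.6f}")
```

The coarse grid loses essentially all of the unit-area peak, mirroring why a fixed grid either wastes points everywhere or truncates the narrow distributions.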

  14. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  15. A new approach for modeling composite materials

    NASA Astrophysics Data System (ADS)

    Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.

    2013-03-01

    The increasing use of composite materials stems from the ability to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of typical three-dimensional random composite nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influence of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.

  16. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
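The synchronization metric exploits the defining property of a Golay complementary pair: the sums of the two sequences' aperiodic (non-periodic) autocorrelations cancel at every nonzero lag, leaving a single sharp peak. A sketch of the standard recursive construction follows; the pair length is illustrative and this is not the paper's exact training sequence:

```python
import numpy as np

def golay_pair(n_log2):
    """Binary (+1/-1) Golay complementary pair of length 2**n_log2,
    built with the standard recursion a' = [a b], b' = [a -b]."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n_log2):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x):
    """Aperiodic (linear) autocorrelation at all lags."""
    return np.correlate(x, x, mode="full")

a, b = golay_pair(6)                       # length-64 pair
acf_sum = aperiodic_acf(a) + aperiodic_acf(b)
# acf_sum equals 2N at zero lag and exactly 0 at every other lag,
# which is why the correlator output gives an unambiguous start point.
```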

  17. Jet Measurements for Development of Jet Noise Prediction Tools

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2006-01-01

    The primary focus of my presentation is the development of the jet noise prediction code JeNo with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code, based upon the Lilley acoustic analogy. Our approach uses time-average 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity.

  18. Colorimetric analysis of outdoor illumination across varieties of atmospheric conditions.

    PubMed

    Peyvandi, Shahram; Hernández-Andrés, Javier; Olmo, F J; Nieves, Juan Luis; Romero, Javier

    2016-06-01

    Solar illumination at ground level is subject to a good deal of change in spectral and colorimetric properties. With an aim of understanding the influence of atmospheric components and phases of daylight on colorimetric specifications of downward radiation, more than 5,600,000 spectral irradiance functions of daylight, sunlight, and skylight were simulated by the radiative transfer code, SBDART [Bull. Am. Meteorol. Soc. 79, 2101 (1998)], under the atmospheric conditions of clear sky without aerosol particles, clear sky with aerosol particles, and overcast sky. The interquartile range of the correlated color temperatures (CCT) for daylight indicated values from 5712 to 7757 K among the three atmospheric conditions. A minimum CCT of ∼3600 K was found for daylight when aerosol particles are present in the atmosphere. Our analysis indicated that hemispheric daylight with CCT less than 3600 K may be observed in rare conditions in which the level of aerosol is high in the atmosphere. In an atmosphere with aerosol particles, we also found that the chromaticity of daylight may shift along the green-purple direction of the Planckian locus, with a magnitude depending on the spectral extinction by aerosol particles and the amount of water vapor in the atmosphere. The data analysis showed that an extremely high value of CCT, in an atmosphere without aerosol particles, for daylight and skylight at low sun, is mainly due to the effect of the Chappuis absorption band of ozone at ∼600 nm. In this paper, we compare our data with well-known observations from previous research, including the ones used by the CIE to define natural daylight illuminants.
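For readers who want to reproduce CCT figures like those quoted above from chromaticity coordinates, McCamy's closed-form cubic is a widely used approximation. It is a standard formula, not the method used in this paper, and its accuracy degrades far from the Planckian locus:

```python
def mccamy_cct(x, y):
    """McCamy's approximation of correlated color temperature (kelvin)
    from CIE 1931 (x, y) chromaticity. Reasonable near the Planckian
    locus over roughly 2000-12500 K; errors grow off-locus."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 chromaticity should come out near 6500 K
print(f"{mccamy_cct(0.3127, 0.3290):.0f} K")
```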

  19. Effect of the atmosphere on the color coordinates of sunlit surfaces

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Viljoen, Johan W.

    2016-02-01

    Aerosol attenuation in the atmosphere has a relatively weak spectral variation compared to molecular absorption. However, the solar spectral irradiance differs considerably for the sun at high zenith angles versus the sun at low zenith angles. The perceived color of a sunlit object depends on the object's spectral reflectivity as well as the irradiance spectrum. The color coordinates of the sunlit object, hence also the color balance in a scene, shift with changes in the solar zenith angle. The work reported here does not claim accurate color measurement. With proper calibration mobile phones may provide reasonably accurate color measurement, but the mobile phones used for taking these pictures and videos are not scientific instruments and were not calibrated. The focus here is on the relative shift of the observed colors, rather than absolute color. The work in this paper entails the theoretical analysis of color coordinates of surfaces and how they change for different colored surfaces. Then follow three separate investigations: (1) Analysis of a number of detailed atmospheric radiative transfer code (Modtran) runs to show from the theory how color coordinates should change. (2) Analysis of a still image showing how the colors of two sample surfaces vary between sunlit and shaded areas. (3) Time-lapse video recordings showing how the color coordinates of a few surfaces change as a function of time of day. Both the theoretical and the experimental work show distinct shifts in color as a function of atmospheric conditions. The Modtran simulations demonstrate the effect from clear atmospheric conditions (no aerosol) to low visibility conditions (5 km visibility). Even under moderate atmospheric conditions the effect was surprisingly large. The experimental work indicated significant shifts during the diurnal cycle.

  20. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
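The orchestration pattern the article describes (multi-threaded execution of pipeline steps with every step logged) can be sketched in miniature with the Python standard library. `process_channel` is a hypothetical stand-in for one per-channel step, not a FARSIGHT function:

```python
import concurrent.futures
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

def process_channel(channel_id):
    """Hypothetical stand-in for one per-channel step
    (mosaicking, artifact correction, segmentation)."""
    log.info("processing channel %d", channel_id)
    return channel_id, "segmented"

# The real server script dispatches far larger jobs over many cores;
# the dispatch-and-log pattern is the same.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = dict(pool.map(process_channel, range(5)))
```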

  1. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.

    2017-11-01

    This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI). At the permissible BER of 10^-9, the 2-D MDW system with an APD receiver showed a minimum effective received power (Psr) of -71 dBm, compared with -61 dBm for a PIN receiver. The results show that 2-D MDW (APD) achieves better performance, reaching the same BER over a longer optical fiber length and with less received power (Psr). The BER results also confirm the capability of the MDW code to suppress PIIN and MAI.

  2. The amplitude effects of sedimentary basins on through-passing surface waves

    NASA Astrophysics Data System (ADS)

    Feng, L.; Ritzwoller, M. H.; Pasyanos, M.

    2016-12-01

    Understanding the effect of sedimentary basins on through-passing surface waves is essential in many aspects of seismology, including the estimation of the magnitude of natural and anthropogenic events, the study of the attenuation properties of Earth's interior, and the analysis of ground motion as part of seismic hazard assessment. In particular, knowledge of the physical causes of amplitude variations is important in the application of the Ms:mb discriminant of nuclear monitoring. Our work addresses two principal questions, both in the period range between 10 s and 20 s. The first question is: In what respects can surface wave propagation through 3D structures be simulated as 2D membrane waves? This question is motivated by our belief that surface wave amplitude effects downstream from sedimentary basins result predominantly from elastic focusing and defocusing, which we understand as analogous to the effect of a lens. To the extent that this understanding is correct, 2D membrane waves will approximately capture the amplitude effects of focusing and defocusing. We address this question by applying the 3D simulation code SW4 (a node-based finite-difference code for 3D seismic wave simulation) and the 2D code SPECFEM2D (a spectral element code for 2D seismic wave simulation). Our results show that for surface waves propagating downstream from 3D sedimentary basins, amplitude effects are mostly caused by elastic focusing and defocusing, which is modeled accurately as a 2D effect. However, if the epicentral distance is small, higher modes may contaminate the fundamental mode, which may result in large errors in the 2D membrane wave approximation.
The second question is: Are observations of amplitude variations across East Asia following North Korean nuclear tests consistent with simulations of amplitude variations caused by elastic focusing/defocusing through a crustal reference model of China (Shen et al., A seismic reference model for the crust and uppermost mantle beneath China from surface wave dispersion, Geophys. J. Int., 206(2), 2015)? We simulate surface wave propagation across Eastern Asia with SES3D (a spectral element code for 3D seismic wave simulation) and observe significant amplitude variations caused by focusing and defocusing with a magnitude that is consistent with the observations.

  3. ExoData: A Python package to handle large exoplanet catalogue data

    NASA Astrophysics Data System (ADS)

    Varley, Ryan

    2016-10-01

    Exoplanet science often involves using the system parameters of real exoplanets for tasks such as simulations, fitting routines, and target selection for proposals. Several exoplanet catalogues are already well established, but they often lack a version history and code-friendly interfaces. Software that bridges the gap between the catalogues and code enables users to improve the reproducibility of results by facilitating the retrieval of the exact system parameters used in an article's results, along with unifying the equations and software used. As exoplanet science moves towards large data, gone are the days when researchers can recall the current population from memory. An interface able to query the population now becomes invaluable for target selection and population analysis. ExoData is a Python interface and exploratory analysis tool for the Open Exoplanet Catalogue. It allows the loading of exoplanet systems into Python as objects (Planet, Star, Binary, etc.) from which common orbital and system equations can be calculated and measured parameters retrieved. This allows researchers to use tested code of the common equations they require (with units) and provides a large science input catalogue of planets for easy plotting and use in research. Advanced querying of targets is possible using the database and the Python programming language. ExoData is also able to parse spectral types and fill in missing parameters according to programmable specifications and equations. Examples of use cases are integration of equations into data reduction pipelines, selecting planets for observing proposals, and serving as an input catalogue for large-scale simulation and analysis of planets. ExoData is a Python package available freely on GitHub.
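As an example of the "common orbital and system equations" such a tool computes from catalogue parameters, here is a hedged standalone sketch of Kepler's third law in plain Python. This is not the ExoData API; the constants are rounded, so expect roughly 0.1% accuracy:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m
DAY = 86_400.0         # seconds per day

def semi_major_axis_au(period_days, stellar_mass_msun):
    """Semi-major axis from Kepler's third law, a^3 = G M P^2 / (4 pi^2).

    The planet's mass is neglected relative to the star's."""
    P = period_days * DAY
    M = stellar_mass_msun * M_SUN
    a = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a / AU

# Sanity check on the Earth-Sun system: ~1 AU
print(f"{semi_major_axis_au(365.25, 1.0):.4f} AU")
```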

  4. New algorithm for lossless hyper-spectral image compression with mixing transform to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents a new algorithm based on a mixing transform to eliminate redundancy: an SHIRCT and subtraction mixing transform is used to eliminate spectral redundancy, and a 2D CDF(2,2) DWT to eliminate spatial redundancy. This transform is convenient for hardware realization, since it can be fully implemented by add and shift operations. Its redundancy elimination effect is better than that of a (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed compression coding algorithm is then used to implement the compression coding. The experimental results show that in lossless image compression applications the effect of this method is slightly better than the result acquired using (1D+2D) CDF(2,2) DWT + improved SPIHT+CABAC, and it is much better than the results acquired by JPEG-LS, WinZip, ARJ, DPCM, and the research achievements of a research team of the Chinese Academy of Sciences, NMST and MST. Using the hyper-spectral image Canal from the American JPL laboratory as the data set for the lossless compression test, on average the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13% and 11%, respectively.
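The claim that the transform "can be fully implemented by add and shift operations" is the hallmark of lifting-scheme integer wavelets. The sketch below shows one level of the reversible CDF(2,2) (5/3) transform in one dimension; it is illustrative only, since the paper's scheme is two-dimensional plus a spectral mixing transform:

```python
def cdf22_forward(x):
    """One level of the reversible CDF(2,2) (5/3) lifting transform.

    Only integer adds and arithmetic shifts are used, which is the
    hardware convenience the abstract stresses. Assumes an even-length
    list; borders are handled by clamping (sample repetition)."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # predict step: detail = odd - floor((left + right) / 2)
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # update step: smooth = even + floor((d_left + d_right + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def cdf22_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

x = [12, 14, 15, 13, 10, 8, 9, 11]
s, d = cdf22_forward(x)
assert cdf22_inverse(s, d) == x   # integer-to-integer, perfectly invertible
```

The lifting structure guarantees losslessness regardless of the border rule, because the inverse replays each step exactly.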

  5. A spectrally accurate boundary-layer code for infinite swept wings

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1994-01-01

    This report documents the development, validation, and application of a spectrally accurate boundary-layer code, WINGBL2, which has been designed specifically for use in stability analyses of swept-wing configurations. Currently, we consider only the quasi-three-dimensional case of an infinitely long wing of constant cross section. The effects of streamwise curvature, streamwise pressure gradient, and wall suction and/or blowing are taken into account in the governing equations and boundary conditions. The boundary-layer equations are formulated both for the attachment-line flow and for the evolving boundary layer. The boundary-layer equations are solved by marching in the direction perpendicular to the leading edge, for which high-order (up to fifth) backward differencing techniques are used. In the wall-normal direction, a spectral collocation method, based upon Chebyshev polynomial approximations, is exploited. The accuracy, efficiency, and user-friendliness of WINGBL2 make it well suited for applications to linear stability theory, parabolized stability equation methodology, direct numerical simulation, and large-eddy simulation. The method is validated against existing schemes for three test cases, including incompressible swept Hiemenz flow and Mach 2.4 flow over an airfoil swept at 70 deg to the free stream.
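The wall-normal spectral collocation idea that gives WINGBL2 its accuracy can be illustrated with the standard Chebyshev differentiation matrix on Gauss-Lobatto points (after Trefethen's well-known construction). This is a generic sketch, not WINGBL2 code:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix on N+1 Gauss-Lobatto
    points x_j = cos(pi*j/N), following Trefethen's construction."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick for diagonal
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # d/dx e^x = e^x
# err is near machine precision with only 17 points: spectral accuracy
```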

  6. Theoretical Stark broadening parameters for spectral lines arising from the 2p^5ns, 2p^5np and 2p^5nd electronic configurations of Mg III

    NASA Astrophysics Data System (ADS)

    Colón, C.; Moreno-Díaz, C.; Alonso-Medina, A.

    2013-10-01

    In the present work we report theoretical Stark widths and shifts calculated using the Griem semi-empirical approach, corresponding to 237 spectral lines of Mg III. Data are presented for an electron density of 10^17 cm^-3 and temperatures T = 0.5-10.0 (10^4 K). The matrix elements used in these calculations have been determined from 23 configurations of Mg III: 2s^2 2p^6, 2s^2 2p^5 3p, 2s^2 2p^5 4p, 2s^2 2p^5 4f and 2s^2 2p^5 5f for even parity and 2s^2 2p^5 ns (n = 3-6), 2s^2 2p^5 nd (n = 3-9), 2s^2 2p^5 5g and 2s 2p^6 np (n = 3-8) for odd parity. For the intermediate coupling (IC) calculations, we use the standard method of least-squares fitting from experimental energy levels by means of the Cowan computer code. Also, in order to test the matrix elements used in our calculations, we present calculated values of 70 transition probabilities of Mg III spectral lines and 14 calculated values of radiative lifetimes of Mg III levels. There is good agreement between our calculations and experimental radiative lifetimes. Spectral lines of Mg III are relevant in astrophysics and also play an important role in the spectral analysis of laboratory plasma. Theoretical trends of the Stark broadening parameter versus the temperature for relevant lines are presented. No values of Stark parameters can be found in the bibliography.

  7. First Global Estimates of Anthropogenic Shortwave Forcing by Methane

    NASA Astrophysics Data System (ADS)

    Collins, William; Feldman, Daniel; Kuo, Chaincy

    2017-04-01

    Although the primary well-mixed greenhouse gases (WMGHGs) absorb both shortwave and longwave radiation, to date assessments of the effects from human-induced increases in atmospheric concentrations of WMGHGs have focused almost exclusively on quantifying the longwave radiative forcing of these gases. However, earlier studies have shown that the shortwave effects of WMGHGs are comparable to those of many less important longwave forcing agents routinely included in these assessments, for example the effects of aircraft contrails, stratospheric anthropogenic methane, and stratospheric water vapor from the oxidation of this methane. These earlier studies include the Radiative Transfer Model Intercomparison Project (RTMIP; Collins et al. 2006) conducted using line-by-line radiative transfer codes as well as the radiative parameterizations from most of the global climate models (GCMs) assembled for the Coupled Model Intercomparison Project (CMIP-3). In this talk, we discuss the first global estimates of the shortwave radiative forcing by methane due to the anthropogenic increase in CH4 between pre-industrial and present-day conditions. This forcing is a balance between reduced heating due to absorption of downwelling sunlight in the stratosphere and increased heating due to absorption of upwelling sunlight reflected from the surface as well as from clouds and aerosols in the troposphere. These estimates are produced using the Observing System Simulation Experiment (OSSE) framework we have developed for NASA's upcoming Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. The OSSE is designed to compute the monthly mean shortwave radiative forcing based upon global gridded atmospheric and surface conditions extracted from either the meteorological reanalyses collected for the Analysis for MIPs (Ana4MIPs) or the CMIP-5 multi-GCM archive analyzed in the Fifth Assessment Report (AR-5) of the Intergovernmental Panel on Climate Change (IPCC).
The OSSE combines these atmospheric conditions with an observationally derived prescription for the Earth's spectral surface albedo as inputs to the MODerate resolution atmospheric TRANsmission (MODTRAN) code. MODTRAN is designed to model atmospheric propagation of electromagnetic radiation for the 100-50,000 1/cm (0.2 to 100 micrometers) spectral range. This covers the spectrum from middle ultraviolet to visible light to far infrared. The most recently released version of the code, MODTRAN6, provides a spectral resolution of 0.2 1/cm using its 0.1 1/cm band model algorithm.

  8. Detection of counterfeit electronic components through ambient mass spectrometry and chemometrics.

    PubMed

    Pfeuffer, Kevin P; Caldwell, Jack; Shelley, Jake T; Ray, Steven J; Hieftje, Gary M

    2014-09-21

    In the last several years, illicit electronic components have been discovered in the inventories of several distributors and even installed in commercial and military products. Illicit or counterfeit electronic components include a broad category of devices that can range from the correct unit with a more recent date code to lower-specification or non-working systems with altered names, manufacturers and date codes. Current methodologies for identification of counterfeit electronics rely on visual microscopy by expert users and, while effective, are very time-consuming. Here, a plasma-based ambient desorption/ionization source, the flowing atmospheric pressure afterglow (FAPA), is used to generate a mass-spectral fingerprint from the surface of a variety of discrete electronic integrated circuits (ICs). Chemometric methods, specifically principal component analysis (PCA) and the bootstrapped error-adjusted single-sample technique (BEAST), are used successfully to differentiate between genuine and counterfeit ICs. In addition, chemical and physical surface-removal techniques are explored and suggest which surface-altering techniques were utilized by counterfeiters.
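The PCA step of such a chemometric workflow can be sketched with synthetic data. The "marker peak," channel count, and group sizes below are invented for illustration and bear no relation to the actual FAPA spectra:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_scores(X, n_components=2):
    """Project the rows of X onto the top principal components
    via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic "fingerprints": two groups of 50-channel spectra that differ
# only in one channel (a hypothetical marker peak from a surface coating)
genuine = rng.normal(0.0, 0.1, size=(20, 50))
counterfeit = rng.normal(0.0, 0.1, size=(20, 50))
counterfeit[:, 5] += 2.0
scores = pca_scores(np.vstack([genuine, counterfeit]))
# The first principal component separates the two classes cleanly
```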

  9. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis/synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband, coded speech; development of appropriate real-time noise suppression algorithms; and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation and short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded, helicopter speech by 10.6 points.
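The spectral subtraction algorithm mentioned in the final sentence can be sketched in a few lines of NumPy. The frame length, floor value, and test signal below are illustrative assumptions, not parameters from the report:

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, floor=0.02):
    """Magnitude spectral subtraction on a single frame.

    noisy     : time-domain frame of noisy speech
    noise_mag : magnitude spectrum of the noise, estimated during silence
    floor     : spectral floor that limits 'musical noise' artifacts
    """
    spec = np.fft.rfft(noisy)
    mag = np.abs(spec) - noise_mag             # subtract the noise estimate
    mag = np.maximum(mag, floor * noise_mag)   # rectify with a spectral floor
    # keep the noisy phase, resynthesize the frame
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * 8 * t / n)          # stand-in for a voiced frame
noisy = clean + 0.5 * rng.standard_normal(n)
# noise spectrum estimated from a separate noise-only frame
noise_mag = np.abs(np.fft.rfft(0.5 * rng.standard_normal(n)))
enhanced = spectral_subtract(noisy, noise_mag)
```

In practice this runs frame-by-frame with overlap-add; the single-frame version above already reduces the residual noise energy.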

  10. Simulation of radiation in laser produced plasmas

    NASA Astrophysics Data System (ADS)

    Colombant, D. G.; Klapisch, M.; Deniz, A. V.; Weaver, J.; Schmitt, A.

    1999-11-01

    The radiation hydrodynamics code FAST1D (J. H. Gardner, A. J. Schmitt, J. P. Dahlburg, C. J. Pawley, S. E. Bodner, S. P. Obenschain, V. Serlin and Y. Aglitskiy, Phys. Plasmas, 5, 1935 (1998)) was used directly (i.e., without a postprocessor) to simulate radiation emitted from flat targets irradiated by the Nike laser, from 10^12 W/cm^2 to 10^13 W/cm^2. We use enough photon groups to resolve spectral lines. Opacities are obtained from the STA code (A. Bar-Shalom, J. Oreg, M. Klapisch and T. Lehecka, Phys. Rev. E, 59, 3512 (1999)), and non-LTE effects are described with the Busquet model (M. Busquet, Phys. Fluids B, 5, 4191 (1993)). Results are compared to transmission grating spectra in the range 100-600 eV, and to time-resolved calibrated filtered diodes (spectral windows around 100, 180, 280 and 450 eV).

  11. Laser-plasma interactions with a Fourier-Bessel particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andriyash, Igor A., E-mail: igor.andriyash@gmail.com; LOA, ENSTA ParisTech, CNRS, Ecole polytechnique, Université Paris-Saclay, 828 bd des Maréchaux, 91762 Palaiseau cedex; Lehe, Remi

    A new spectral particle-in-cell (PIC) method for plasma modeling is presented and discussed. In the proposed scheme, the Fourier-Bessel transform is used to translate the Maxwell equations to the quasi-cylindrical spectral domain. In this domain, the equations are solved analytically in time, and the spatial derivatives are approximated with high accuracy. In contrast to the finite-difference time-domain (FDTD) methods that are commonly used in PIC, the developed method does not produce numerical dispersion and does not involve grid staggering for the electric and magnetic fields. These features are especially valuable in modeling the wakefield acceleration of particles in plasmas. The proposed algorithm is implemented in the code PLARES-PIC, and the test simulations of laser-plasma interactions are compared to the ones done with the quasi-cylindrical FDTD PIC code CALDER-CIRC.
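    The dispersion-free property claimed above is easy to demonstrate in one dimension: in a Fourier (spectral) representation, the vacuum Maxwell equations decouple into characteristic modes that can be advanced analytically in time, so a pulse propagates without numerical dispersion even for large time steps. The sketch below is a 1D Cartesian toy illustrating that property, not the quasi-cylindrical Fourier-Bessel scheme of the paper.

```python
import numpy as np

def advance_fields_exact(Ey, Bz, dt, dx, c=1.0):
    """Advance 1D vacuum Maxwell fields one step analytically in Fourier space.

    Each Fourier mode obeys d/dt (Ey ± c*Bz) = ∓ i c k (Ey ± c*Bz), so the
    characteristic combinations simply rotate by exp(∓ i c k dt). The update
    is exact in time for any dt: there is no numerical dispersion, which is
    the key property of spectral schemes like the one described above.
    """
    k = 2 * np.pi * np.fft.fftfreq(len(Ey), d=dx)
    F = np.fft.fft(Ey + c * Bz)           # right-going characteristic
    G = np.fft.fft(Ey - c * Bz)           # left-going characteristic
    F *= np.exp(-1j * c * k * dt)         # exact analytic time advance
    G *= np.exp(+1j * c * k * dt)
    Ey_new = np.real(np.fft.ifft(F + G)) / 2
    Bz_new = np.real(np.fft.ifft(F - G)) / (2 * c)
    return Ey_new, Bz_new
```

    Advancing a right-going Gaussian pulse by eight grid cells in a single step reproduces it exactly (to round-off), regardless of the step size.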

  12. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding on the input data cube, which provides higher flexibility on the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer pitch size thin films, referred to as pixelated filters, with three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  13. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    NASA Astrophysics Data System (ADS)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
    Program summary
    Program title: XMDS2
    Catalogue identifier: AENK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 2
    No. of lines in distributed program, including test data, etc.: 872490
    No. of bytes in distributed program, including test data, etc.: 45522370
    Distribution format: tar.gz
    Programming language: Python and C++
    Computer: Any computer with a Unix-like system, a C++ compiler and Python
    Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux
    RAM: Problem dependent (roughly 50 bytes per grid point)
    Classification: 4.3, 6.5
    External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods)
    Nature of problem: General coupled initial-value stochastic partial differential equations
    Solution method: Spectral method with method-of-lines integration
    Running time: Determined by the size of the problem
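    For readers unfamiliar with the problem class, the simplest member of XMDS2's problem space, a single stochastic differential equation, can be integrated in a few lines. This is a sketch of the kind of integration XMDS2 automates; XMDS2 itself generates optimized C++ and offers higher-order stochastic schemes, and the Euler-Maruyama scheme and all names below are illustrative.

```python
import numpy as np

def euler_maruyama(f, g, y0, t0, t1, n_steps, rng):
    """Integrate dy = f(y) dt + g(y) dW with the Euler-Maruyama scheme.

    `y0` may be a vector of independent sample paths, in which case each
    path receives its own Wiener increment dW ~ N(0, dt).
    """
    dt = (t1 - t0) / n_steps
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        dW = rng.standard_normal(y.shape) * np.sqrt(dt)
        y = y + f(y) * dt + g(y) * dW
    return y
```

    As a check, for the Ornstein-Uhlenbeck process dy = -y dt + 0.3 dW starting from y = 1, the ensemble mean after t = 1 should approach e^(-1).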

  14. X-Ray Spectra from MHD Simulations of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy D.; Noble, Scott C.; Krolik, Julian H.

    2011-01-01

    We present new global calculations of X-ray spectra from fully relativistic magnetohydrodynamic (MHD) simulations of black hole (BH) accretion disks. With a self-consistent radiative transfer code including Compton scattering and returning radiation, we can reproduce the predominant spectral features seen in decades of X-ray observations of stellar-mass BHs: a broad thermal peak around 1 keV, a power-law continuum up to >100 keV, and a relativistically broadened iron fluorescent line. By varying the mass accretion rate, different spectral states naturally emerge: thermal-dominant, steep power-law, and low/hard. In addition to the spectral features, we briefly discuss applications to X-ray timing and polarization.

  15. Parallel Semi-Implicit Spectral Element Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Fournier, A.; Thomas, S.; Loft, R.

    2001-05-01

    The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) at T42 resolution. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. 
    An efficient SI solver is essential to substantially increase this rate. Parallel preconditioning for an iterative conjugate-gradient elliptic solver is described. We are building a GCM dycore capable of 200 GFLOPS sustained performance on clustered RISC/cache architectures using hybrid MPI/OpenMP programming.
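    The cache-blocking remark above refers to a simple implementation pattern: on each spectral element, differentiation is a small dense matrix-vector product, so a field's derivative over the whole mesh is a batch of tiny GEMVs. Below is a sketch under simplifying assumptions (one dimension, duplicated interface nodes, unit-width elements; in SEAM the blocks are 8x8 and the differentiation matrix comes from Legendre cardinal interpolation).

```python
import numpy as np

def elementwise_derivative(u, D, n_elem):
    """Differentiate a piecewise-polynomial field element by element.

    `u` holds n_elem blocks of p+1 nodal values (interface nodes duplicated);
    `D` is the (p+1)x(p+1) nodal differentiation matrix of one element. The
    whole operation is a batch of small matrix-vector products D @ u_e,
    which is what makes it naturally cache-blocked.
    """
    p1 = D.shape[0]                   # nodes per element
    U = u.reshape(n_elem, p1)         # (n_elem, p+1) local views
    return (U @ D.T).reshape(-1)      # batched small GEMVs
```

    As a minimal check, with 2-node linear elements the differentiation matrix is D = [[-1, 1], [-1, 1]] (unit width), and differentiating a piecewise-linear ramp returns slope 1 at every node.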

  16. Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis

    PubMed Central

    Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.

    2016-01-01

    Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV, with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available. PMID:27487888

  17. Technical Note: SPEKTR 3.0—A computational tool for x-ray spectrum modeling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Punnoose, J.; Xu, J.; Sisniega, A.

    2016-08-15

    Purpose: A computational toolkit (SPEKTR 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The SPEKTR code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV, with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, SPEKTR, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the SPEKTR function library, UI, and optimization tool are available.

  18. TH-AB-209-10: Breast Cancer Identification Through X-Ray Coherent Scatter Spectral Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapadia, A; Morris, R; Albanese, K

    Purpose: We have previously described the development and testing of a coherent-scatter spectral imaging system for identification of cancer. Our prior evaluations were performed using either tissue surrogate phantoms or formalin-fixed tissue obtained from pathology. Here we present the first results from a scatter imaging study using fresh breast tumor tissues obtained through surgical excision. Methods: A coherent-scatter imaging system was built using a clinical X-ray tube, photon counting detectors, and custom-designed coded-apertures. System performance was characterized using calibration phantoms of biological materials. Fresh breast tumors were obtained from patients undergoing mastectomy and lumpectomy surgeries for breast cancer. Each specimen was vacuum-sealed, scanned using the scatter imaging system, and then sent to pathology for histological workup. Scatter images were generated separately for each tissue specimen and analyzed to identify voxels containing malignant tissue. The images were compared against histological analysis (H&E + pathologist identification of tumors) to assess the match between scatter-based and histological diagnosis. Results: In all specimens scanned, the scatter images showed the location of cancerous regions within the specimen. The detection and classification was performed through automated spectral matching without the need for manual intervention. The scatter spectra corresponding to cancer tissue were found to be in agreement with those reported in literature. Inter-patient variability was found to be within limits reported in literature. The scatter images showed agreement with pathologist-identified regions of cancer. Spatial resolution for this configuration of the scanner was determined to be 2–3 mm, and the total scan time for each specimen was under 15 minutes. Conclusion: This work demonstrates the utility of coherent scatter imaging in identifying cancer based on the scatter properties of the tissue. 
    It presents the first results from coherent scatter imaging of fresh (unfixed) breast tissue using our coded-aperture scatter imaging approach for cancer identification.

  19. SP_Ace: A new code to estimate Teff, log g, and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.

    2016-09-01

    SP_Ace is a FORTRAN95 code that derives stellar parameters and elemental abundances from stellar spectra. To derive these parameters, SP_Ace neither measures equivalent widths of lines nor uses templates of synthetic spectra; instead, it employs a new method based on a library of General Curve-Of-Growths. To date, SP_Ace works in the wavelength ranges 5212-6860 Å and 8400-8921 Å, at spectral resolutions R=2000-20000. Extensions of these limits are possible. SP_Ace is a highly automated code suitable for application to large spectroscopic surveys. A web front end to this service is publicly available at http://dc.g-vo.org/SP_ACE together with the library and the binary code.

  20. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

    An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented and represents the state of the art in semi-empirical acoustic prediction codes, where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources: typically a Reynolds-averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it outside experimental uncertainty at cooler, lower-speed conditions. Jet3D did not predict changes in directivity in the downstream angles. 
    The statistical code JeNo v1 was within experimental uncertainty in predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  1. Time-dependent jet flow and noise computations

    NASA Technical Reports Server (NTRS)

    Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.

    1990-01-01

    Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.

  2. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems are described, including the physical communication links, the notion of bandwidth, FCC regulations, and performance measures such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used, or proposed for use, in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed, along with the important coding parameter, the channel cutoff rate.

  3. Multiple copies of genes coding for electron transport proteins in the bacterium Nitrosomonas europaea.

    PubMed

    McTavish, H; LaQuier, F; Arciero, D; Logan, M; Mundfrom, G; Fuchs, J A; Hooper, A B

    1993-04-01

    The genome of Nitrosomonas europaea contains at least three copies each of the genes coding for hydroxylamine oxidoreductase (HAO) and cytochrome c554. A copy of an HAO gene is always located within 2.7 kb of a copy of a cytochrome c554 gene. Cytochrome P-460, a protein that shares very unusual spectral features with HAO, was found to be encoded by a gene separate from the HAO genes.

  4. Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules.

    PubMed

    Han, M; Gao, X; Su, J Z; Nie, S

    2001-07-01

    Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots (zinc sulfide-capped cadmium selenide nanocrystals) into polymeric microbeads at precisely controlled ratios. Their novel optical properties (e.g., size-tunable emission and simultaneous excitation) render these highly luminescent quantum dots (QDs) ideal fluorophores for wavelength-and-intensity multiplexing. The use of 10 intensity levels and 6 colors could theoretically code one million nucleic acid or protein sequences. Imaging and spectroscopic measurements indicate that the QD-tagged beads are highly uniform and reproducible, yielding bead identification accuracies as high as 99.99% under favorable conditions. DNA hybridization studies demonstrate that the coding and target signals can be simultaneously read at the single-bead level. This spectral coding technology is expected to open new opportunities in gene expression studies, high-throughput screening, and medical diagnostics.
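    The coding capacity quoted above is straightforward combinatorics: n colors with m distinguishable intensity levels each give m^n distinct codes, the abstract's one million for m = 10, n = 6. As the abstract notes, this is a theoretical ceiling; realizable capacity is lower once spectral overlap and intensity noise are taken into account.

```python
import math

def code_capacity(levels: int, colors: int) -> int:
    """Distinct wavelength-intensity codes with `levels` intensity steps per color."""
    return levels ** colors

# The abstract's figure: 10 intensity levels across 6 colors.
assert code_capacity(10, 6) == 1_000_000

# Equivalent information content per bead, in bits:
bits = math.log2(code_capacity(10, 6))   # just under 20 bits
```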

  5. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high-definition video in this work. Film grain noise is viewed as part of the artistic presentation by people in the movie industry. On one hand, since film grain noise can boost the natural appearance of pictures in high-definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is therefore desirable to extract film grain noise from the input video as a pre-processing step at the encoder, and to re-synthesize it and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video is still well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. In addition, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results demonstrate the efficiency of the proposed scheme.
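    The re-synthesis step described above can be sketched in one dimension: estimate the magnitude spectrum of the extracted grain, then shape white Gaussian noise with it so the synthetic grain matches the original's power spectral density. This is only a toy version; the paper models 2D film grain with a small parameter set and also matches cross-channel correlation, and the normalization below is illustrative.

```python
import numpy as np

def synthesize_grain(noise_sample, rng):
    """Synthesize new noise whose power spectral density matches a sample.

    White Gaussian noise is shaped in the frequency domain by the sample's
    magnitude spectrum, so the output shares the sample's spectral envelope
    while being a statistically independent realization.
    """
    n = len(noise_sample)
    target_mag = np.abs(np.fft.rfft(noise_sample))     # spectral shape to match
    white = rng.standard_normal(n)
    shaped = np.fft.rfft(white) * target_mag / np.sqrt(n)
    return np.fft.irfft(shaped, n)
```

    A quick sanity check: shaping against a low-pass sample should produce noise whose spectrum is strongly correlated with the sample's.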

  6. AOI 1— COMPUTATIONAL ENERGY SCIENCES:MULTIPHASE FLOW RESEARCH High-fidelity multi-phase radiation module for modern coal combustion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modest, Michael

    The effects of radiation in particle-laden flows were the object of the present research. The presence of particles increases optical thickness substantially, making the use of the “optically thin” approximation in most cases a very poor assumption. However, since radiation fluxes peak at intermediate optical thicknesses, overall radiative effects may not necessarily be stronger than in gas combustion. Also, the spectral behavior of particle radiation properties is much more benign, making spectral models simpler (and making the assumption of a gray radiator halfway acceptable, at least for fluidized beds when gas radiation is not large). On the other hand, particles scatter radiation, making the radiative transfer equation (RTE) much more difficult to solve. The research carried out in this project encompassed three general areas: (i) assessment of relevant radiation properties of particle clouds encountered in fluidized bed and pulverized coal combustors, (ii) development of proper spectral models for gas-particulate mixtures for various types of two-phase combustion flows, and (iii) development of a radiative transfer equation (RTE) solution module for such applications. The resulting models were validated against artificial cases since open-literature experimental data were not available. The final models are in modular form tailored toward maximum portability, and were incorporated into two research codes: (i) the open-source CFD code OpenFOAM, which we have extensively used in our previous work, and (ii) the open-source multi-phase flow code MFIX, which is maintained by NETL.

  7. Iterative retrieval of surface emissivity and temperature for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1997-11-01

    The central problem of temperature-emissivity separation is that we obtain N spectral measurements of radiance and need to find N + 1 unknowns (N emissivities and one temperature). To solve this problem in the presence of the atmosphere we need to find even more unknowns: N spectral transmissions τ_atmo(λ), N up-welling path radiances L_path↑(λ) and N down-welling path radiances L_path↓(λ). Fortunately, radiative transfer codes such as MODTRAN 3 and FASCODE are available to estimate τ_atmo(λ), L_path↑(λ) and L_path↓(λ) to within a few percent. With the growing use of hyperspectral imagers, e.g. AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. We believe this will enable us to get around the present temperature-emissivity separation (TES) algorithms by using methods which take advantage of the many channels available in hyperspectral imagers. The first idea is to exploit the simple fact that a typical surface emissivity spectrum is rather smooth compared to the spectral features introduced by the atmosphere. Thus iterative solution techniques can be devised which retrieve emissivity spectra ε based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. By varying the surface temperature over a small range, a series of emissivity spectra are calculated; the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
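    The smoothness criterion described above can be sketched as follows, with the atmosphere omitted for brevity: for each trial temperature T the implied emissivity is ε(λ) = L(λ)/B(λ, T), and the temperature whose emissivity spectrum has the smallest mean squared second difference is selected. The physical constants are CODATA values; the roughness measure and the grids are illustrative choices, not the paper's.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(wl_um, T):
    """Spectral radiance B(λ, T) in W·m⁻²·sr⁻¹·µm⁻¹, wavelength in µm."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T)) * 1e-6

def retrieve_by_smoothness(wl_um, radiance, T_grid):
    """Pick the temperature whose implied emissivity ε = L/B(T) is smoothest.

    Roughness is measured as the mean squared second difference of ε(λ);
    atmospheric transmission and path radiances are omitted in this toy.
    """
    best = None
    for T in T_grid:
        eps = radiance / planck(wl_um, T)
        rough = np.mean(np.diff(eps, 2) ** 2)
        if best is None or rough < best[0]:
            best = (rough, T, eps)
    return best[1], best[2]
```

    With a flat true emissivity, a wrong trial temperature bends the implied emissivity by the ratio of two Planck curves, so the roughness minimum falls at the true temperature.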

  8. Seismic design parameters - A user guide

    USGS Publications Warehouse

    Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.

    2001-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997, additional codes and standards have also adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories: (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot the response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.

  9. Spectral element simulation of precession driven flows in the outer cores of spheroidal planets

    NASA Astrophysics Data System (ADS)

    Vormann, Jan; Hansen, Ulrich

    2015-04-01

    A common feature of the planets in the solar system is the precession of their rotation axes, driven by the gravitational influence of another body (e.g. the Earth's moon). In a precessing body, the rotation axis itself rotates around another axis, describing a cone during one precession period. Similar to the Coriolis and centrifugal forces appearing from the transformation to a rotating system, the addition of precession adds another term to the Navier-Stokes equation, the so-called Poincaré force. The main geophysical motivation for studying precession-driven flows comes from their ability to act as magnetohydrodynamic dynamos in planets and moons. Precession may either act as the only driving force or operate together with other forces such as thermochemical convection. One of the challenges in direct numerical simulations of such flows lies in the spheroidal shape of the fluid volume, which should not be neglected since it contributes an additional forcing through pressure torques. Codes developed for the simulation of flows in spheres mostly use efficient global spectral algorithms that converge fast but lack geometric flexibility, while local methods are usable in more complex shapes but often lack high accuracy. We therefore adapted the spectral element code Nek5000, developed at Argonne National Laboratory, to the problem. The spectral element method is capable of solving for the flow in arbitrary geometries while still offering spectral convergence. We present first results for the simulation of a purely hydrodynamic, precession-driven flow in a spheroid with no-slip boundaries and an inner core. The driving by the Poincaré force is in a range where theoretical work predicts multiple solutions for a laminar flow. Our simulations indicate a transition to turbulent flows for Ekman numbers of 10⁻⁶ and lower.

  10. CONTINUUM INTENSITY AND [O i] SPECTRAL LINE PROFILES IN SOLAR 3D PHOTOSPHERIC MODELS: THE EFFECT OF MAGNETIC FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabbian, D.; Moreno-Insertis, F., E-mail: damian@iac.es, E-mail: fmi@iac.es

    2015-04-01

    The importance of magnetic fields in three-dimensional (3D) magnetoconvection models of the Sun’s photosphere is investigated in terms of their influence on the continuum intensity at different viewing inclination angles and on the intensity profile of two [O i] spectral lines. We use the RH numerical radiative transfer code to perform a posteriori spectral synthesis on the same time series of magnetoconvection models used in our publications on the effect of magnetic fields on abundance determination. We obtain a good match of the synthetic disk-center continuum intensity to the absolute continuum values from the Fourier Transform Spectrometer (FTS) observational spectrum; the match of the center-to-limb variation synthetic data to observations is also good, thanks, in part, to the 3D radiation transfer capabilities of the RH code. The different levels of magnetic flux in the numerical time series do not modify the quality of the match. Concerning the targeted [O i] spectral lines, we find, instead, that magnetic fields lead to nonnegligible changes in the synthetic spectrum, with larger average magnetic flux causing both of the lines to become noticeably weaker. The photospheric oxygen abundance that one would derive if instead using nonmagnetic numerical models would thus be lower by a few to several centidex. The inclusion of magnetic fields is confirmed to be important for improving the current modeling of the Sun, here in particular in terms of spectral line formation and of deriving consistent chemical abundances. These results may shed further light on the still controversial issue regarding the precise value of the solar oxygen abundance.

  11. Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.

    PubMed

    Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R

    2012-08-01

    Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques, including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and parametric survival models such as the accelerated failure time (AFT) model with log-normal, log-logistic and Weibull distributions, were used to detect differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening as the proportion of missing data increases. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
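    The key idea above, treating below-detection-limit intensities as left-censored observations rather than discarding them, can be sketched in a few lines. This is an illustrative Python reconstruction (the paper's own code is in R), with invented intensities, a hypothetical detection limit, and a crude grid search in place of a full AFT fit:

```python
import math

def lognorm_left_censored_loglik(data, limit, mu, sigma):
    """Log-likelihood of log-normal intensities with left-censoring at `limit`.
    Observed values contribute the log-normal density; censored ones
    (recorded as None) contribute the probability mass below the limit."""
    ll = 0.0
    for x in data:
        if x is None:                      # peak fell below the detection limit
            z = (math.log(limit) - mu) / sigma
            ll += math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
        else:                              # observed peak intensity
            z = (math.log(x) - mu) / sigma
            ll += (-0.5 * z * z - 0.5 * math.log(2 * math.pi)
                   - math.log(sigma) - math.log(x))
    return ll

# Invented intensities; None marks a censored (missing) peak.
data = [2.5, 3.1, None, 4.2, None, 2.8]
limit = 2.0
# Crude grid search for the MLE of mu at fixed sigma = 1.
best = max((lognorm_left_censored_loglik(data, limit, mu, 1.0), mu)
           for mu in (i / 100 for i in range(-200, 300)))
```

    The censored terms pull the estimated mean below the naive average of the observed intensities, which is exactly the bias that ignoring censoring would introduce.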

  12. Hydrogen-deficient Central Stars of Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Todt, H.; Kniazev, A. Y.; Gvaramadze, V. V.; Hamann, W.-R.; Pena, M.; Graefener, G.; Buckley, D.; Crause, L.; Crawford, S. M.; Gulbis, A. A. S.; Hettlage, C.; Hooper, E.; Husser, T.-O.; Kotze, P.; Loaring, N.; Nordsieck, K. H.; O'Donoghue, D.; Pickering, T.; Potter, S.; Romero-Colmenero, E.; Vaisanen, P.; Williams, T.; Wolf, M.

    2015-06-01

    A significant number of the central stars of planetary nebulae (CSPNe) are hydrogen-deficient and are considered the progenitors of H-deficient white dwarfs. Almost all of these H-deficient CSPNe show a chemical composition of helium, carbon, and oxygen. Most of them exhibit Wolf-Rayet-like emission line spectra and are therefore classified as spectral type [WC]. In recent years, CSPNe of other Wolf-Rayet spectral subtypes have been identified, namely PB 8 (spectral type [WN/WC]), IC 4663 and Abell 48 (spectral type [WN]). We performed spectral analyses for a number of Wolf-Rayet-type central stars at different evolutionary stages with the help of our Potsdam Wolf-Rayet (PoWR) model code for expanding atmospheres to determine the relevant stellar parameters. The results of our recent analyses will be presented in the context of stellar evolution and white dwarf formation. In particular, the problem of a uniform evolutionary channel for [WC] stars, as well as constraints on the formation of [WN] and [WN/WC] subtype stars, will be addressed.

  13. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
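    The Monte Carlo error-estimation procedure described above — perturbing the data with Gaussian deviates and re-running the unfold — can be sketched with a toy two-bin problem. Everything here (the response matrix, the "true" spectrum, the exactly determined 2x2 unfold) is invented for illustration and is not the UFO algorithm itself:

```python
import math, random

random.seed(42)

# Toy "unfold": two overlapping response functions, two spectral bins,
# exactly determined, so unfolding reduces to a 2x2 linear solve.
R = [[0.8, 0.2],
     [0.3, 0.7]]
s_true = [1.0, 2.0]                       # invented "true" spectrum
d_true = [R[i][0]*s_true[0] + R[i][1]*s_true[1] for i in range(2)]

def unfold(d):
    det = R[0][0]*R[1][1] - R[0][1]*R[1][0]
    return [( R[1][1]*d[0] - R[0][1]*d[1]) / det,
            (-R[1][0]*d[0] + R[0][0]*d[1]) / det]

# Monte Carlo error estimate: perturb the data with 5% Gaussian deviates
# (as in the UFO study) and look at the spread of the unfolded spectra.
trials = [unfold([x * (1 + random.gauss(0, 0.05)) for x in d_true])
          for _ in range(100)]
mean0 = sum(t[0] for t in trials) / len(trials)
std0 = math.sqrt(sum((t[0] - mean0)**2 for t in trials) / (len(trials) - 1))
```

    The sample standard deviation `std0` over the 100 perturbed unfolds plays the role of the Monte Carlo uncertainty estimate that the study compares against the built-in error matrix estimate.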

  14. Accurate Atmospheric Parameters at Moderate Resolution Using Spectral Indices: Preliminary Application to the MARVELS Survey

    NASA Astrophysics Data System (ADS)

    Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin

    2014-12-01

    Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature Teff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, a powerful technique that nevertheless relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from high-resolution spectra and a spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for Teff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses.
An additional test was performed with a subsample of 138 stars from the ELODIE stellar library, and the literature atmospheric parameters were recovered within 125 K for Teff, 0.10 dex for [Fe/H], and 0.29 dex for log g. These precisions are consistent with or better than those provided by the pipelines of surveys operating with similar resolutions. These results show that the spectral indices are a competitive tool to characterize stars with intermediate resolution spectra. Based on observations obtained with the 2.2 m MPG telescope at the European Southern Observatory (La Silla, Chile), under the agreement ESO-Observatório Nacional/MCT, and the Sloan Digital Sky Survey, which is owned and operated by the Astrophysical Research Consortium.
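    The equivalent-width measurement at the heart of the index method is a numerical integration of the line depression over the index bandpass, EW = ∫(1 − F/Fc) dλ. A minimal sketch with a hypothetical Gaussian absorption line on a unit continuum (not one of the survey's actual indices):

```python
import math

def equivalent_width(wl, flux_norm):
    """Trapezoidal EW = integral of (1 - F/Fc) over the bandpass,
    for a spectrum already normalized to the continuum."""
    ew = 0.0
    for i in range(len(wl) - 1):
        d1, d2 = 1.0 - flux_norm[i], 1.0 - flux_norm[i + 1]
        ew += 0.5 * (d1 + d2) * (wl[i + 1] - wl[i])
    return ew

# Hypothetical line: depth 0.5, sigma 1 A, centered at 6565 A.
wl = [6560.0 + 0.1 * i for i in range(101)]
flux = [1.0 - 0.5 * math.exp(-((w - 6565.0) ** 2) / 2.0) for w in wl]
ew = equivalent_width(wl, flux)   # analytically 0.5 * sqrt(2*pi) ~ 1.253 A
```

    In the survey pipeline, such equivalent widths are then compared against calibrations built from the high-resolution reference sample to yield Teff, [Fe/H], and log g.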

  15. THE BORN-AGAIN PLANETARY NEBULA A78: AN X-RAY TWIN OF A30

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toalá, J. A.; Guerrero, M. A.; Marquez-Lugo, R. A.

    We present the XMM-Newton discovery of X-ray emission from the planetary nebula (PN) A78, the second born-again PN detected in X-rays apart from A30. These two PNe share similar spectral and morphological characteristics: they harbor diffuse soft X-ray emission associated with the interaction between the H-poor ejecta and the current fast stellar wind and a point-like source at the position of the central star (CSPN). We present the spectral analysis of the CSPN, using for the first time an NLTE code for expanding atmospheres that takes line blanketing into account for the UV and optical spectra. The wind abundances are used for the X-ray spectral analysis of the CSPN and the diffuse emission. The X-ray emission from the CSPN in A78 can be modeled by a single C VI emission line, while the X-ray emission from its diffuse component is better described by an optically thin plasma emission model with a temperature of kT = 0.088 keV (T ≈ 1.0 × 10^6 K). We estimate X-ray luminosities in the 0.2-2.0 keV energy band of L_X,CSPN = (1.2 ± 0.3) × 10^31 erg s^-1 and L_X,DIFF = (9.2 ± 2.3) × 10^30 erg s^-1 for the CSPN and diffuse components, respectively.

  16. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel under the various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computational results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between suboptimum multi-stage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  17. Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication

    NASA Astrophysics Data System (ADS)

    Niaz, M. T.; Imdad, F.; Kim, H. S.

    2017-01-01

    An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and maintains a 50% dimming level. A simple encoder and decoder are proposed that generate codes in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
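    The balance constraint FD-OOK relies on — every codeword carries as many ones as zeros, pinning the average radiated power at 50% — can be illustrated with a toy block code. The 2-bit-to-4-bit mapping below is hypothetical, not the paper's actual code table; it trades spectral efficiency (0.5 bit per transmitted symbol) for a constant dimming level:

```python
# Hypothetical balanced mapping: each 2-bit input maps to a 4-bit codeword
# containing exactly two ones, so every codeword has 50% duty cycle.
BALANCED = {
    (0, 0): (0, 0, 1, 1),
    (0, 1): (0, 1, 0, 1),
    (1, 0): (1, 0, 1, 0),
    (1, 1): (1, 1, 0, 0),
}
DECODE = {v: k for k, v in BALANCED.items()}

def encode(bits):
    """Map pairs of data bits to balanced 4-bit codewords."""
    assert len(bits) % 2 == 0
    out = []
    for i in range(0, len(bits), 2):
        out.extend(BALANCED[tuple(bits[i:i + 2])])
    return out

def decode(symbols):
    """Invert the mapping, 4 channel symbols at a time."""
    out = []
    for i in range(0, len(symbols), 4):
        out.extend(DECODE[tuple(symbols[i:i + 4])])
    return out

msg = [1, 0, 0, 1, 1, 1]
tx = encode(msg)
assert sum(tx) * 2 == len(tx)   # exactly 50% ones -> constant 50% dimming
assert decode(tx) == msg
```

    Whatever the data, the transmitted stream always carries equal numbers of ones and zeros, which is why the perceived brightness stays fixed regardless of message content.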

  18. Spectral variability of photospheric radiation due to faculae. I. The Sun and Sun-like stars

    NASA Astrophysics Data System (ADS)

    Norris, Charlotte M.; Beeck, Benjamin; Unruh, Yvonne C.; Solanki, Sami K.; Krivova, Natalie A.; Yeo, Kok Leng

    2017-09-01

    Context. Stellar spectral variability on timescales of a day and longer, arising from magnetic surface features such as dark spots and bright faculae, is an important noise source when characterising extra-solar planets. Current 1D models of faculae do not capture their geometric properties and fail to reproduce observed solar facular contrasts. Magnetoconvection simulations provide facular contrasts that account for geometry. Aims: We calculate facular contrast spectra from magnetoconvection models of the solar photosphere with a view to improving (a) future parameter determinations for planets with early G-type host stars and (b) reconstructions of solar spectral variability. Methods: Regions of a solar twin (G2, log g = 4.44) atmosphere with a range of initial average vertical magnetic fields (100 to 500 G) were simulated using a 3D radiation-magnetohydrodynamics code, MURaM, and synthetic intensity spectra were calculated from the ultraviolet (149.5 nm) to the far infrared (160 000 nm) with the ATLAS9 radiative transfer code. Nine viewing angles were investigated to account for facular positions across most of the stellar disc. Results: Contrasts of the radiation from simulation boxes with different levels of magnetic flux relative to an atmosphere with no magnetic field are a complicated function of position, wavelength and magnetic field strength that is not reproduced by 1D facular models. Generally, contrasts increase towards the limb, but at UV wavelengths a saturation and decrease are observed close to the limb. Contrasts also increase strongly from the visible to the UV; there is a rich spectral dependence, with marked peaks in molecular bands and strong spectral lines. At disc centre, a complex relationship with magnetic field was found and areas of strong magnetic field can appear either dark or bright, depending on wavelength. 
Spectra calculated for a wide variety of magnetic fluxes will also serve to improve total and spectral solar irradiance reconstructions.

  19. Simultaneous retrieval of water vapour, temperature and cirrus clouds properties from measurements of far infrared spectral radiance over the Antarctic Plateau

    NASA Astrophysics Data System (ADS)

    Di Natale, Gianluca; Palchetti, Luca; Bianchini, Giovanni; Del Guasta, Massimo

    2017-03-01

    The possibility of separating the contributions of the atmospheric state and ice clouds by using spectral infrared measurements is a fundamental step toward quantifying the cloud effect in climate models. A simultaneous retrieval of cloud and atmospheric parameters from infrared wideband spectra allows the disentanglement of the spectral interference between these variables. In this paper, we describe the development of a code for the simultaneous retrieval of the atmospheric state and ice cloud parameters, and its application to the analysis of the spectral measurements acquired by the Radiation Explorer in the Far Infrared - Prototype for Applications and Development (REFIR-PAD) spectroradiometer, which has been in operation at Concordia Station on the Antarctic Plateau since 2012. The code performs the retrieval with a computational time that is comparable with the instrument acquisition time. Water vapour and temperature profiles and the cloud optical and microphysical properties, such as the generalised effective diameter and the ice water path, are retrieved by exploiting the 230-980 cm-1 spectral band. To simulate atmospheric radiative transfer, the Line-By-Line Radiative Transfer Model (LBLRTM) has been integrated with a specifically developed subroutine based on the δ-Eddington two-stream approximation, whereas the single-scattering properties of cirrus clouds have been derived from a database for hexagonal column habits. In order to detect ice clouds, a backscattering and depolarisation lidar co-located with REFIR-PAD has been used, allowing the cloud position and thickness to be inferred for use in the retrieval. A climatology of the vertical profiles of water vapour and temperature has been built using the daily radiosounding available at the station at 12:00 UTC; it has been used to construct an a priori profile correlation to constrain the fitting procedure. 
An optimal estimation method with the Levenberg-Marquardt approach has been used to perform the retrieval. In most cases, the retrieved humidity and temperature profiles show a good agreement with the radiosoundings, demonstrating that the simultaneous retrieval of the atmospheric state is not biased by the presence of cirrus clouds. Finally, the retrieved cloud parameters allow us to study the relationships between cloud temperature and optical depth and between effective particle diameter and ice water content. These relationships are similar to the statistical correlations measured on the Antarctic coast at Dumont d'Urville and in the Arctic region.
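    The Levenberg-Marquardt update used in such retrievals — blending Gauss-Newton steps with gradient-descent damping — can be sketched on a one-parameter toy forward model. This omits the a priori covariance term of a full optimal estimation method and uses an invented exponential model, purely for illustration:

```python
import math

# Toy forward model: y(t) = exp(a * t); fit the scalar a to synthetic data.
t_obs = [0.0, 0.5, 1.0, 1.5]
a_true = 0.8
y_obs = [math.exp(a_true * t) for t in t_obs]

def residuals(a):
    return [math.exp(a * t) - y for t, y in zip(t_obs, y_obs)]

def jacobian(a):
    return [t * math.exp(a * t) for t in t_obs]

a, lam = 0.0, 1e-3
for _ in range(50):
    r, J = residuals(a), jacobian(a)
    g = sum(Ji * ri for Ji, ri in zip(J, r))   # J^T r
    H = sum(Ji * Ji for Ji in J)               # J^T J (scalar here)
    step = -g / (H + lam)                      # damped Gauss-Newton step
    if sum(x * x for x in residuals(a + step)) < sum(x * x for x in r):
        a, lam = a + step, lam / 2   # accept: behave more like Gauss-Newton
    else:
        lam *= 10                    # reject: behave more like gradient descent
```

    The damping parameter `lam` is the heart of the scheme: it grows when a step overshoots and shrinks as the fit approaches the minimum, which is what makes the method robust for the nonlinear radiative transfer fits described above.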

  20. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with shell environments, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes, in other words the relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme enables using, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, SMART enables computing the radiative acceleration of the matter of the stellar atmosphere in turbulence clumps. This also enables connecting the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by light-induced drift. 
As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the SMART code. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided, two-column A4 sheets in landscape format. In addition, being well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments in the right-side margin (columns starting from 73). Such a short code has been composed by widely using unified input physics (for example, the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored. Thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed. The model input data and the line data used by us are both computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to LATEX has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. 
The list enables composing more adequately the concept of super-states, including partly correlating super-states. We are grateful to R. Kurucz for making available on CD-ROMs and via the Internet his computer codes ATLAS and SYNTHE, used by us as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.

  1. Representations of Pitch and Timbre Variation in Human Auditory Cortex

    PubMed Central

    2017-01-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. 
Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255

  2. PyQuant: A Versatile Framework for Analysis of Quantitative Mass Spectrometry Data.

    PubMed

    Mitchell, Christopher J; Kim, Min-Sik; Na, Chan Hyun; Pandey, Akhilesh

    2016-08-01

    Quantitative mass spectrometry data necessitates an analytical pipeline that captures the accuracy and comprehensiveness of the experiments. Currently, data analysis is often coupled to specific software packages, which restricts the analysis to a given workflow and precludes a more thorough characterization of the data by other complementary tools. To address this, we have developed PyQuant, a cross-platform mass spectrometry data quantification application that is compatible with existing frameworks and can be used as a stand-alone quantification tool. PyQuant supports most types of quantitative mass spectrometry data including SILAC, NeuCode, (15)N, (13)C, or (18)O and chemical methods such as iTRAQ or TMT and provides the option of adding custom labeling strategies. In addition, PyQuant can perform specialized analyses such as quantifying isotopically labeled samples where the label has been metabolized into other amino acids and targeted quantification of selected ions independent of spectral assignment. PyQuant is capable of quantifying search results from popular proteomic frameworks such as MaxQuant, Proteome Discoverer, and the Trans-Proteomic Pipeline in addition to several standalone search engines. We have found that PyQuant routinely quantifies a greater proportion of spectral assignments, with increases ranging from 25-45% in this study. Finally, PyQuant is capable of complementing spectral assignments between replicates to quantify ions missed because of lack of MS/MS fragmentation or that were omitted because of issues such as spectra quality or false discovery rates. This results in an increase of biologically useful data available for interpretation. In summary, PyQuant is a flexible mass spectrometry data quantification platform that is capable of interfacing with a variety of existing formats and is highly customizable, which permits easy configuration for custom analysis. 
© 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  3. Spectral mapping tools from the earth sciences applied to spectral microscopy data.

    PubMed

    Harris, A Thomas

    2006-08-01

    Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. 
Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures. (c) 2006 International Society for Analytical Cytology.
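    Linear spectral unmixing (LSU), mentioned above, models each pixel spectrum as a linear combination of endmember signatures and solves for the abundances by least squares. A minimal sketch with two invented endmembers and three spectral bands, solved via the normal equations:

```python
# Two endmembers, three spectral bands: solve min ||E a - p||^2.
E = [[0.9, 0.1],   # rows: bands; columns: endmember signatures
     [0.5, 0.6],
     [0.2, 0.8]]
pixel = [0.58, 0.54, 0.44]   # mixed pixel = 0.6*e1 + 0.4*e2 (noise-free)

# Normal equations: (E^T E) a = E^T p, then solve the 2x2 system directly.
ata = [[sum(E[k][i] * E[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atp = [sum(E[k][i] * pixel[k] for k in range(3)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
abund = [( ata[1][1] * atp[0] - ata[0][1] * atp[1]) / det,
         (-ata[1][0] * atp[0] + ata[0][0] * atp[1]) / det]
# abund recovers the subpixel fractions of each endmember.
```

    Real packages add constraints (abundances non-negative, summing to one) and, as the abstract notes, select endmembers from the spectrally purest pixels in the data rather than assuming them known.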

  4. Proposed Reference Spectral Irradiance Standards to Improve Photovoltaic Concentrating System Design and Performance Evaluation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, D. R.; Emery, K. E.; Gueymard, C.

    2002-05-01

    This conference paper describes how the American Society for Testing and Materials (ASTM), International Electrotechnical Commission (IEC), and International Standards Organization (ISO) standard solar terrestrial spectra (ASTM G-159, IEC-904-3, ISO 9845-1) provide standard spectra for photovoltaic performance applications. Modern terrestrial spectral radiation models and knowledge of atmospheric physics are applied to develop suggested revisions to update the reference spectra. We use a moderately complex radiative transfer model (SMARTS2) to produce the revised spectra. SMARTS2 has been validated against the complex MODTRAN radiative transfer code and against spectral measurements. The model is proposed as an adjunct standard to reproduce the reference spectra. The proposed spectra represent typical clear-sky spectral conditions associated with sites representing reasonable photovoltaic energy production and weathering and durability climates. The proposed spectra are under consideration by ASTM.

  5. Constraining the Star-Formation and Metal-Enrichment Histories of Galaxies with the Next Generation Spectral Library

    NASA Astrophysics Data System (ADS)

    Heap, Sara

    2009-07-01

    Hubble's Next Generation Spectral Library (NGSL) comprises intermediate-resolution (R ~ 1000) STIS spectra of 378 stars having a wide range in metallicity and age. Unique features of the NGSL include its broad wavelength coverage (1,800-10,100 Å) and high-S/N, absolute spectrophotometry. When incorporated in modern stellar population synthesis codes, the NGSL should enable us to constrain simultaneously the star-formation history and metal-enrichment history of galaxies over a wide redshift interval (z = 0-2). In AR10659, we laid the foundation for tracing the spectral evolution of galaxies by putting the NGSL in order. We now propose to derive the atmospheric and fundamental parameters of the program stars, generate integrated spectra of stellar populations of different metallicities and initial mass functions, and derive spectral diagnostics of the age, metallicity, and E(B-V) of stellar populations.

  6. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman coding, in which different symbols are transmitted with different probabilities. Using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
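    The Huffman procedure referenced above assigns shorter codewords to more probable symbols, yielding a prefix code whose mean length approaches the source entropy. A generic sketch with an invented 4-symbol source (not the paper's 9-symbol constellation):

```python
import heapq

def huffman(freqs):
    """Build a binary prefix code from symbol probabilities (Huffman procedure)."""
    # Heap entries: (probability, tie-break counter, partial codebook).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical non-uniform source: symbols sent with unequal probabilities.
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
code = huffman(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
# For these dyadic probabilities the mean length equals the entropy: 1.75 bits.
```

    In the paper's scheme the roles are reversed: a fixed-length data stream is parsed by the Huffman code so that constellation symbols come out with the designed unequal probabilities.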

  7. UniPOPS: Unified data reduction suite

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.; Garwood, Robert W.; Salter, Christopher J.; Stobie, Elizabeth B.; Cram, Thomas R.; Morgan, Lorrie; Vance, Bob; Hudson, Jerome

    2015-03-01

    UniPOPS, a suite of programs and utilities developed at the National Radio Astronomy Observatory (NRAO), reduced data from the observatory's single-dish telescopes: the Tucson 12-m, the Green Bank 140-ft, and archived data from the Green Bank 300-ft. The primary reduction programs, 'line' (for spectral-line reduction) and 'condar' (for continuum reduction), used the People-Oriented Parsing Service (POPS) as the command line interpreter. UniPOPS unified previous analysis packages and provided new capabilities; development of UniPOPS continued within the NRAO until 2004 when the 12-m was turned over to the Arizona Radio Observatory (ARO). The submitted code is version 3.5 from 2004, the last supported by the NRAO.

  8. Comparisons between vs30 and spectral response for 30 sites in Newcastle, Australia from collocated seismic cone penetrometer, active- and passive-source vs data

    USGS Publications Warehouse

    Volti, Theodora; Burbidge, David; Collins, Clive; Asten, Michael W.; Odum, Jackson K.; Stephenson, William J.; Pascal, Chris; Holzschuh, Josef

    2016-01-01

    Although the time-averaged shear-wave velocity down to 30 m depth (VS30) can be a proxy for estimating earthquake ground-motion amplification, significant controversy exists about its limitations when used as a single parameter for the prediction of amplification. To examine this question in the absence of relevant strong-motion records, we use a range of different methods to measure the shear-wave velocity profiles and the resulting theoretical site amplification factors (AFs) for 30 sites in the Newcastle area, Australia, in a series of blind comparison studies. The multimethod approach used here combines past seismic cone penetrometer and spectral analysis of surface-wave data with newly acquired horizontal-to-vertical spectral ratio, passive-source surface-wave spatial autocorrelation (SPAC), refraction microtremor (ReMi), and multichannel analysis of surface-wave data. The various measurement techniques predicted a range of different AFs. The SPAC and ReMi techniques have the smallest overall deviation from the median AF for the majority of sites. We show that VS30 can be related to spectral response above a period T of 0.5 s, but not necessarily to the maximum amplification according to the modeling based on the measured shear-wave velocity profiles. Both VS30 and AF values are influenced by the velocity ratio between bedrock and overlying sediments and by the presence of surficial thin low-velocity layers (<2 m thick and <150 m/s), but the velocity ratio is what mostly affects the AF. Only at 0.2-0.5 s do the amplification curves consistently show higher values for soft site classes and lower values for hard classes.
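
    The VS30 parameter discussed above has a standard definition: the harmonic (travel-time) average of shear-wave velocity over the top 30 m, VS30 = 30 / Σ(h_i / v_i). A minimal sketch, with a hypothetical two-layer profile (the layer values are illustrative, not from the paper):

    ```python
    def vs30(layers):
        """Time-averaged shear-wave velocity over the top 30 m.

        layers: list of (thickness_m, vs_m_per_s) from the surface down;
        the profile must total at least 30 m. VS30 = 30 / sum(h_i / v_i)."""
        travel_time, depth = 0.0, 0.0
        for h, v in layers:
            use = min(h, 30.0 - depth)      # only count material above 30 m
            travel_time += use / v
            depth += use
            if depth >= 30.0:
                break
        if depth < 30.0:
            raise ValueError("profile shallower than 30 m")
        return 30.0 / travel_time

    # Hypothetical profile: 5 m of soft sediment over stiffer material.
    # 30 / (5/150 + 25/600) = 30 / 0.075 = 400 m/s
    v = vs30([(5.0, 150.0), (25.0, 600.0)])
    ```

    The harmonic averaging is why a thin, very slow surface layer can depress VS30 strongly, one reason it correlates imperfectly with the modeled AF.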

  9. Divertor electron temperature and impurity diffusion measurements with a spectrally resolved imaging radiometer.

    PubMed

    Clayton, D J; Jaworski, M A; Kumar, D; Stutman, D; Finkenthal, M; Tritz, K

    2012-10-01

    A divertor imaging radiometer (DIR) diagnostic is being studied to measure spatially and spectrally resolved radiated power P_rad(λ) in the tokamak divertor. A dual transmission grating design, with extreme ultraviolet (~20-200 Å) and vacuum ultraviolet (~200-2000 Å) gratings placed side by side, can produce coarse spectral resolution over a broad wavelength range covering emission from impurities over a wide temperature range. The DIR can thus be used to evaluate the separate P_rad contributions from different ion species and charge states. Additionally, synthetic spectra from divertor simulations can be fit to P_rad(λ) measurements, providing a powerful code-validation tool that can also be used to estimate divertor electron temperature and impurity transport.

  10. A satellite mobile communication system based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA)

    NASA Technical Reports Server (NTRS)

    Degaudenzi, R.; Elia, C.; Viola, R.

    1990-01-01

    Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip synchronized to reduce the contribution of self-interference and makes use of voice activation in order to increase the satellite power efficiency. To achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed-loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors is analyzed.
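
    The "Nyquist chip pulse shaping with no detection impairment" claim rests on the Nyquist zero-ISI criterion: the pulse is exactly zero at all nonzero integer multiples of the chip period. A standard raised-cosine pulse (a common Nyquist pulse; the paper's actual pulse parameters are not given here) illustrates the property:

    ```python
    import math

    def raised_cosine(t, T=1.0, beta=0.25):
        """Raised-cosine Nyquist pulse with symbol (chip) period T and
        roll-off factor beta; satisfies g(kT) = 0 for all integers k != 0."""
        if t == 0.0:
            return 1.0
        x = t / T
        denom = 1.0 - (2.0 * beta * x) ** 2
        if abs(denom) < 1e-12:
            # t = +-T/(2*beta): take the analytic limit (pi/4) * sinc(t/T)
            return (math.pi / 4.0) * math.sin(math.pi * x) / (math.pi * x)
        return (math.sin(math.pi * x) / (math.pi * x)) \
            * math.cos(math.pi * beta * x) / denom
    ```

    Because the pulse vanishes at every other chip instant, matched-filter samples taken at the chip rate see no inter-chip interference, which is why band-limiting the chips costs nothing in detection performance.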

  11. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for x-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g., electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low-gain regime).

  12. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e., it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
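
    The core Monte Carlo step in a Lucy-type scheme is drawing the optical depth to a packet's next interaction as τ = -ln(U) and converting it to a physical path length. The sketch below shows only that sampling step in a uniform medium (it is a generic illustration with made-up opacity values, not SPAMCART's particle-based implementation):

    ```python
    import math
    import random

    def sample_free_paths(kappa, rho, n, seed=1):
        """Draw Monte Carlo free-path lengths for luminosity packets in a
        uniform medium: tau = -ln(U) is the optical depth to the next event,
        and the physical path is s = tau / (kappa * rho)."""
        rng = random.Random(seed)
        # 1 - U lies in (0, 1], so the log is always defined.
        return [-math.log(1.0 - rng.random()) / (kappa * rho) for _ in range(n)]

    # With kappa*rho = 0.1 per unit length, the mean free path should
    # converge to 1/(kappa*rho) = 10.
    paths = sample_free_paths(kappa=10.0, rho=0.01, n=200_000)
    mean_path = sum(paths) / len(paths)
    ```

    In the gridless SPH setting described above, the density along the trajectory would come from summing particle kernels rather than from a constant kappa*rho.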

  13. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
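
    The ML phase estimate at the heart of such code-aided schemes is the angle of the correlation between the received samples and the (soft) symbol decisions fed back by the decoder. A simplified sketch, using perfect BPSK decisions in place of MAP decoder soft information and made-up noise levels:

    ```python
    import cmath
    import random

    def ml_phase_estimate(received, soft_symbols):
        """ML carrier-phase estimate given symbol decisions from the decoder:
        theta_hat = arg( sum_k r_k * conj(a_hat_k) )."""
        return cmath.phase(sum(r * a.conjugate()
                               for r, a in zip(received, soft_symbols)))

    # Toy example: BPSK symbols rotated by an unknown 0.3 rad carrier phase.
    rng = random.Random(7)
    symbols = [complex(rng.choice((-1, 1)), 0) for _ in range(2000)]
    theta = 0.3
    received = [s * cmath.exp(1j * theta)
                + complex(rng.gauss(0, 0.1), rng.gauss(0, 0.1))
                for s in symbols]
    theta_hat = ml_phase_estimate(received, symbols)
    ```

    In the iterative receiver described above, the decoder's soft symbol expectations replace the known symbols here, and estimation and decoding alternate until convergence, which is how the pilot sequence is avoided.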

  14. Adaptive Precoded MIMO for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Nabilla, A. F.; Tiong, T. C.

    2015-04-01

    Long-Term Evolution (LTE) and Long-Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates in high spectrum bandwidth and high spectral efficiencies. Technically, precoding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-plus-noise ratio (SINR) across the multiple receiver terminals. However, fixed precoding is not reliable enough to fully utilize the information transfer rate under varying channel conditions and bandwidth sizes. Thus, adaptive precoding is proposed. It applies precoding matrix indicator (PMI) channel-state feedback, making it possible to change the precoding codebook entry accordingly, thus achieving a higher data rate than fixed precoding.
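
    The PMI feedback decision reduces to picking, from a finite codebook, the precoding vector that maximizes a link-quality metric. A toy sketch for a single-stream, 2-antenna case, using received power |h^H w|^2 as the metric (the codebook below is illustrative and the channel is made up; this is not the full LTE codebook machinery):

    ```python
    def select_pmi(h, codebook):
        """Return the codebook index whose precoding vector maximizes the
        received power |h^H w|^2 -- a stand-in for the PMI decision."""
        def gain(w):
            return abs(sum(hc.conjugate() * wc for hc, wc in zip(h, w))) ** 2
        return max(range(len(codebook)), key=lambda i: gain(codebook[i]))

    # Hypothetical 2-antenna rank-1 codebook of unit-norm vectors.
    s = 2 ** -0.5
    codebook = [(s, s), (s, -s), (s, 1j * s), (s, -1j * s)]
    h = (1.0, 1j)          # channel that aligns with codebook entry 2
    pmi = select_pmi(h, codebook)
    ```

    The receiver feeds back only the index `pmi`, and the transmitter switches its precoder accordingly; this index update per reporting interval is the "adaptive" part of the scheme.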

  15. X-ray diffraction patterns and diffracted intensity of Kα spectral lines of He-like ions

    NASA Astrophysics Data System (ADS)

    Goyal, Arun; Khatri, Indu; Singh, A. K.; Sharma, Rinku; Mohan, Man

    2017-09-01

    In the present paper, we have calculated fine-structure energy levels of the configurations 1s2s, 1s2p, 1s3s and 1s3p by employing the GRASP2K code. We have also computed radiative data for transitions from 1s2p 1P1o, 1s2p 3P2o, 1s2p 3P1o and 1s2s 3S1 to the ground state 1s2. We have compared our presented energy levels and transition wavelengths with the available results compiled by NIST, and good agreement is achieved. We have also provided X-ray diffraction (XRD) patterns of the Kα spectral lines, namely w, x, y and z, of Cu XXVIII, Kr XXXV and Mo as functions of diffraction angle, together with the maximum diffracted intensity, results which are not published elsewhere in the literature. We believe that our presented results may be beneficial in the determination of the order parameter and in X-ray crystallography, solid-state drug analysis, forensic science, and geological and medical applications.

  16. An integrated toolbox for processing and analysis of remote sensing data of inland and coastal waters - atmospheric correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, J.F. de; Kokke, J.M.M.; Hoogenboom, H.J.

    1997-06-01

    Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air-water interface correction, and application of water quality algorithms. A prototype version of an integrated software environment has recently been developed that enables the user to perform and control these processing steps. Major parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code, (ii) a database of water quality algorithms, and (iii) a spectral library of Dutch coastal and inland waters, containing subsurface irradiance reflectance spectra and associated water quality parameters. The atmospheric correction part of this environment is discussed here. It is shown that this part can be used to accurately retrieve spectral signatures of inland water for wavelengths between 450 and 750 nm, provided in situ measurements are used to determine atmospheric model parameters. Assessment of the usefulness of the completely integrated software system in an operational environment requires a revised version that is presently being developed.

  17. Development of an analytical-numerical model to predict radiant emission or absorption

    NASA Technical Reports Server (NTRS)

    Wallace, Tim L.

    1994-01-01

    The development of an analytical-numerical model to predict radiant emission or absorption is discussed. A Voigt profile is assumed to predict the spectral qualities of a singlet atomic transition line for atomic species of interest to the OPAD program. The present state of this model is described in each progress report required under contract. Model and code development is guided by experimental data where available. When completed, the model will be used to provide estimates of species erosion rates from spectral data collected from rocket exhaust plumes or other sources.
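
    The Voigt profile assumed above is the convolution of a Gaussian (Doppler broadening, standard deviation sigma) with a Lorentzian (pressure/natural broadening, half-width gamma). A simple stand-in is direct numerical convolution by the trapezoidal rule (the OPAD model's actual implementation is not specified in the record; production codes typically use the Faddeeva function instead):

    ```python
    import math

    def voigt(x, sigma, gamma, half_width=40.0, n=4000):
        """Voigt profile V(x) = integral G(t) L(x - t) dt, evaluated by the
        trapezoidal rule over t in [-half_width, half_width]."""
        def gauss(t):
            return math.exp(-t * t / (2.0 * sigma * sigma)) \
                / (sigma * math.sqrt(2.0 * math.pi))
        def lorentz(t):
            return gamma / (math.pi * (t * t + gamma * gamma))
        h = 2.0 * half_width / n
        total = 0.0
        for i in range(n + 1):
            t = -half_width + i * h
            w = 0.5 if i in (0, n) else 1.0
            total += w * gauss(t) * lorentz(x - t)
        return total * h
    ```

    For sigma = gamma = 1 the peak value can be cross-checked against the known closed form V(0) = e^(1/2) erfc(1/sqrt(2)) / (sigma * sqrt(2*pi)) ≈ 0.2087.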

  18. Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.

    2002-01-01

    Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared here to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed have charged pions in the initial state and both charged and neutral pions in the final state.
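
    The "numerically integrated to form total cross sections" step is, structurally, a quadrature of the spectral cross section dσ/dE over energy. A sketch with a deliberately toy exponential spectrum (not Ranft's parameterization), normalized so the analytic total is 10 mb:

    ```python
    import math

    def total_cross_section(dsigma_dE, e_min, e_max, n=2000):
        """Composite trapezoid-rule integration of a spectral cross section
        dsigma/dE over [e_min, e_max], giving the total cross section."""
        h = (e_max - e_min) / n
        total = 0.5 * (dsigma_dE(e_min) + dsigma_dE(e_max))
        for i in range(1, n):
            total += dsigma_dE(e_min + i * h)
        return total * h

    # Toy spectrum: exponential falloff with 0.2 GeV slope, scaled so that
    # the analytic integral over [0, inf) is 10 (mb).
    toy = lambda e: (10.0 / 0.2) * math.exp(-e / 0.2)
    sigma = total_cross_section(toy, 0.0, 5.0)
    ```

    Fitting a closed-form expression to many such integrals over the relevant energy grid yields the transport-code-ready total-cross-section formulas the abstract describes.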

  19. Comparison of fundamental natural period of masonry and reinforced concrete buildings retrieved from experimental campaigns performed in Italy, Greece and Spain

    NASA Astrophysics Data System (ADS)

    Nigro, Antonella; Ponzo, Felice C.; Ditommaso, Rocco; Auletta, Gianluca; Iacovino, Chiara; Nigro, Domenico S.; Soupios, Pantelis; García-Fernández, Mariano; Jimenez, Maria-Jose

    2017-04-01

    The aim of this study is the experimental estimation of the dynamic characteristics of existing buildings and the comparison of the related fundamental natural periods of masonry and reinforced concrete buildings located in Basilicata (Italy), Madrid (Spain) and Crete (Greece). Several experimental campaigns, on different kinds of structures all over the world, have been performed in recent years with the aim of proposing simplified relationships to evaluate the fundamental period of buildings. Most formulas retrieved from experimental analyses provide vibration periods smaller than those suggested by the Italian Seismic Code (NTC2008) and the European Seismic Code (EC8). It is known that the fundamental period of a structure plays a key role in the correct estimation of the spectral acceleration for seismic static analyses and in detecting possible resonance phenomena with the foundation soil. Usually, simplified approaches dictate the use of safety factors greater than those related to in-depth dynamic linear and nonlinear analyses, with the aim of covering any unexpected uncertainties. The fundamental period calculated with the simplified formula given by both NTC2008 and EC8 is higher than the fundamental period measured on the investigated structures in Italy, Spain and Greece. The consequence is that the spectral acceleration adopted in seismic static analysis may differ significantly from the real spectral acceleration, which could produce a decrease in the safety factors obtained using linear seismic static analyses. Based on numerical and experimental results, the authors suggest increasing the number of numerical and experimental tests, considering also the effects of non-structural components and of the soil during small, medium and strong-motion earthquakes.
Acknowledgements This study was partially funded by the Italian Department of Civil Protection within the project DPC-RELUIS 2016 - RS4 ''Seismic observatory of structures and health monitoring'' and by the "Centre of Integrated Geomorphology for the Mediterranean Area - CGIAM" within the Framework Agreement with the University of Basilicata "Study, Research and Experimentation in the Field of Analysis and Monitoring of Seismic Vulnerability of Strategic and Relevant Buildings for the purposes of Civil Protection and Development of Innovative Strategies of Seismic Reinforcement".
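
    The simplified code formula being compared against is the EC8 (and NTC) height-based estimate T1 = Ct * H^(3/4), valid for buildings up to 40 m, with Ct depending on the structural system (0.085 for steel moment frames, 0.075 for RC moment frames, 0.050 for all other structures, including masonry). A minimal sketch:

    ```python
    def ec8_fundamental_period(height_m, structure="other"):
        """Simplified EC8/NTC fundamental-period estimate T1 = Ct * H^(3/4)
        for buildings up to 40 m tall; Ct depends on the structural system."""
        ct = {"steel_frame": 0.085, "rc_frame": 0.075, "other": 0.050}[structure]
        return ct * height_m ** 0.75

    # e.g. a 12 m masonry building vs. a 12 m RC frame (heights are examples)
    t_masonry = ec8_fundamental_period(12.0)            # ~0.32 s
    t_rc = ec8_fundamental_period(12.0, "rc_frame")     # ~0.48 s
    ```

    Because measured periods tend to fall below these estimates, the spectral acceleration read from the design spectrum at T1 can differ from the one the real structure experiences, which is the discrepancy the study quantifies.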

  20. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
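
    The first stage of the hybrid method, classical least squares with an appended spectral shape, can be sketched as follows. This is a minimal pure-Python illustration with made-up spectra: two calibrated component shapes plus one added shape (e.g. a drift artifact absent from the original calibration); the patent's subsequent inverse-multivariate stage is not shown.

    ```python
    def solve(a, b):
        """Gaussian elimination with partial pivoting for a small dense system."""
        n = len(a)
        m = [row[:] + [b[i]] for i, row in enumerate(a)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c]
                                  for c in range(r + 1, n))) / m[r][r]
        return x

    def cls_fit(shapes, spectrum):
        """Classical least squares: amounts = (S^T S)^-1 S^T y, where the
        component spectral shapes form the columns of S."""
        n, npts = len(shapes), len(spectrum)
        sts = [[sum(shapes[i][k] * shapes[j][k] for k in range(npts))
                for j in range(n)] for i in range(n)]
        sty = [sum(shapes[i][k] * spectrum[k] for k in range(npts))
               for i in range(n)]
        return solve(sts, sty)

    # Made-up shapes: two calibrated components plus an added constant "drift".
    s1 = [1.0, 2.0, 3.0, 2.0, 1.0]
    s2 = [0.0, 1.0, 2.0, 3.0, 4.0]
    drift = [1.0, 1.0, 1.0, 1.0, 1.0]
    y = [0.5 * a + 1.5 * b + 0.2 * d for a, b, d in zip(s1, s2, drift)]
    amounts = cls_fit([s1, s2, drift], y)
    ```

    Appending the uncalibrated shape as an extra column is what lets the later estimation step absorb temperature drift or instrument shifts without re-running the original calibration.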

  1. Development of low level 226Ra analysis for live fish using gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Chandani, Z.; Prestwich, W. V.; Byun, S. H.

    2017-06-01

    A low-level 226Ra analysis method for live fish was developed using a 4π NaI(Tl) gamma-ray spectrometer. To identify the algorithm achieving the lowest detection limit, the gamma-ray spectrum from a 226Ra point source was collected and nine different methods of spectral analysis were attempted. The lowest detection limit, 0.99 Bq for a one-hour counting time, occurred when the spectrum was integrated over the energy region of 50-2520 keV. To extend the 226Ra analysis to live fish, a Monte Carlo simulation model with a cylindrical fish in a water container was built using the MCNP code. From the simulation results, the spatial distribution of the efficiency and the efficiency correction factor for the live-fish model were determined. The MCNP model can be conveniently modified when a different fish or container geometry is employed as the fish grow during real experiments.
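
    Detection limits of this kind are commonly computed with Currie's formula, L_D = 2.71 + 4.65*sqrt(B) counts at the 95%/95% level, converted to activity through the counting efficiency and live time. A sketch with hypothetical numbers (the record does not state which of the nine methods or what background the 0.99 Bq figure corresponds to):

    ```python
    import math

    def currie_detection_limit(background_counts, efficiency, live_time_s):
        """Currie detection limit (95% false-positive / false-negative):
        L_D = 2.71 + 4.65 * sqrt(B) counts, converted to activity in Bq
        via the absolute counting efficiency and live time."""
        ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
        return ld_counts / (efficiency * live_time_s)

    # Hypothetical values: a 1 h count, 3600 background counts in the
    # integration window, 25% net efficiency for the 226Ra decay chain.
    mdl = currie_detection_limit(3600.0, 0.25, 3600.0)   # ~0.31 Bq
    ```

    The sqrt(B) dependence explains why widening the integration window helps only while the added signal grows faster than the added background, which is presumably why 50-2520 keV was optimal among the nine methods tried.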

  2. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
    Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that, taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  3. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic-gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
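
    For reference, the survivor-path recursion that any such decoder architecture must implement can be shown on the simplest case: a rate-1/2, constraint-length-3 binary convolutional code (generators 7, 5 octal) with hard-decision Hamming metrics. This is a textbook 4-state binary example, not the TCM decoder the report designs:

    ```python
    def conv_encode(bits, gens=(0b111, 0b101)):
        """Rate-1/2, constraint-length-3 convolutional encoder."""
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                  # [input, prev, prev-prev]
            for g in gens:
                out.append(bin(reg & g).count("1") & 1)
            state = reg >> 1
        return out

    def viterbi_decode(received, nbits, gens=(0b111, 0b101)):
        """Hard-decision Viterbi decoding: keep, per trellis state, the
        minimum-Hamming-metric survivor path."""
        INF = float("inf")
        metrics = [0.0] + [INF] * 3                 # start in state 0
        paths = [[] for _ in range(4)]
        for k in range(nbits):
            r = received[2 * k:2 * k + 2]
            new_metrics, new_paths = [INF] * 4, [None] * 4
            for state in range(4):
                if metrics[state] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << 2) | state
                    out = [bin(reg & g).count("1") & 1 for g in gens]
                    m = metrics[state] + sum(x != y for x, y in zip(out, r))
                    nxt = reg >> 1
                    if m < new_metrics[nxt]:
                        new_metrics[nxt] = m
                        new_paths[nxt] = paths[state] + [b]
            metrics, paths = new_metrics, new_paths
        best = min(range(4), key=lambda s: metrics[s])
        return paths[best]
    ```

    A TCM decoder replaces the Hamming branch metrics with Euclidean distances between received points and subset constellation points, but the add-compare-select structure, which dominates the hardware critical path discussed in the report, is the same.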

  4. PREFACE: Stellar Atmospheres in the Gaia Era - Preface

    NASA Astrophysics Data System (ADS)

    Lobel, Alex; De Greve, Jean-Pierre; Van Rensbergen, Walter

    2011-12-01

    Volume 328 (2011) of the Journal of Physics: Conference Series provides a record of the invited and contributed talks, and of the posters presented at the GREAT-ESF workshop entitled `Stellar Atmospheres in the Gaia Era: Quantitative Spectroscopy and Comparative Spectrum Modelling' (http://great-esf.oma.be and mirrored at http://spectri.freeshell.org/great-esf). The conference was held on 23-24 June 2011 at the Vrije Universiteit Brussel, Belgium. 47 scientists from 11 countries around the world attended the workshop. The ESA-Gaia satellite (launch mid 2013) will observe a billion stellar objects in the Galaxy and provide spectrophotometric and high-resolution spectra of an unprecedented number of stars observed with a space-based instrument. The confrontation of these data with theoretical models will significantly advance our understanding of the physics of stellar atmospheres. New stellar populations such as previously unknown emission line stars will be discovered, and fundamental questions such as the basic scenarios of stellar evolution will be addressed with Gaia data. The 33 presentations and 4 main discussion sessions at the workshop addressed important topics in spectrum synthesis methods and detailed line profile calculations urgently needed for accurate modelling of stellar spectra. It brought together leading scientists and students of the stellar physics communities investigating hot and cool star spectra. The scientific programme of the workshop consisted of 23 oral (6 invited) and 10 poster presentations about cool stars (first day; Comparative Spectrum Modelling and Quantitative Spectroscopy of Cool Stars), and hot stars (second day; Quantitative Spectroscopy of Hot Stars). The hot and cool stars communities use different spectrum modelling codes for determining basic parameters such as the effective temperature, surface gravity, iron abundance, and the chemical composition of stellar atmospheres. 
    The chaired sessions of the first day highlighted new research results with spectral synthesis codes developed for cool stars, while the second day focused on codes applied for modeling the spectra of hot stars. The workshop addressed five major topics in stellar atmospheres research: spectrum synthesis codes; radiation hydrodynamics codes; atmospheric parameters, abundance, metallicity, and chemical tagging studies; large spectroscopic surveys; and new atomic databases. The workshop presentations discussed various important scientific issues by comparing detailed model spectra to identify differences that can influence and bias the resulting atmospheric parameters. Theoretical line-blanketed model spectra were compared in detail to high-resolution spectroscopic observations. Stellar spectra computed (e.g., in the Gaia Radial Velocity Spectrometer wavelength range) with 1-D model atmosphere structures were compared with one another, and also with 3-D models from advanced radiation hydrodynamics codes. Atmospheric parameters derived from spectrum synthesis calculations assuming Local Thermodynamic Equilibrium (LTE) were evaluated against more sophisticated non-LTE models of metal-poor stars and the extended atmospheres of giants and supergiants. The workshop presented an overview of high-resolution synthetic spectral libraries of model spectra computed with the synthesis codes. The spectral model grids will be utilized to derive stellar parameters with the Discrete Source Classifier algorithms currently under development in the Gaia DPAC consortium (http://www.rssd.esa.int/index.php?project=GAIA&page=DPAC_Introduction). They are implemented for training Gaia data analysis algorithms for the classification of a wide variety of hot and cool star types: FGK and M stars, OB stars, white dwarfs, red supergiants, peculiar A and B stars, carbon stars, ultra-cool dwarfs, various types of emission line stars, Be stars, Wolf-Rayet stars, etc.
    A substantial number of oral and poster presentations discussed different techniques for measuring the abundances of various chemical elements from stellar spectra. The presented methods utilize spectra observed at large spectral dispersion, for example for accurately measuring iron, carbon, and nitrogen abundances. These methods are important for the ongoing development and testing of automated and supervised algorithms for determining detailed chemical composition in tagging studies of the large (chemo-dynamical) spectroscopic surveys planned to complement the Gaia (astrometric and kinematic) census of the Galaxy. The complete scientific programme is available on the workshop website, which also offers the presentation viewgraphs (in PDF format) and photographs of the talks and poster breaks: http://great-esf.oma.be/program.php.

  5. Telemetry Standards, RCC Standard 106-17, Chapter 4, Pulse Code Modulation Standards

    DTIC Science & Technology

    2017-07-01

    [Extract from the chapter's table of contents] Frame Structure ... 4-6; 4.3.3 Cyclic Redundancy Check (Class...); Spectral and BEP Comparisons for NRZ and Bi-phase ... A-3; A.4. PCM Frame Structure Examples ... 4-4; Figure 4-3. PCM Frame Structure ... 4-6.

  6. Context of Carbonate Rocks in Heavily Eroded Martian Terrain

    NASA Image and Video Library

    2008-12-18

    The color coding on this CRISM composite image of an area on Mars is based on infrared spectral information interpreted as evidence of various minerals present. Carbonate, which is indicative of a wet and non-acidic history, occurs in very small patches.

  7. Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps

    NASA Astrophysics Data System (ADS)

    Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.

    2013-06-01

    Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to the WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters with respect to those evaluated by the full WMAP likelihood public package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
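
    An exact pixel-based likelihood of this kind evaluates the multivariate Gaussian ln L = -0.5 * (d^T C^-1 d + ln det C + N ln 2π) over the concatenated T, Q, U pixel vector with its full noise-plus-signal covariance. A generic sketch (not the WMAP code), using a Cholesky factorization so the inverse is never formed explicitly:

    ```python
    import math

    def cholesky(c):
        """Lower-triangular Cholesky factor of a symmetric positive-definite
        matrix (lists of lists)."""
        n = len(c)
        l = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = c[i][j] - sum(l[i][k] * l[j][k] for k in range(j))
                l[i][j] = math.sqrt(s) if i == j else s / l[j][j]
        return l

    def gaussian_loglike(data, cov):
        """ln L = -0.5 * (d^T C^-1 d + ln det C + N ln 2pi), with the
        quadratic form obtained by forward substitution on L y = d."""
        n = len(data)
        l = cholesky(cov)
        y = [0.0] * n
        for i in range(n):
            y[i] = (data[i] - sum(l[i][k] * y[k] for k in range(i))) / l[i][i]
        chi2 = sum(v * v for v in y)                  # = d^T C^-1 d
        logdet = 2.0 * sum(math.log(l[i][i]) for i in range(n))
        return -0.5 * (chi2 + logdet + n * math.log(2.0 * math.pi))
    ```

    Treating T, Q, U as one joint data vector with a single covariance matrix is what "handles intensity and polarization exactly and jointly" amounts to, at the cost of dense linear algebra that restricts the method to low-resolution maps.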

  8. Composite hot subdwarf binaries - I. The spectroscopically confirmed sdB sample

    NASA Astrophysics Data System (ADS)

    Vos, Joris; Németh, Péter; Vučković, Maja; Østensen, Roy; Parsons, Steven

    2018-01-01

    Hot subdwarf-B (sdB) stars in long-period binaries are found to be on eccentric orbits, even though current binary-evolution theory predicts that these objects are circularized before the onset of Roche lobe overflow (RLOF). To increase our understanding of binary interaction processes during the RLOF phase, we started a long-term observing campaign to study wide sdB binaries. In this paper, we present a sample of composite binary sdBs, and the results of the spectral analysis of nine such systems. The grid search in stellar parameters (GSSP) code is used to derive atmospheric parameters for the cool companions. To cross-check our results and also to characterize the hot subdwarfs, we used the independent XTGRID code, which employs TLUSTY non-local thermodynamic equilibrium models to derive atmospheric parameters for the sdB component and PHOENIX synthetic spectra for the cool companions. The independent GSSP and XTGRID codes are found to show good agreement for three test systems that have atmospheric parameters available in the literature. Based on the rotational velocity of the companions, we make an estimate for the mass accreted during the RLOF phase and the minimum duration of that phase. We find that the mass transfer to the companion is minimal during the subdwarf formation.

  9. Exoplanet Atmospheres: From Light-Curve Analyses to Radiative-Transfer Modeling

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Rojo, Patricio; Stemm, Madison; Lust, Nathaniel B.; Foster, Andrew S.; Loredo, Thomas J.

    2015-01-01

    Multi-wavelength transit and secondary-eclipse light-curve observations are some of the most powerful techniques to probe the thermo-chemical properties of exoplanets. Although the small planet-to-star contrast ratios demand a meticulous data analysis, and the limited available spectral bands can further restrict the constraints, a Bayesian approach can robustly reveal what constraints can be set, given the data. We review the main aspects considered during the analysis of Spitzer time-series data by our group, with an application to WASP-8b and TrES-1. We discuss the applicability and limitations of the most commonly used correlated-noise estimators. We describe our open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART calculates the planetary emission or transmission spectrum by solving a 1D line-by-line radiative-transfer equation. The generated spectra are integrated over the relevant bandpasses for comparison to the data. Coupled to our Multi-core Markov-chain Monte Carlo (MC3) statistical package, BART constrains the temperature profile and chemical abundances in the planet's atmosphere. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
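
    The bandpass-integration step, comparing a model spectrum to a broadband measurement, is a transmission-weighted average over the filter curve. A minimal sketch with a made-up linear spectrum and a flat (top-hat) bandpass; BART's actual filter handling may differ:

    ```python
    def band_average(wavelengths, flux, transmission):
        """Band-integrate a model spectrum over a filter transmission curve:
        <F> = int F(l) T(l) dl / int T(l) dl (trapezoidal rule, shared grid)."""
        def trapz(y):
            return sum(0.5 * (y[i] + y[i + 1])
                       * (wavelengths[i + 1] - wavelengths[i])
                       for i in range(len(wavelengths) - 1))
        return trapz([f * t for f, t in zip(flux, transmission)]) \
            / trapz(transmission)

    # Toy example: a linear spectrum under a flat bandpass averages to the
    # flux at the band's central wavelength.
    wl = [float(w) for w in range(4000, 4101, 10)]
    flux = [2.0 * w / 4000.0 for w in wl]
    band = band_average(wl, flux, [1.0] * len(wl))
    ```

    The sampler then compares such band-averaged model values, one per photometric channel, against the measured eclipse depths when evaluating each proposed atmosphere.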

  10. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.
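The one-way coupling described above can be sketched as a prescribed pressure history driving an implicitly integrated structural mode; this is a single-degree-of-freedom caricature with made-up parameters, whereas Nek5000/DIABLO exchange full 3-D surface-pressure fields:

```python
import numpy as np

# One-way fluid->structure coupling sketch: a sampled pressure load drives a
# single tube mode, integrated with an implicit (backward-Euler) time march.
# Displacements are NOT fed back into the "fluid" signal, mirroring the
# one-way coupling above. All numbers are illustrative, not experimental.
m, c, k = 1.0, 0.4, 400.0               # modal mass, damping, stiffness
dt, n = 1e-3, 4000
t = np.arange(n) * dt
p = 0.5 * np.sin(2 * np.pi * 3.0 * t)   # stand-in for sampled pressure data

x, v = 0.0, 0.0
xs = np.empty(n)
for i in range(n):
    # Backward Euler: m*(v'-v)/dt = p - c*v' - k*x', with x' = x + dt*v'
    v = (m * v + dt * (p[i] - k * x)) / (m + c * dt + k * dt ** 2)
    x = x + dt * v
    xs[i] = x
print(xs.max())  # forced response amplitude near the 20 rad/s natural frequency
```

The implicit update is unconditionally stable, which is why implicit time-marching is the standard choice for the structural side of such couplings.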

  11. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE PAGES

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia; ...

    2017-08-01

This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  12. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  13. Verification of unfold error estimates in the unfold operator code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
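The verification strategy above can be sketched for a simple linear least-squares unfold (the UFO algorithm itself is more elaborate): compare the error-matrix uncertainty with a Monte Carlo estimate from 100 Gaussian-perturbed data sets. All response functions and the 5-bin source are illustrative:

```python
import numpy as np

# Error-matrix vs Monte Carlo unfold uncertainties for a linear unfold.
rng = np.random.default_rng(3)
n_ch, n_bins = 12, 5
E = np.linspace(1.0, 30.0, 200)

# Overlapping Gaussian response functions, binned onto a coarse source grid
centers = np.linspace(3.0, 27.0, n_ch)
R = np.array([np.exp(-0.5 * ((E - c) / 6.0) ** 2) for c in centers])
A = R.reshape(n_ch, n_bins, -1).sum(axis=2)   # channel x source-bin matrix

f_true = np.linspace(1.0, 2.0, n_bins)        # arbitrary test source
d0 = A @ f_true
sigma = 0.05 * d0                             # 5% imprecision, as in the paper

# "Built-in" estimate: error matrix of the weighted least-squares unfold
W = np.diag(1.0 / sigma ** 2)
cov = np.linalg.inv(A.T @ W @ A)
err_matrix = np.sqrt(np.diag(cov))

# Monte Carlo estimate: unfold 100 randomly perturbed data sets
unfolds = []
for _ in range(100):
    d = d0 + rng.normal(0.0, sigma)
    unfolds.append(np.linalg.solve(A.T @ W @ A, A.T @ W @ d))
err_mc = np.std(unfolds, axis=0)
print(err_matrix, err_mc)  # the two estimates agree within the MC scatter
```

With 100 samples the Monte Carlo standard deviations themselves carry roughly 7% statistical scatter, which is the "statistical resolution of this relatively small sample size" mentioned in the abstract.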

  14. Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.

    PubMed

    Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven

    2014-11-01

Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that are not possible to filter in time and frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the B-mode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed by using linear frequency modulated chirp excitation with varying bandwidths of 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results were presented with and without applying the FChT filtering technique by comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved a better resolution as expected; however, range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without affecting the axial resolution.
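The pulse-compression step underlying this technique can be sketched with a linear frequency-modulated chirp and its matched filter. The numbers (2 MHz centre, 50% fractional bandwidth) are illustrative, and the sketch compresses the fundamental, whereas the paper's harmonic matched filters act on the generated harmonics:

```python
import numpy as np

# LFM chirp generation and matched-filter pulse compression.
fs = 40e6                                  # illustrative sampling rate
f0, frac_bw, T = 2e6, 0.5, 20e-6           # centre freq, fractional BW, duration
bw = frac_bw * f0
t = np.arange(0.0, T, 1.0 / fs)
chirp = np.sin(2 * np.pi * ((f0 - bw / 2) * t + (bw / (2 * T)) * t ** 2))

matched = chirp[::-1]                      # matched filter = time-reversed copy
compressed = np.convolve(chirp, matched)
compressed /= np.abs(compressed).max()

# The compressed pulse is far shorter than the transmitted chirp:
main_lobe = np.sum(np.abs(compressed) > 0.5) / fs
print(T, main_lobe)                        # main lobe width is of order 1/bw
```

The range side lobes of `compressed` are exactly the artifacts the abstract discusses: when harmonics overlap spectrally, their cross-terms raise these side lobes, and the fan chirp transform provides the extra domain in which to separate them.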

  15. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
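The spectral-angle quality control can be sketched as follows: a crude quantizer stands in for the full KLT + H.264/AVC encoder, and the encoding "rate" is refined until a user-specified spectral angle is met (all parameters are illustrative):

```python
import numpy as np

# Spectral angle between an original and a reconstructed pixel spectrum,
# used as the quality metric driving a coarse multi-pass loop.
def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # radians

rng = np.random.default_rng(7)
pixel = rng.uniform(0.1, 1.0, 224)              # e.g. 224 AVIRIS bands

max_angle = 0.01                                 # user-specified quality level
step = 0.5
while True:
    recon = np.round(pixel / step) * step        # "encode" at current step ...
    if spectral_angle(pixel, recon) <= max_angle:
        break                                    # ... until quality is met
    step /= 2.0                                  # otherwise spend more bits
print(step, spectral_angle(pixel, recon))
```

In the real encoder the refinement knob is the quantization parameter of the video codec, and the loop is run per pass over the whole cube, trading bits per pixel per band against the spectral-angle constraint.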

  16. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. The coalescence instability is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that, with the statically refined grids roughly scaling linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
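The accuracy gap between spectral and finite-difference discretizations can be seen in miniature by differentiating a smooth periodic function both ways on the same grid (a standard textbook comparison, not taken from the paper):

```python
import numpy as np

# Spectral (FFT) derivative vs 2nd-order centred finite differences.
n = 64
x = 2 * np.pi * np.arange(n) / n
f = np.exp(np.sin(x))                            # smooth periodic test function
exact = np.cos(x) * f

k = np.fft.fftfreq(n, d=1.0 / n) * 1j            # spectral wavenumbers i*k
spec = np.real(np.fft.ifft(k * np.fft.fft(f)))   # pseudo-spectral derivative
fd = (np.roll(f, -1) - np.roll(f, 1)) * n / (4 * np.pi)  # centred differences

err_spec = np.abs(spec - exact).max()
err_fd = np.abs(fd - exact).max()
print(err_spec, err_fd)   # spectral error is near machine precision
```

For smooth solutions the spectral error decays faster than any power of the grid spacing ("spectral accuracy"), while the finite-difference error decays only as the square of the spacing, which is the behavior the comparison in the abstract quantifies on refined MHD grids.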

  17. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images and answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g., AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  18. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543
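The three figures of merit quoted above follow directly from a voxel-level confusion matrix; a sketch with illustrative counts (chosen only to reproduce rates of that magnitude, not taken from the paper's data):

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts.
tp, fn = 924, 76     # cancerous voxels: correctly / incorrectly classified
tn, fp = 919, 81     # healthy voxels: correctly / incorrectly classified

sensitivity = tp / (tp + fn)            # fraction of cancerous voxels found
specificity = tn / (tn + fp)            # fraction of healthy voxels cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, accuracy)   # 0.924, 0.919, 0.9215
```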

  19. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.

  20. Theoretical Study of Radiation from a Broad Range of Impurity Ions for Magnetic Fusion Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safronova, Alla

Spectroscopy of radiation emitted by impurities plays an important role in the study of magnetically confined fusion plasmas. The measurements of these impurities are crucial for the control of the general machine conditions, for the monitoring of the impurity levels, and for the detection of various possible fault conditions. Low-Z impurities, typically present in concentrations of 1%, are lithium, beryllium, boron, carbon, and oxygen. Some of the common medium-Z impurities are metals such as iron, nickel, and copper, and high-Z impurities, such as tungsten, are present in smaller concentrations of 0.1% or less. Despite the relatively small concentration numbers, the aforementioned impurities can make a substantial contribution to radiated power, and also influence both plasma conditions and instruments. A detailed theoretical study of line radiation from impurities that covers a very broad spectral range from less than 1 Å to more than 1000 Å has been accomplished, and the results were applied to the LLNL Electron Beam Ion Trap (EBIT), the Sustained Spheromak Physics Experiment (SSPX), and the National Spherical Torus Experiment (NSTX) at Princeton. Though low- and medium-Z impurities were also studied, the main emphasis was on a comprehensive theoretical study of radiation from tungsten using different state-of-the-art atomic structure codes such as Relativistic Many-Body Perturbation Theory (RMBPT). An important component of this research was a comparison of the results from the RMBPT code with other codes, such as the Multiconfigurational Hartree–Fock code developed by Cowan (COWAN) and the Multiconfiguration Relativistic Hebrew University Lawrence Atomic Code (HULLAC), and an estimation of the accuracy of the calculations. We have also studied dielectronic recombination, an important recombination process for fusion plasmas, for a variety of highly and weakly charged tungsten ions using the COWAN and HULLAC codes. Accurate DR rate coefficients are needed for describing the ionization balance of plasmas, which in turn determines the lines contributing to the spectral emission and the radiative power loss. In particular, we have calculated relativistic atomic data and corresponding dielectronic satellite spectra of highly ionized W ions, such as Li-like W (with the shortest x-ray wavelength of about 0.2 Å), that might exist in ITER core plasmas at very high temperatures of 30-40 keV. In addition, we have completed relativistic calculations of low-ionized W ions from Lu-like (W3+) to Er-like (W6+) and for Sm-like (W12+) and Pm-like (W13+) that cover a spectral range from a few hundred to a thousand Å and are more relevant to edge plasma diagnostics in tokamaks.

  1. Robust Joint Graph Sparse Coding for Unsupervised Spectral Feature Selection.

    PubMed

    Zhu, Xiaofeng; Li, Xuelong; Zhang, Shichao; Ju, Chunhua; Wu, Xindong

    2017-06-01

In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of the training data with previous dictionary learning methods and then map the original data into the basis space to generate new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate the objective function by simultaneously taking subspace learning and joint sparse regression into account, then design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) by replacing the least-squares loss function with a robust loss function, achieving the same goals while avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of k-nearest neighbor classification performance.
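The sparse-coding machinery such models build on can be sketched with a minimal ISTA (iterative soft-thresholding) solver for the basic L1 problem; JGSC adds a graph regularizer and joint (L2,1) regression on top of this kind of subproblem, neither of which is shown here:

```python
import numpy as np

# ISTA for min_s 0.5*||x - D s||^2 + lam*||s||_1 with a random dictionary.
rng = np.random.default_rng(5)
D = rng.normal(size=(30, 60))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
s_true = np.zeros(60)
s_true[[3, 17, 42]] = [1.5, -2.0, 1.0]          # a 3-sparse code
x = D @ s_true                                  # noiseless observation

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of grad
s = np.zeros(60)
for _ in range(500):
    g = s - (D.T @ (D @ s - x)) / L             # gradient step
    s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print(np.flatnonzero(np.abs(s) > 0.1))          # recovered support indices
```

The soft-threshold step is the proximal operator of the L1 norm; structured extensions such as joint sparsity replace it with row-wise (L2,1) shrinkage.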

  2. Spatially-Dependent Modelling of Pulsar Wind Nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-03-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multi-zone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially-dependent B-field, spatially-dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.
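One ingredient of such a transport model can be shown in isolation: evolving an injected electron spectrum under synchrotron-like losses with a conservative upwind scheme in energy. The full model above adds convection, diffusion, adiabatic losses, and a spatial grid; units here are arbitrary and purely illustrative:

```python
import numpy as np

# Energy-loss transport dN/dt = d/dE(b*E^2*N) + Q for dE/dt = -b*E^2.
nE = 200
E = np.logspace(-2, 2, nE)
dE = np.diff(E)
Q = E ** -2.0                     # steady power-law injection spectrum
b = 1.0                           # loss coefficient in dE/dt = -b*E^2
N = np.zeros(nE)

t, dt = 0.0, 1e-4                 # dt chosen below the CFL limit at E_max
while t < 1.0:
    F = b * E ** 2 * N            # magnitude of the downward flux in energy
    dN = np.zeros(nE)
    dN[:-1] += (F[1:] - F[:-1]) / dE   # upwind: particles flow to lower E
    dN[-1] -= F[-1] / dE[-1]           # pure outflow at the top boundary
    N += dt * (dN + Q)
    t += dt

# Below the cooling break E_b ~ 1/(b*t) the injected E^-2 slope survives;
# above it the spectrum steepens toward the cooled E^-3 slope.
lo = np.searchsorted(E, [0.02, 0.1])
hi = np.searchsorted(E, [5.0, 20.0])
print(np.log(N[lo[1]] / N[lo[0]]) / np.log(E[lo[1]] / E[lo[0]]),
      np.log(N[hi[1]] / N[hi[0]]) / np.log(E[hi[1]] / E[hi[0]]))
```

The cooled and uncooled slopes on either side of the break are what, once convolved with synchrotron and inverse-Compton kernels, shape the spectral energy distribution that the model fits.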

  3. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.

  4. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  5. Magnet system optimization for segmented adaptive-gap in-vacuum undulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitegi, C., E-mail: ckitegi@bnl.gov; Chubar, O.; Eng, C.

    2016-07-27

The Segmented Adaptive Gap in-vacuum Undulator (SAGU), in which different segments have different gaps and periods, promises a considerable spectral performance gain over a conventional undulator with uniform gap and period. According to calculations, this gain can be comparable to the gain achievable with a superior undulator technology (e.g., a room-temperature in-vacuum hybrid SAGU would perform as a cryo-cooled hybrid in-vacuum undulator with uniform gap and period). However, to reach this high spectral performance, the SAGU magnetic design has to include compensation of the kicks experienced by the electron beam at segment junctions because of the different deflection parameter values in the segments. We show that such compensation can to a large extent be accomplished by using a passive correction, although simple correction coils are nevertheless required to reach perfect compensation over the whole SAGU tuning range. Magnetic optimizations performed with the Radia code, and the resulting undulator radiation spectra calculated using the SRW code, demonstrate the possibility of nearly perfect correction.

  6. Dust emission in simulated dwarf galaxies using GRASIL-3D

    NASA Astrophysics Data System (ADS)

    Santos-Santos, I. M.; Domínguez-Tenreiro, R.; Granato, G. L.; Brook, C. B.; Obreja, A.

    2017-03-01

    Recent Herschel observations of dwarf galaxies have shown a wide diversity in the shapes of their IR-submm spectral energy distributions as compared to more massive galaxies, presenting features that cannot be explained with the current models. In order to understand the physics driving these differences, we have computed the emission of a sample of simulated dwarf galaxies using the radiative transfer code GRASIL-3D. This code separately treats the radiative transfer in dust grains from molecular clouds and cirri. The simulated galaxies have masses ranging from 10^6-10^9 M_⊙ and have evolved within a Local Group environment by using CLUES initial conditions. We show that their IR band luminosities are in agreement with observations, with their SEDs reproducing naturally the particular spectral features observed. We conclude that the GRASIL-3D two-component model gives a physical interpretation to the emission of dwarf galaxies, with molecular clouds (cirri) as the warm (cold) dust components needed to recover observational data.

  7. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

Detection of anomalous targets of various sizes in hyperspectral data has received a lot of attention in reconnaissance and surveillance applications. Many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies in the processing window range and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomaly pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is totally unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.
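The collaborative-representation idea can be sketched for a single pixel: regress the pixel on a local background dictionary with an L2 penalty and use the reconstruction residual as the anomaly score. This is the classical collaborative-representation baseline, not the paper's full pyramid framework, and all data here are synthetic:

```python
import numpy as np

# Collaborative-representation anomaly score: large residual means the pixel
# is poorly represented by its local background, hence likely anomalous.
rng = np.random.default_rng(11)
bands, n_bg = 50, 40
background = rng.normal(0.0, 1.0, (bands, 1)) @ np.ones((1, n_bg)) \
             + 0.1 * rng.normal(size=(bands, n_bg))   # correlated background

def crd_score(pixel, D, lam=1e-2):
    # alpha = argmin ||pixel - D a||^2 + lam*||a||^2  (ridge regression)
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ pixel)
    return np.linalg.norm(pixel - D @ alpha)

normal_pixel = background[:, 0] + 0.05 * rng.normal(size=bands)
anomaly_pixel = rng.normal(0.0, 1.0, bands)           # spectrally distinct

print(crd_score(normal_pixel, background),
      crd_score(anomaly_pixel, background))
```

The pyramid grid windows in the paper vary the size and placement of the background dictionary `D`, which is what makes the detector robust to anomalies of different sizes.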

  8. Spatially dependent modelling of pulsar wind nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-07-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multizone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially dependent B-field, spatially dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  9. 3D-MHD Simulations of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Forest, C. B.; Wright, J. C.; O'Connell, R.

    2003-10-01

Growth, saturation, and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations are used to predict the behavior of the experiment. The code solves the self-consistent full evolution of the magnetic and velocity fields. It uses a spectral representation via spherical harmonic basis functions of the vector fields in longitude and latitude, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James [Proc. R. Soc. Lond. A 425, 407-429 (1989)]. Initial results indicate that the magnetic field saturates when the backreaction of the induced field modifies the velocity field such that it is no longer linearly unstable, suggesting that non-linear terms are necessary to explain the resulting state. Saturation and self-excitation depend in detail upon the magnetic Prandtl number.

  10. Numerical modelling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.; Truitt, J. L.

    2000-10-01

Growth, saturation, and turbulent evolution of the Madison dynamo experiment are investigated numerically using a newly developed 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to experimental measurements. The code, Dynamo, is written in Fortran90 and allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the Navier-Stokes equation governing V are solved. The code uses a spectral representation of the vector fields via spherical harmonic basis functions in longitude and latitude, and finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James (Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407 (1989)). Initial results on magnetic field saturation, generated by the simultaneous evolution of the magnetic and velocity fields, will be presented using a variety of mechanical forcing terms.

  11. Time-resolved hard x-ray spectrometer

    NASA Astrophysics Data System (ADS)

    Moy, Kenneth; Cuneo, Michael; McKenna, Ian; Keenan, Thomas; Sanford, Thomas; Mock, Ray

    2006-08-01

Wire-array studies are being conducted at the SNL Z accelerator to maximize the x-ray generation for inertial confinement fusion targets and high energy density physics experiments. An integral component of these studies is the characterization of the time-resolved spectral content of the x-rays. Due to potential spatial anisotropy in the emitted radiation, it is also critical to diagnose the time-evolved spectral content in a space-resolved manner. To accomplish these two measurement goals, we developed an x-ray spectrometer using a set of high-speed detectors (silicon PIN diodes) with a collimated field-of-view that converged on a 1-cm-diameter spot at the pinch axis. Spectral discrimination is achieved by placing high-Z absorbers in front of these detectors. We built two spectrometers to permit simultaneous different angular views of the emitted radiation. Spectral data have been acquired from recent Z shots for the radial and axial (polar) views. UNSPEC 1 has been adapted to analyze and unfold the measured data to reconstruct the x-ray spectrum. The unfold operator code, UFO2, is being adapted for a more comprehensive spectral unfolding treatment.
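
The absorber-based spectral discrimination described above rests on Beer-Lambert attenuation: each channel's filter weights the source spectrum differently, so the set of channel signals constrains the spectrum. A minimal sketch with invented attenuation and spectrum shapes (not Z-machine values):

```python
import numpy as np

# Hypothetical illustration of filter-based spectral discrimination:
# each detector channel views the source through a different high-Z
# absorber thickness, with Beer-Lambert transmission exp(-mu(E) * t).
E = np.linspace(10.0, 300.0, 200)          # photon energy grid, keV
mu = 5.0 * (50.0 / E) ** 3                 # toy attenuation coefficient, 1/cm
spectrum = E * np.exp(-E / 80.0)           # toy source spectrum
dE = E[1] - E[0]

thicknesses = [0.01, 0.05, 0.2]            # absorber thicknesses, cm
signals = [float(np.sum(spectrum * np.exp(-mu * t)) * dE) for t in thicknesses]

# Thicker filters preferentially pass harder photons, so channel signals
# fall monotonically with thickness while their ratios encode the spectrum.
assert signals[0] > signals[1] > signals[2]
```

Recovering the spectrum from such channel signals is the "unfolding" problem handled by codes like UNSPEC and UFO2.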

  12. Spectral definition of the ArTeMiS instrument

    NASA Astrophysics Data System (ADS)

    Haynes, Vic; Maffei, Bruno; Pisano, Giampaolo; Dubreuil, Didier; Delisle, Cyrille; Le Pennec, Jean; Hurtado, Norma

    2014-07-01

ArTeMiS is a sub-millimetre camera to be operated on the Atacama Pathfinder Experiment Telescope (APEX). The ultimate goal is to observe simultaneously in three atmospheric spectral windows in the region of 200, 350 and 450 microns. We present the filtering scheme adopted for this instrument, which includes the cryostat window, thermal rejection elements, band separation and spectral isolation. This was achieved using a combination of scattering, Yoshinaga filters, organic dyes and Ulrich-type embedded metallic mesh devices. The design of the quasi-optical mesh components has been developed by modelling with an in-house code. For the band-separating dichroics, which are used at an incidence angle of 35 deg, further modelling has been performed with HFSS (Ansoft). Spectral characterization of the components for the 350 and 450 micron bands has been performed with a Martin-Puplett polarizing Fourier transform spectrometer. While only one spectral band (350 microns) was operational for the first commissioning and observation campaign, we report on the design of all three bands at 200, 350 and 450 microns.

  13. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai

    2017-01-27

Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. The separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420~720 nm wavelength range on both macro and micro objects, showing its potential as an efficient multispectral imager for macroscopic and biomedical applications.
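
The single-pixel principle behind such schemes can be sketched without any compressive-sensing machinery: the scene is modulated by a sequence of known patterns and the detector records one number per pattern. The toy below uses Hadamard patterns and a fully sampled linear recovery (the actual paper reconstructs from fewer measurements via CS solvers; scene sizes and channel count here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Toy single-pixel measurement: a 16-pixel scene per spectral channel,
# modulated by +/-1 Hadamard patterns; the detector records one inner
# product per pattern.  With a full set of patterns the scene is
# recovered exactly, since H.T @ H = n * I.
n = 16
H = hadamard(n)
scenes = rng.random((3, n))            # 3 hypothetical spectral channels
measurements = scenes @ H.T            # y_k = <pattern_k, scene>
recovered = measurements @ H / n

assert np.allclose(recovered, scenes)
```

In the paper's setup the spatial multiplexing means all spectral channels share one SLM frame, so the measurement count per channel, not the modulation principle, is what CS reduces.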

  14. Wall-resolved spectral cascade-transport turbulence model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Shaver, D. R.; Lahey, R. T.

A spectral cascade-transport model has been developed and applied to turbulent channel flows (Reτ= 550, 950, and 2000 based on friction velocity, uτ ; or ReδΜ= 8,500; 14,800 and 31,000, based on the mean velocity and channel half-width). This model is an extension of a spectral model previously developed for homogeneous single and two-phase decay of isotropic turbulence and uniform shear flows; and a spectral turbulence model for wall-bounded flows without resolving the boundary layer. Data from direct numerical simulation (DNS) of turbulent channel flow were used to help develop this model and to assess its performance in the 1D direction across the channel width. The resultant spectral model is capable of predicting the mean velocity, turbulent kinetic energy and energy spectrum distributions for single-phase wall-bounded flows all the way to the wall, where the model source terms have been developed to account for the wall influence. We implemented the model into the 3D multiphase CFD code NPHASE-CMFD and the latest results are within reasonable error of the 1D predictions.

  15. Effects of deep basins on structural collapse during large subduction earthquakes

    USGS Publications Warehouse

    Marafi, Nasser A.; Eberhard, Marc O.; Berman, Jeffrey W.; Wirth, Erin A.; Frankel, Arthur

    2017-01-01

Deep sedimentary basins are known to increase the intensity of ground motions, but this effect is only implicitly considered in the seismic hazard maps used in U.S. building codes. The basin amplification of ground motions from subduction earthquakes is particularly important in the Pacific Northwest, where the hazard at long periods is dominated by such earthquakes. This paper evaluates the effects of basins on spectral accelerations, ground-motion duration, spectral shape, and structural collapse using subduction earthquake recordings from basins in Japan that have similar depths as the Puget Lowland basin. For three of the Japanese basins and the Puget Lowland basin, the spectral accelerations were amplified by a factor of 2 to 4 for periods above 2.0 s. Combined, the long duration of subduction earthquakes and the effects of basins on spectral shape lower the spectral accelerations at collapse for a set of building archetypes relative to other ground motions. For the hypothetical case in which these motions represent the entire hazard, the archetypes would need up to 3.3 times their strength to compensate for these effects.

  16. Wall-resolved spectral cascade-transport turbulence model

    DOE PAGES

    Brown, C. S.; Shaver, D. R.; Lahey, R. T.; ...

    2017-07-08

A spectral cascade-transport model has been developed and applied to turbulent channel flows (Reτ= 550, 950, and 2000 based on friction velocity, uτ ; or ReδΜ= 8,500; 14,800 and 31,000, based on the mean velocity and channel half-width). This model is an extension of a spectral model previously developed for homogeneous single and two-phase decay of isotropic turbulence and uniform shear flows; and a spectral turbulence model for wall-bounded flows without resolving the boundary layer. Data from direct numerical simulation (DNS) of turbulent channel flow were used to help develop this model and to assess its performance in the 1D direction across the channel width. The resultant spectral model is capable of predicting the mean velocity, turbulent kinetic energy and energy spectrum distributions for single-phase wall-bounded flows all the way to the wall, where the model source terms have been developed to account for the wall influence. We implemented the model into the 3D multiphase CFD code NPHASE-CMFD and the latest results are within reasonable error of the 1D predictions.

  17. Spectroscopic database for warm Xenon and Iron in Astrophysics and Laboratory Astrophysics conditions

    NASA Astrophysics Data System (ADS)

    Busquet, Michel; Klapisch, Marcel; Bar-Shalom, Avi; Oreg, Josse

    2010-11-01

    The main contribution to the spectral properties of astrophysical mixtures often comes from Iron. On the other hand, in the so-called domain of ``Laboratory Astrophysics,'' where astrophysical phenomena are scaled down to the laboratory, Xenon (and Argon) are commonly used gases. At so-called ``warm'' temperatures (T=5-50eV), L-shell Iron and M-shell Xenon present a very large number of spectral lines, originating from billions of levels. More often than not, Local Thermodynamic Equilibrium is assumed, leading to a noticeable simplification of the computation. Nevertheless, complex and powerful atomic structure codes are required. We take advantage of the powerful statistics and numerics included in our atomic structure codes, STA[1] and HULLAC[2], to generate the required spectra. Recent improvements in both fields (statistics, numerics and convergence control) allow obtaining large databases (ρ x T grids of > 200x200 points, and > 10000 frequencies) for temperatures down to a few eV. We plan to port these improvements to the NLTE code SCROLL[3]. [1] A. Bar-Shalom, et al., Phys. Rev. A 40, 3183 (1989) [2] M. Busquet, et al., J. Phys. IV France 133, 973-975 (2006); A. Bar-Shalom, M. Klapisch, J. Oreg, JQSRT 71, 169 (2001) [3] A. Bar-Shalom, et al., Phys. Rev. E 56, R70 (1997)

  18. Probabilistic seismic hazard zonation for the Cuban building code update

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Llanes-Buron, C.

    2013-05-01

    A probabilistic seismic hazard assessment has been performed in support of a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been done according to the standard probabilistic approach (Cornell, 1968), adopting the procedures followed by other nations in revising and updating their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, and seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the seismic hazard estimation process. The seismic zonation proposed here consists of a map of spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions with a 1642-year return period, considered the maximum credible earthquake (ASCE 07-05). In addition, three other design levels are proposed (severe earthquake: 808-year return period; ordinary earthquake: 475-year return period; and minimum earthquake: 225-year return period). The proposed seismic zonation complies with international standards (IBC-ICC) as well as with current worldwide practice in this field.
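
The quoted return periods map onto exceedance probabilities through the standard Poisson relation p = 1 - exp(-t/T). Assuming the usual 50-year exposure convention, the 1642-, 808- and 475-year levels correspond to roughly 3%, 6% and 10% probability of exceedance in 50 years:

```python
import math

def return_period(p_exceed, exposure_years):
    """Return period T for exceedance probability p over an exposure
    time t, assuming a Poisson occurrence model: p = 1 - exp(-t/T)."""
    return -exposure_years / math.log(1.0 - p_exceed)

# Design levels quoted in the abstract, under a 50-year exposure:
assert round(return_period(0.10, 50)) == 475    # ordinary earthquake
assert round(return_period(0.06, 50)) == 808    # severe earthquake
assert round(return_period(0.03, 50)) == 1642   # maximum credible earthquake
```
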

  19. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach

    PubMed Central

    Gontier, Félix; Lagrange, Mathieu; Can, Arnaud; Lavandier, Catherine

    2017-01-01

    The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low-cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on a third-octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at a state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using, for example, the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens. PMID:29186021
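
The data reduction at the heart of such a scheme is easy to sketch: collapse an FFT power spectrum into third-octave band energies (base-10 band centers, f_c = 1000 * 10^(n/10) Hz). This is a generic aggregation, not the paper's exact encoder:

```python
import numpy as np

# Minimal sketch: aggregate an FFT power spectrum into third-octave
# bands (base-10 convention, centers f_c = 1000 * 10**(n/10) Hz).
fs = 32000
x = np.random.default_rng(1).standard_normal(fs)   # 1 s of toy audio
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, 1.0 / fs)

bands = []
for n in range(-16, 14):                 # 30 nominal bands, ~25 Hz and up
    fc = 1000.0 * 10.0 ** (n / 10.0)
    lo, hi = fc * 10 ** (-1 / 20), fc * 10 ** (1 / 20)
    bands.append(spectrum[(freqs >= lo) & (freqs < hi)].sum())

# 30 band energies per frame instead of fs/2+1 spectral bins: a large
# reduction in transmitted data, as in the proposed coding scheme.
assert len(bands) == 30 and all(b >= 0 for b in bands)
```

Transmitting only these band energies (per short frame) is also what makes the reverted time-domain signal unintelligible, since the fine spectral structure of speech is discarded.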

  20. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyperspectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, adapted to better address the constraints of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
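
The predictive-coding idea underlying such compressors can be shown in miniature. The sketch below is NOT the standard's actual predictor (which adaptively combines neighboring samples across bands); it simply encodes each sample as the residual from a trivial causal prediction, which concentrates values near zero so a lossless entropy coder can shrink them:

```python
import numpy as np

rng = np.random.default_rng(6)

# Generic predictive lossless coding sketch: residual = sample minus a
# causal prediction (here, simply the previous sample).
band = np.cumsum(rng.integers(-2, 3, size=1000))   # smooth toy signal

residual = np.empty_like(band)
residual[0] = band[0]
residual[1:] = band[1:] - band[:-1]                # prediction error

reconstructed = np.cumsum(residual)                # decoder inverts exactly
assert np.array_equal(reconstructed, band)         # lossless round trip
assert residual[1:].std() < band.std()             # residuals are compressible
```
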

  1. Curatr: a web application for creating, curating and sharing a mass spectral library.

    PubMed

    Palmer, Andrew; Phapale, Prasad; Fay, Dominik; Alexandrov, Theodore

    2018-04-15

    We have developed a web application, curatr, for the rapid generation of high-quality mass spectral fragmentation libraries from liquid-chromatography mass spectrometry datasets. Curatr handles datasets from single or multiplexed standards and extracts chromatographic profiles and potential fragmentation spectra for multiple adducts. An intuitive interface helps users to select high-quality spectra that are stored along with searchable molecular information, the provenance of each standard, and experimental metadata. Curatr supports exports to several standard formats for use with third-party software or submission to repositories. We demonstrate the use of curatr to generate the EMBL Metabolomics Core Facility spectral library http://curatr.mcf.embl.de. Source code and example data are at http://github.com/alexandrovteam/curatr/. Contact: palmer@embl.de. Supplementary data are available at Bioinformatics online.

  2. Apparatus and system for multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2003-06-24

    An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S.sup.T, by performing a constrained alternating least-squares analysis of D=CS.sup.T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
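
The factorization step described above can be sketched numerically. The snippet below is a minimal projected-ALS variant that imposes non-negativity by clipping after each unconstrained least-squares solve; the patented method's exact constraint handling and weighting may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of the alternating least-squares factorization step,
# D ~= C @ S.T, with non-negativity imposed by clipping after each
# unconstrained solve (a simple projected-ALS variant).
m, n, k = 40, 30, 3
D = rng.random((m, k)) @ rng.random((k, n))    # exactly factorable test data

C = rng.random((m, k))
S = rng.random((n, k))
res0 = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
assert residual < res0                         # fit improves over random init
assert (C >= 0).all() and (S >= 0).all()       # constraints hold
```

On this exactly factorable synthetic matrix the relative residual typically falls well below 1e-3; in the patent's setting D is the weighted spectral data matrix, and C and S are subsequently unweighted before interpretation.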

  3. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
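
A toy illustration of the hybrid idea, assuming a classical least-squares (CLS) calibration and a hypothetical interferent shape (a linear baseline drift, invented here) that was absent at calibration and is appended at prediction time:

```python
import numpy as np

# Toy CLS model: pure-component spectra K are calibrated first; a
# "spectral shape" column for an uncalibrated interferent is appended
# at prediction time so it no longer biases the concentration estimates.
wl = np.linspace(0.0, 1.0, 100)
K = np.stack([np.exp(-((wl - mu) / 0.08) ** 2) for mu in (0.3, 0.6)], axis=1)
drift = wl                                   # invented interferent shape

c_true = np.array([0.7, 0.2])
measured = K @ c_true + 0.5 * drift          # sample spectrum with drift

naive = np.linalg.lstsq(K, measured, rcond=None)[0]
hybrid = np.linalg.lstsq(np.column_stack([K, drift]), measured, rcond=None)[0][:2]

assert np.abs(hybrid - c_true).max() < 1e-8                     # unbiased
assert np.abs(naive - c_true).max() > np.abs(hybrid - c_true).max()
```

As the abstract notes, the added shape need not be a chemical component; temperature drift or inter-spectrometer shifts can be handled the same way.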

  4. Analysis of spectrally resolved autofluorescence images by support vector machines

    NASA Astrophysics Data System (ADS)

    Mateasik, A.; Chorvat, D.; Chorvatova, A.

    2013-02-01

    Spectral analysis of autofluorescence images of isolated cardiac cells was performed to evaluate and classify the metabolic state of the cells with respect to their responses to metabolic modulators. The classification was done using a machine learning approach based on a support vector machine with a set of features calculated automatically from the recorded spectral profiles of the autofluorescence images. This classification method was compared with the classical approach, in which the individual spectral components contributing to cell autofluorescence were estimated by spectral analysis, namely by blind source separation using non-negative matrix factorization. Comparison of both methods showed that machine learning can effectively classify the spectrally resolved autofluorescence images without the need for detailed knowledge about the sources of autofluorescence and their spectral properties.
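
The classical route mentioned above, blind source separation by non-negative matrix factorization, can be sketched with the standard Lee-Seung multiplicative updates. This is generic NMF on invented data, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Blind source separation of spectral image data by NMF, V ~= W @ H,
# with W the per-pixel abundances and H the source spectra.
pixels, channels, sources = 200, 16, 2
V = rng.random((pixels, sources)) @ rng.random((sources, channels))

W = rng.random((pixels, sources)) + 0.1
H = rng.random((sources, channels)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update spectral profiles
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update abundance maps

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert err < 0.2 and (W >= 0).all() and (H >= 0).all()
```

The paper's point is that the SVM route skips this decomposition entirely: classification features are computed directly from the spectral profiles, with no model of the underlying fluorophores.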

  5. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  6. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  7. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  8. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
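
The shot-by-shot idea can be illustrated with a deliberately crude kernel. The sketch below uses a Kramers-type thick-target shape, dN/dE ∝ (eV(t) - E), weighted by the beam current and summed over invented V(t) and I(t) traces; this shows the structure of such a model, not the paper's actual bremsstrahlung physics:

```python
import numpy as np

# Illustrative only: Kramers-type thick-target kernel driven by toy
# V(t), I(t) traces and summed over the pulse.
t = np.linspace(0.0, 60e-9, 300)                     # 60 ns pulse
V = 2.0e6 * np.exp(-(((t - 30e-9) / 12e-9) ** 2))    # voltage trace, volts
I = 60e3 * np.exp(-(((t - 32e-9) / 14e-9) ** 2))     # current trace, amps

E = np.linspace(10e3, 2.5e6, 500)                    # photon energy, eV
spectrum = np.zeros_like(E)
for Vk, Ik in zip(V, I):
    spectrum += Ik * np.clip(Vk - E, 0.0, None)      # Kramers kernel x current

assert spectrum[E > V.max()].sum() == 0.0   # no photons above the peak voltage
assert spectrum[0] == spectrum.max()        # spectrum falls with photon energy
```

The appeal of this class of model is exactly what the abstract describes: V(t) and I(t) are routine measurements, so the spectrum estimate updates for every shot with negligible computation.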

  9. Silicon Drift Detector response function for PIXE spectra fitting

    NASA Astrophysics Data System (ADS)

    Calzolai, G.; Tapinassi, S.; Chiari, M.; Giannoni, M.; Nava, S.; Pazzi, G.; Lucarelli, F.

    2018-02-01

    The correct determination of the X-ray peak areas in PIXE spectra by fitting with a computer program depends crucially on accurate parameterization of the detector peak response function. In the Guelph PIXE software package, GUPIXWin, one of the most widely used PIXE spectrum analysis codes, the response of a semiconductor detector to monochromatic X-ray radiation is described by a linear combination of several analytical functions: a Gaussian profile for the X-ray line itself, and additional tail contributions (exponential tails and step functions) on the low-energy side of the X-ray line to describe incomplete charge collection effects. The literature on the spectral response of silicon X-ray detectors for PIXE applications is rather scarce; in particular, data for Silicon Drift Detectors (SDDs) over a large range of X-ray energies are missing. Using a set of analytical functions, we satisfactorily reproduced the SDD response functions for the X-ray energy range 1-15 keV. The behaviour of the parameters involved in the SDD tailing functions with X-ray energy is described by simple polynomial functions, which permits easy implementation in PIXE spectrum fitting codes.
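
A minimal sketch of such a line shape: a Gaussian full-energy peak plus a low-energy exponentially modified tail and a flat step, both smoothed by the detector resolution. Parameter values below are hypothetical, not fitted SDD values:

```python
import math
import numpy as np

# GUPIX-style detector line shape: Gaussian peak + exponential tail +
# step, with the tail and step confined to the low-energy side.
def line_shape(E, E0=6400.0, sigma=80.0, tail_frac=0.08, tail_len=300.0,
               step_frac=0.005):
    x = (E - E0) / sigma
    gauss = np.exp(-0.5 * x ** 2)
    # exponentially modified tail, non-zero mainly below the peak
    tail = tail_frac * np.exp((E - E0) / tail_len) * np.array(
        [0.5 * math.erfc(v / math.sqrt(2) + sigma / (math.sqrt(2) * tail_len))
         for v in x])
    step = step_frac * np.array([0.5 * math.erfc(v / math.sqrt(2)) for v in x])
    return gauss + tail + step

E = np.linspace(4000.0, 8000.0, 400)   # energy axis in eV, around a 6.4 keV line
y = line_shape(E)

assert y[E < 6000.0].mean() > 1e-3          # tail/step raise the low-energy side
assert abs(E[np.argmax(y)] - 6400.0) < 100  # peak stays at the line energy
```

In a fitting code the tail and step fractions and the tail length would themselves be smooth (e.g., polynomial) functions of line energy, which is the parameterization the paper provides for SDDs.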

  10. Experimental characterization of an ultra-fast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuba, J; Slaughter, D R; Fittinghoff, D N

    We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time- and space-dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented, and shown to agree well with the simulations.

  11. Mapping of hydrothermally altered rocks using airborne multispectral scanner data, Marysvale, Utah, mining district

    USGS Publications Warehouse

    Podwysocki, M.H.; Segal, D.B.; Jones, O.D.

    1983-01-01

    Multispectral data covering an area near Marysvale, Utah, collected with the airborne National Aeronautics and Space Administration (NASA) 24-channel Bendix multispectral scanner, were analyzed to detect areas of hydrothermally altered, potentially mineralized rocks. Spectral bands were selected for analysis that approximate those of the Landsat 4 Thematic Mapper and which are diagnostic of the presence of hydrothermally derived products. Hydrothermally altered rocks, particularly volcanic rocks affected by solutions rich in sulfuric acid, are commonly characterized by concentrations of argillic minerals such as alunite and kaolinite. These minerals are important for identifying hydrothermally altered rocks in multispectral images because they have intense absorption bands centered near a wavelength of 2.2 μm. Unaltered volcanic rocks commonly do not contain these minerals and hence do not have the absorption bands. A color-composite image was constructed using the following spectral band ratios: 1.6 μm/2.2 μm, 1.6 μm/0.48 μm, and 0.67 μm/1.0 μm. The particular bands were chosen to emphasize the spectral contrasts that exist for argillic versus non-argillic rocks, limonitic versus nonlimonitic rocks, and rocks versus vegetation, respectively. The color-ratio composite successfully distinguished most types of altered rocks from unaltered rocks. Some previously unrecognized areas of hydrothermal alteration were mapped. The altered rocks included those having high alunite and/or kaolinite content, siliceous rocks containing some kaolinite, and ash-fall tuffs containing zeolitic minerals. The color-ratio-composite image allowed further division of these rocks into limonitic and nonlimonitic phases. The image did not allow separation of highly siliceous or hematitically altered rocks containing no clays or alunite from unaltered rocks. A color-coded density slice image of the 1.6 μm/2.2 μm band ratio allowed further discrimination among the altered units. Areas containing zeolites and some ash-fall tuffs containing montmorillonite were readily recognized on the color-coded density slice as having less intense 2.2-μm absorption than areas of highly altered rocks. The areas of most intense absorption, as depicted in the color-coded density slice, are dominated by highly altered rocks containing large amounts of alunite and kaolinite. These areas form an annulus, approximately 10 km in diameter, which surrounds a quartz monzonite intrusive body of Miocene age. The patterns of most intense alteration are interpreted as the remnants of paleohydrothermal convective cells set into motion during the emplacement of the central intrusive body. © 1983.
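
The band-ratio logic can be illustrated numerically. Reflectance values below are invented for two hypothetical pixels; the point is only that the 2.2-μm absorption of argillic minerals drives the 1.6/2.2 ratio up:

```python
import numpy as np

# Toy band-ratio composite.  Band order: 0.48, 0.67, 1.0, 1.6, 2.2 um.
altered = np.array([0.20, 0.30, 0.40, 0.55, 0.30])    # strong 2.2-um absorption
unaltered = np.array([0.20, 0.30, 0.40, 0.50, 0.48])  # no argillic absorption

def ratios(px):
    b048, b067, b10, b16, b22 = px
    return b16 / b22, b16 / b048, b067 / b10          # the three composite bands

r_alt, r_unalt = ratios(altered), ratios(unaltered)

# The 1.6/2.2 ratio separates argillic (altered) from unaltered rock;
# the other two ratios target limonite and vegetation contrasts.
assert r_alt[0] > 1.5 > r_unalt[0]
```
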

  12. A STUDY OF THE X-RAYED OUTFLOW OF APM 08279+5255 THROUGH PHOTOIONIZATION CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saez, Cristian; Chartas, George, E-mail: saez@astro.psu.edu, E-mail: chartasg@cofc.edu

    2011-08-20

    We present new results from our study of the X-rayed outflow of the z = 3.91 gravitationally lensed broad absorption line quasar APM 08279+5255. These results are based on spectral fits to all the long exposure observations of APM 08279+5255 using a new quasar-outflow model. This model is based on CLOUDY simulations of a near-relativistic quasar outflow. (CLOUDY is a photoionization code designed to simulate conditions in interstellar matter under a broad range of conditions; we have used version 08.00 of the code, last described by Ferland et al. (1998). The atomic database used by CLOUDY is described in Ferguson et al. (2001) and at http://www.pa.uky.edu/~verner/atom.html.) The main conclusions from our multi-epoch spectral re-analysis of Chandra, XMM-Newton, and Suzaku observations of APM 08279+5255 are the following. (1) In every observation, we confirm the presence of two strong features, one at rest-frame energies between 1-4 keV and the other between 7-18 keV. (2) We confirm that the low-energy absorption (1-4 keV rest frame) arises from a low-ionization absorber with log(N_H/cm^-2) ~ 23 and the high-energy absorption (7-18 keV rest frame) arises from highly ionized (3 ≲ log ξ ≲ 4, where ξ is the ionization parameter) iron in a near-relativistic outflowing wind. Assuming this interpretation, we find that the velocities of the outflow could reach ~0.7c. (3) We confirm a correlation between the maximum outflow velocity and the photon index, and find possible trends between the maximum outflow velocity and the X-ray luminosity, and between the total column density and the photon index. We performed calculations of the force multipliers of material illuminated by absorbed power laws and a Mathews-Ferland spectral energy distribution (SED). We found that variations of the X-ray and UV parts of the SEDs and the presence of a moderate absorbing shield will produce important changes in the strength of the radiative driving force. These results support the observed trend found between the outflow velocity and the X-ray photon index in APM 08279+5255. If this result is confirmed, it will imply that radiation pressure is an important mechanism in producing quasar outflows.

  13. Aerosol-Cloud-Radiation Interactions in Atmospheric Forecast Models

    DTIC Science & Technology

    2005-09-14

    results also suggest that neglect of spectral skewness and drizzle drops, as is typical in calculating k [e.g., Pontikis and Hicks, 1992; Martin, et...Intercomparison among different numerical codes, Bull. Amer. Meteor. Soc., 77, 261-278. Pontikis, C., and E. Hicks (1992), Contribution to the

  14. Signal-independent timescale analysis (SITA) and its application for neural coding during reaching and walking.

    PubMed

    Zacksenhouse, Miriam; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2014-01-01

    What are the relevant timescales of neural encoding in the brain? This question is commonly investigated with respect to well-defined stimuli or actions. However, neurons often encode multiple signals, including hidden or internal ones, which are not experimentally controlled and are thus excluded from such analyses. Here we consider all rate modulations as the signal, and define the rate-modulations signal-to-noise ratio (RM-SNR) as the ratio between the variance of the rate and the variance of the neuronal noise. As the bin-width increases, RM-SNR increases while the update rate decreases. This tradeoff is captured by the ratio of RM-SNR to bin-width, and its variations with the bin-width reveal the timescales of neural activity. Theoretical analysis and simulations elucidate how the interactions between the recovery properties of the unit and the spectral content of the encoded signals shape this ratio and determine the timescales of neural coding. The resulting signal-independent timescale analysis (SITA) is applied to investigate timescales of neural activity recorded from the motor cortex of monkeys during: (i) reaching experiments with a Brain-Machine Interface (BMI), and (ii) locomotion experiments at different speeds. Interestingly, the timescales during BMI experiments did not change significantly with the control mode or training. During locomotion, the analysis identified units whose timescale varied consistently with the experimentally controlled speed of walking, though the specific timescale also reflected the recovery properties of the unit. Thus, the proposed method, SITA, characterizes the timescales of neural encoding and how they are affected by the motor task, while accounting for all rate modulations.
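    The bin-width scan at the heart of SITA can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and it assumes Poisson-like spiking so that the noise variance per bin can be estimated by the mean count (Var = mean):

```python
import numpy as np

def sita_curve(spike_times, duration, bin_widths):
    """Estimate RM-SNR / bin-width for each candidate bin width.

    RM-SNR is the ratio of rate-modulation variance to noise variance;
    under the Poisson assumption the noise variance equals the mean count."""
    ratios = []
    for w in bin_widths:
        edges = np.arange(0.0, duration + w, w)
        counts, _ = np.histogram(spike_times, bins=edges)
        noise_var = counts.mean()                      # Poisson: Var = mean
        signal_var = max(counts.var() - noise_var, 0.0)
        ratios.append((signal_var / noise_var) / w if noise_var > 0 else 0.0)
    return np.array(ratios)

# Toy spike train with a ~1 Hz rate modulation (hypothetical unit)
rng = np.random.default_rng(0)
t = np.arange(0.0, 100.0, 0.001)
rate = 20.0 * (1.0 + 0.8 * np.sin(2.0 * np.pi * t))    # spikes/s
spikes = t[rng.random(t.size) < rate * 0.001]
curve = sita_curve(spikes, 100.0, [0.01, 0.05, 0.1, 0.5, 1.0])
print(curve)
```

    The bin width at which this ratio peaks indicates the dominant timescale of the rate modulations.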

  15. Nimbus-7 ERB Solar Analysis Tape (ESAT) user's guide

    NASA Technical Reports Server (NTRS)

    Major, Eugene; Hickey, John R.; Kyle, H. Lee; Alton, Bradley M.; Vallette, Brenda J.

    1988-01-01

    Seven years and five months of Nimbus-7 Earth Radiation Budget (ERB) solar data are available on a single ERB Solar Analysis Tape (ESAT). The period covered is November 16, 1978 through March 31, 1986. The Nimbus-7 satellite performs approximately 14 orbits per day and the ERB solar telescope observes the sun once per orbit as the satellite crosses the southern terminator. The solar data were carefully calibrated and screened. Orbital and daily mean values are given for the total solar irradiance plus other spectral intervals (10 solar channels in all). In addition, selected solar activity indicators are included on the ESAT. The ESAT User's Guide is an update of the previous ESAT User's Guide (NASA TM 86143) and includes more detailed information on the solar data calibration, screening procedures, updated solar data plots, and applications to solar variability. Details of the tape format, including source code to access ESAT, are included.

  16. Trophic classification of selected Colorado lakes

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.; Boland, D. H. P.

    1979-01-01

    Multispectral scanner data, acquired over several Colorado lakes using LANDSAT-1 and aircraft, were used in conjunction with contact-sensed water quality data to determine the feasibility of assessing lacustrine trophic levels. A trophic state index was developed using contact-sensed data for several trophic indicators. Relationships between the digitally processed multispectral scanner data, several trophic indicators, and the trophic index were examined using a supervised multispectral classification technique and regression techniques. Statistically significant correlations exist between spectral bands, several of the trophic indicators and the trophic state index. Color-coded photomaps were generated which depict the spectral aspects of trophic state.

  17. PSRPOPPy: an open-source package for pulsar population simulations

    NASA Astrophysics Data System (ADS)

    Bates, S. D.; Lorimer, D. R.; Rane, A.; Swiggum, J.

    2014-04-01

    We have produced a new software package for the simulation of pulsar populations, PSRPOPPY, based on the PSRPOP package. The codebase has been re-written in Python (save for some external libraries, which remain in their native Fortran), utilizing the object-oriented features of the language, and improving the modularity of the code. Pre-written scripts are provided for running the simulations in `standard' modes of operation, but the code is flexible enough to support the writing of personalised scripts. The modular structure also makes the addition of experimental features (such as new models for period or luminosity distributions) more straightforward than with the previous code. We also discuss potential additions to the modelling capabilities of the software. Finally, we demonstrate some potential applications of the code; first, using results of surveys at different observing frequencies, we find pulsar spectral indices are best fitted by a normal distribution with mean -1.4 and standard deviation 1.0. Secondly, we model pulsar spin evolution to calculate the best fit for a relationship between a pulsar's luminosity and spin parameters. We used the code to replicate the analysis of Faucher-Giguère & Kaspi, and have subsequently optimized their power-law dependence of radio luminosity, L, with period, P, and period derivative, Ṗ. We find that the underlying population is best described by L ∝ P-1.39±0.09 Ṗ0.48±0.04 and is very similar to that found for γ-ray pulsars by Perera et al. Using this relationship, we generate a model population and examine the age-luminosity relation for the entire pulsar population, which may be measurable after future large-scale surveys with the Square Kilometre Array.
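    The two fitted distributions quoted above can be combined into a toy population draw. The spectral-index parameters and the luminosity-law exponents come from the abstract; the normalisation L0 and the period/period-derivative distributions below are purely hypothetical placeholders, not values from PSRPOPPy:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Spectral indices: normal distribution, mean -1.4, std 1.0 (fit quoted above)
alpha = rng.normal(-1.4, 1.0, n)

# Spin parameters: hypothetical log-normal distributions, illustration only
P = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)       # period [s]
Pdot15 = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n)  # Pdot in 1e-15 s/s

# Luminosity law from the abstract: L proportional to P^-1.39 * Pdot^0.48
L0 = 0.18  # hypothetical normalisation
L = L0 * P**-1.39 * Pdot15**0.48

print(alpha.mean(), L.mean())
```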

  18. Quantitative subpixel spectral detection of targets in multispectral images. [terrestrial and planetary surfaces

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Adams, John B.; Smith, Milton O.

    1992-01-01

    The conditions that affect the spectral detection of target materials at the subpixel scale are examined. Two levels of spectral mixture analysis for determining threshold detection limits of target materials in a spectral mixture are presented, the cases where the target is detected as: (1) a component of a spectral mixture (continuum threshold analysis) and (2) residuals (residual threshold analysis). The results of these two analyses are compared under various measurement conditions. The examples illustrate the general approach that can be used for evaluating the spectral detectability of terrestrial and planetary targets at the subpixel scale.
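    The first level of analysis, detecting a target as a component of a linear spectral mixture, can be sketched with ordinary least squares. The endmember spectra and noise level below are invented for illustration; the residual vector is what the second (residual-threshold) level of analysis would inspect:

```python
import numpy as np

# Hypothetical endmember spectra: 6 bands x 3 endmembers (columns)
E = np.array([[0.10, 0.40, 0.80],
              [0.12, 0.45, 0.75],
              [0.15, 0.50, 0.60],
              [0.20, 0.55, 0.40],
              [0.30, 0.60, 0.30],
              [0.35, 0.62, 0.25]])

# Synthetic mixed pixel: 10% target + 60% + 30% background, plus sensor noise
f_true = np.array([0.1, 0.6, 0.3])
rng = np.random.default_rng(1)
pixel = E @ f_true + rng.normal(0.0, 0.001, E.shape[0])

# Level 1: unmix and read off the target fraction (continuum threshold analysis)
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# Level 2: the residual spectrum, inspected in residual threshold analysis
residual = pixel - E @ f_hat

print(f_hat, np.abs(residual).max())
```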

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Gareth C.; Pessah, Martin E., E-mail: gmurphy@nbi.dk, E-mail: mpessah@nbi.dk

    The magnetorotational instability (MRI) is thought to play an important role in enabling accretion in sufficiently ionized astrophysical disks. The rate at which MRI-driven turbulence transports angular momentum is intimately related to both the strength of the amplitudes of the fluctuations on various scales and the degree of anisotropy of the underlying turbulence. This has motivated several studies to characterize the distribution of turbulent power in spectral space. In this paper we investigate the anisotropic nature of MRI-driven turbulence using a pseudo-spectral code and introduce novel ways for providing a robust characterization of the underlying turbulence. We study the growth of the MRI and the subsequent transition to turbulence via parasitic instabilities, identifying their potential signature in the late linear stage. We show that the general flow properties vary in a quasi-periodic way on timescales comparable to ∼10 inverse angular frequencies, motivating the temporal analysis of its anisotropy. We introduce a 3D tensor invariant analysis to quantify and classify the evolution of the anisotropy of the turbulent flow. This analysis shows a continuous high level of anisotropy, with brief sporadic transitions toward two- and three-component isotropic turbulent flow. This time-dependent anisotropy renders standard shell averaging, especially when used simultaneously with long temporal averages, inadequate for characterizing MRI-driven turbulence. We propose an alternative way to extract spectral information from the turbulent magnetized flow, whose anisotropic character depends strongly on time. This consists of stacking 1D Fourier spectra along three orthogonal directions that exhibit maximum anisotropy in Fourier space. The resulting averaged spectra show that the power along each of the three independent directions differs by several orders of magnitude over most scales, except the largest ones.
Our results suggest that a first-principles theory to describe fully developed MRI-driven turbulence will likely have to consider the anisotropic nature of the flow at a fundamental level.

  20. Full Wave Analysis of RF Signal Attenuation in a Lossy Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2004-12-06

    We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight passage with lossy, randomly rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.

  1. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  2. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
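    The statistical argument behind trading one long ergodic average for several shorter, independently initialized runs can be illustrated with a toy correlated time series (an AR(1) process standing in for a turbulent signal). This is a sketch of the idea only, not of the Nek5000 implementation:

```python
import numpy as np

def ar1(n, phi=0.9, seed=None):
    """Correlated toy signal standing in for a turbulent time series."""
    r = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = r.normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + np.sqrt(1 - phi**2) * r.normal()
    return x

n_total, m = 40000, 8

# Ergodic time averaging: one long run of length n_total
err_time = abs(ar1(n_total, seed=1).mean())

# Ensemble averaging: m independent runs of length n_total / m, averaged;
# the runs can execute in parallel, cutting wall time by roughly a factor of m
err_ens = abs(np.mean([ar1(n_total // m, seed=100 + k).mean()
                       for k in range(m)]))

print(err_time, err_ens)
```

    Provided the realizations are de-correlated, both estimators see the same total number of samples, so the ensemble reaches comparable statistical error in a fraction of the wall time.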

  3. Improvement of single wavelength-based Thai jasmine rice identification with elliptic Fourier descriptor and neural network analysis

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2012-11-01

    Instead of considering only the amount of fluorescent signal spatially distributed over the image of milled rice grains, this paper shows how our single-wavelength spectral-imaging-based Thai jasmine (KDML105) rice identification system can be improved by analyzing the shape and size of the image of each milled rice variety, especially during the image-threshold operation. The image of each milled rice variety is expressed as chain codes and elliptic Fourier coefficients. After that, a feed-forward back-propagation neural network model is applied, resulting in an improved average FAR of 11.0% and FRR of 19.0% in identifying KDML105 milled rice among the four unwanted milled rice varieties.
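    The shape-description step can be sketched with complex Fourier descriptors, a close relative of the elliptic Fourier coefficients used in the paper (the exact Kuhl-Giardina elliptic formulation differs in detail). The contour below is a synthetic ellipse, not real grain data:

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=8):
    """Translation- and scale-invariant shape descriptors of a closed contour,
    via the Fourier transform of the complex boundary coordinates."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    Z = np.fft.fft(z - z.mean())           # subtract mean: translation invariance
    mag = np.abs(Z[1:n_coeffs + 1])
    return mag / mag[0]                    # divide by first harmonic: scale invariance

# Toy "grain" outline: an ellipse sampled at 256 boundary points
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.column_stack([3.0 * np.cos(t), 1.0 * np.sin(t)])
d = fourier_descriptors(ellipse)
print(d)
```

    Descriptor vectors like `d` would then serve as inputs to the feed-forward back-propagation network.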

  4. Bow shock data analysis

    NASA Astrophysics Data System (ADS)

    Zipf, Edward C.; Erdman, Peeter W.

    1994-08-01

    The University of Pittsburgh Space Physics Group, in collaboration with the Army Research Office (ARO) modeling team, has completed a systematic organization of the shock and plume spectral data and of the electron temperature and density measurements obtained during the Bow Shock I and II rocket flights, which have been submitted to the AEDC Data Center. The group has verified the presence of CO Cameron band emission during the Antares engine burn and for an extended period of time in the post-burn plume. It has also adapted 3-D radiation entrapment codes, developed by the University of Pittsburgh to study aurorae and other atmospheric phenomena that involve significant spatial effects, to investigate the vacuum ultraviolet (VUV) and extreme ultraviolet (EUV) envelope surrounding the re-entry vehicle that creates an extensive plasma cloud by photoionization.

  5. SFM-FDTD analysis of triangular-lattice AAA structure: Parametric study of the TEM mode

    NASA Astrophysics Data System (ADS)

    Hamidi, M.; Chemrouk, C.; Belkhir, A.; Kebci, Z.; Ndao, A.; Lamrous, O.; Baida, F. I.

    2014-05-01

    This theoretical work reports a parametric study of enhanced transmission through an annular aperture array (AAA) structure arranged in a triangular lattice. The effects of the incidence angle, as well as of the inner and outer radii, on the evolution of the transmission spectra are examined. To this end, a 3D Finite-Difference Time-Domain code based on the Split Field Method (SFM) is used to calculate the spectral response of the structure for any angle of incidence. In order to work with an orthogonal unit cell, which has the advantage of reducing computation time and memory, special periodic boundary conditions are implemented. This study provides a new model of AAA structures useful for producing tunable ultra-compact devices.

  6. Quasiparticles and phonon satellites in spectral functions of semiconductors and insulators: Cumulants applied to the full first-principles theory and the Fröhlich polaron

    NASA Astrophysics Data System (ADS)

    Nery, Jean Paul; Allen, Philip B.; Antonius, Gabriel; Reining, Lucia; Miglio, Anna; Gonze, Xavier

    2018-03-01

    The electron-phonon interaction causes thermal and zero-point motion shifts of electron quasiparticle (QP) energies ε_k(T). Other consequences of interactions, visible in angle-resolved photoemission spectroscopy (ARPES) experiments, are broadening of QP peaks and the appearance of sidebands, contained in the electron spectral function A(k,ω) = -Im G_R(k,ω)/π, where G_R is the retarded Green's function. Electronic structure codes (e.g., using density-functional theory) are now available that compute the shifts and start to address broadening and sidebands. Here we consider MgO and LiF, and determine their nonadiabatic Migdal self-energy. The spectral function obtained from the Dyson equation makes errors in the weight and energy of the QP peak and the position and weight of the phonon-induced sidebands. Only one phonon satellite appears, with an unphysically large energy difference (larger than the highest phonon energy) with respect to the QP peak. By contrast, the spectral function from a cumulant treatment of the same self-energy is physically better, giving a quite accurate QP energy and several satellites approximately spaced by the LO phonon energy. In particular, the positions of the QP peak and first satellite agree closely with those found for the Fröhlich Hamiltonian by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317] using diagrammatic Monte Carlo. We provide a detailed comparison between the first-principles MgO and LiF results and those of the Fröhlich Hamiltonian. Such an analysis applies widely to materials with infrared (IR)-active phonons.

  7. Engineering 'Golden' Fluorescence by Selective Pressure Incorporation of Non-canonical Amino Acids and Protein Analysis by Mass Spectrometry and Fluorescence.

    PubMed

    Baumann, Tobias; Schmitt, Franz-Josef; Pelzer, Almut; Spiering, Vivian Jeanette; Freiherr von Sass, Georg Johannes; Friedrich, Thomas; Budisa, Nediljko

    2018-04-27

    Fluorescent proteins are fundamental tools for the life sciences, in particular for fluorescence microscopy of living cells. While wild-type and engineered variants of the green fluorescent protein from Aequorea victoria (avGFP) as well as homologs from other species already cover large parts of the optical spectrum, a spectral gap remains in the near-infrared region, for which avGFP-based fluorophores are not available. Red-shifted fluorescent protein (FP) variants would substantially expand the toolkit for spectral unmixing of multiple molecular species, but the naturally occurring red-shifted FPs derived from corals or sea anemones have lower fluorescence quantum yield and inferior photo-stability compared to the avGFP variants. Further manipulation and possible expansion of the chromophore's conjugated system towards the far-red spectral region is also limited by the repertoire of 20 canonical amino acids prescribed by the genetic code. To overcome these limitations, synthetic biology can achieve further spectral red-shifting via insertion of non-canonical amino acids into the chromophore triad. We describe the application of selective pressure incorporation (SPI) to engineer avGFP variants with novel spectral properties. Protein expression is performed in a tryptophan-auxotrophic E. coli strain and by supplementing growth media with suitable indole precursors. Inside the cells, these precursors are converted to the corresponding tryptophan analogs and incorporated into proteins by the ribosomal machinery in response to UGG codons. The replacement of Trp-66 in the enhanced "cyan" variant of avGFP (ECFP) by an electron-donating 4-aminotryptophan results in GdFP featuring a 108 nm Stokes shift and a strongly red-shifted emission maximum (574 nm), while being thermodynamically more stable than its predecessor ECFP. Residue-specific incorporation of the non-canonical amino acid is analyzed by mass spectrometry.
The spectroscopic properties of GdFP are characterized by time-resolved fluorescence spectroscopy as one of the valuable applications of genetically encoded FPs in life sciences.

  8. Atmospheric Retrievals from Exoplanet Observations and Simulations with BART

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    This project will determine the observing plans needed to retrieve exoplanet atmospheric composition and thermal profiles over a broad range of planets, stars, instruments, and observing modes. Characterizing exoplanets is hard. The dim planets orbit bright stars, giving orders of magnitude more relative noise than for solar-system planets. Advanced statistical techniques are needed to determine what the data can - and more importantly cannot - say. We therefore developed Bayesian Atmospheric Radiative Transfer (BART). BART explores the parameter space of atmospheric chemical abundances and thermal profiles using Differential-Evolution Markov-Chain Monte Carlo. It generates thousands of candidate spectra, integrates over observational bandpasses, and compares to data, generating a statistical model for an atmosphere's composition and thermal structure. At best, it gives abundances and thermal profiles with uncertainties. At worst, it shows what kinds of planets the data allow. It also gives parameter correlations. BART is open-source, designed for community use and extension (http://github.com/exosports/BART). Three arXived PhD theses (papers in publication) provide technical documentation, tests, and application to Spitzer and HST data. There are detailed user and programmer manuals and community support forums. Exoplanet analysis techniques must be tested against synthetic data, where the answer is known, and vetted by statisticians. Unfortunately, this has rarely been done, and never sufficiently. Several recent papers question the entire body of Spitzer exoplanet observations, because different analyses of the same data give different results. The latest method, pixel-level decorrelation, produces results that diverge from an emerging consensus. We do not know the retrieval problem's strengths and weaknesses relative to low SNR, red noise, low resolution, instrument systematics, or incomplete spectral line lists. 
In observing eclipses and transits, we assume the planet has uniform composition and the same temperature profile everywhere. We do not know this assumption's impact. While Spitzer and HST have few exoplanet observing modes, JWST will have over 20. Given the signal challenges and the complexity of retrieval, modeling the observations and data analysis is the best way to optimize an observing plan. Our project solves all of these problems. Using only open-source codes, with tools available to the community for their immediate application in JWST and HST proposals and analyses, we will produce a faithful simulator of 2D spectral and photometric frames from each JWST exoplanet mode (WFC3 spatial scan mode works already), including jitter and intrapixel effects. We will extract and calibrate data, analyzing them with BART. Given planetary input spectra for terrestrial, super-Earth, Neptune, and Jupiter-class planets, and a variety of stellar spectra, we will determine the best combination of observations to recover each atmosphere, and the limits where low SNR or spectral coverage produce deceptive results. To facilitate these analyses, we will adapt an existing cloud model to BART, add condensate code now being written to its thermochemical model, include scattering, add a 3D atmosphere module (for dayside occultation mapping and the 1D vs. 3D question), and improve performance and documentation, among other improvements. We will host a web site and community discussions online and at conferences about retrieval issues. We will develop validation tests for radiative-transfer and BART-style retrieval codes, and provide examples to validate others' codes. We will engage the retrieval community in data challenges. We will provide web-enabled tools to specify planets easily for modeling.
We will make all of these tools, tests, and comparisons available online so everyone can use them to maximize NASA's investment in high-end observing capabilities to characterize exoplanets.

  9. Information theoretical assessment of image gathering and coding for digital restoration

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.

    1990-01-01

    The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.

  10. Coding for stable transmission of W-band radio-over-fiber system using direct-beating of two independent lasers.

    PubMed

    Yang, L G; Sung, J Y; Chow, C W; Yeh, C H; Cheng, K T; Shi, J W; Pan, C L

    2014-10-20

    We experimentally demonstrate a Manchester (MC) coding based W-band (75-110 GHz) radio-over-fiber (ROF) system that uses spectral shaping to reduce the low-frequency-component (LFC) signal distortion generated by two independent low-cost lasers. Hence, a low-cost, higher-performance W-band ROF system is achieved. In this system, the millimeter-wave carrier is generated by direct beating of two independent low-cost CW lasers without a frequency tracking circuit (FTC). Approaches such as delayed self-heterodyne interferometry and heterodyne beating are used to characterize the optical-beating-interference sub-terahertz signal (OBIS). Furthermore, W-band ROF systems using MC coding and NRZ-OOK are compared and discussed.
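    The spectral-shaping effect of Manchester coding can be illustrated in a few lines. The chip mapping below follows the IEEE 802.3 convention (the record does not specify which convention the authors use); each bit becomes a balanced pair of chips, so the encoded stream carries no DC content:

```python
import numpy as np

def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-low chips, 1 -> low-high chips."""
    chips = {0: (1, -1), 1: (-1, 1)}
    return np.array([c for b in bits for c in chips[int(b)]])

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 1024)
nrz = 2.0 * bits - 1.0            # NRZ-OOK mapped to +/-1 levels
mc = manchester_encode(bits)

# Manchester coding suppresses the DC / low-frequency part of the spectrum
print(nrz.mean(), mc.mean())      # mc.mean() is exactly 0.0 by construction
```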

  11. Fiber-Bragg-Grating-Based Optical Code-Division Multiple Access Passive Optical Network Using Dual-Baseband Modulation Scheme

    NASA Astrophysics Data System (ADS)

    Lin, Wen-Piao; Wu, He-Long

    2005-08-01

    We propose a fiber-Bragg-grating (FBG)-based optical code-division multiple access passive optical network (OCDMA-PON) using a dual-baseband modulation scheme. A mathematical model is developed to study the performance of this scheme. According to the analyzed results, this scheme can allow a tolerance of the spectral power distortion (SPD) ratio of 25% with a bit error rate (BER) of 10-9 when the modified pseudorandom noise (PN) code length is 16. Moreover, we set up a simulated system to evaluate the baseband and radio frequency (RF) band transmission characteristics. The simulation results demonstrate that our proposed OCDMA-PON can provide a cost-effective and scalable fiber-to-the-home solution.
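    A length-16 "modified PN code" is typically built from a length-15 maximal-length sequence plus a padding chip. The abstract gives no construction details, so the 4-stage Fibonacci LFSR below (taps for the primitive polynomial x^4 + x^3 + 1) is a generic sketch rather than the authors' code:

```python
def lfsr_m_sequence(taps=(4, 3), nbits=4):
    """Maximal-length PN sequence from a Fibonacci LFSR (period 2^nbits - 1)."""
    state = [1] * nbits                    # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])              # output the oldest stage
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]          # shift the feedback bit in
    return seq

pn = lfsr_m_sequence()
print(pn, sum(pn))  # an m-sequence of length 15 contains eight 1s
```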

  12. S-NPP CrIS Full Resolution Sensor Data Record Processing and Evaluations

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Han, Y.; Wang, L.; Tremblay, D. A.; Jin, X.; Weng, F.

    2014-12-01

    The Cross-track Infrared Sounder (CrIS) on the Suomi National Polar-orbiting Partnership satellite (S-NPP) is a Fourier transform spectrometer. It provides a total of 1305 channels in the normal mode for sounding the atmosphere. CrIS can also be operated in the full spectral resolution (FSR) mode, in which the MWIR and SWIR band interferograms are recorded with the same maximum path difference as the LWIR band, giving a spectral resolution of 0.625 cm-1 for all three bands (2211 channels in total). NOAA plans to operate CrIS in FSR mode beginning in December 2014, as well as on the Joint Polar Satellite System (JPSS). To date, the FSR mode has been commanded three times in orbit (02/23/2012, 03/12/2013, and 08/27/2013). Based on the CrIS Algorithm Development Library (ADL), the CrIS Full Resolution Processing System (CRPS) has been developed to generate the FSR Sensor Data Record (SDR). The code can also be run for normal-mode and truncation-mode SDRs after recompiling. Different calibration approaches are implemented in the code in order to study the ringing effect observed in the CrIS normal-mode SDR and to support selection of the best calibration algorithm for JPSS-1 (J1). We have developed the CrIS FSR SDR Validation System to quantify the CrIS radiometric and spectral accuracy, since these are crucial for improving data assimilation in numerical weather prediction and for retrieving atmospheric trace gases. In this study, CrIS full resolution SDRs are generated with CRPS using data collected in the FSR mode of S-NPP, and the radiometric and spectral accuracy is assessed using the Community Radiative Transfer Model (CRTM) and European Centre for Medium-Range Weather Forecasts (ECMWF) forecast fields. The biases between observations and simulations are evaluated to estimate the FOV-to-FOV variability and bias under clear sky over ocean. The double-difference method and the Simultaneous Nadir Overpass (SNO) method are also used to assess the consistency of CrIS radiances with the well-validated IASI.
Two basic frequency validation methods (absolute and relative spectral validations) are used to assess the CrIS spectral accuracy. Results show that CrIS SDRs from FSR have similar radiometric and spectral accuracy as those from normal mode.

  13. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  14. Inflight calibration of the modular airborne imaging spectrometer (MAIS) and its application to reflectance retrieval

    NASA Astrophysics Data System (ADS)

    Min, Xiangjun; Zhu, Yonghao

    1998-08-01

    Inflight experiments with the Modular Airborne Imaging Spectrometer (MAIS), together with ground-based measurements made with a GER MARK-V spectroradiometer simultaneously with the MAIS overpass, were performed during autumn 1995 in a semiarid area of Inner Mongolia, China. Based on these measurements and MAIS image data, we designed a method for the radiometric calibration of the MAIS sensor using the 6S and LOWTRAN 7 codes. The results show that the uncertainty of the MAIS calibration is about 8% in the visible and near-infrared wavelengths (0.4-1.2 micrometers). To verify our calibration algorithm, the calibrated MAIS results were used to derive ground reflectances. The accuracy of reflectance retrieval is about 8.5% in the spectral range of 0.4 to 1.2 micrometers, i.e., the uncertainty of the derived near-nadir reflectances is within 0.01-0.05 in reflectance units for ground reflectances between 3% and 50%. The distinguishing feature of the ground-based measurements, to which special attention is paid in this paper, is that the reflectance factors of the calibration target, the atmospheric optical depth, and the water vapor abundance are all obtained simultaneously from a single set of measurement data with a single suite of instruments. The analysis indicates that the method presented here is suitable for the quantitative analysis of imaging spectral data in China.

  15. High-spatial resolution and high-spectral resolution detector for use in the measurement of solar flare hard X-rays

    NASA Technical Reports Server (NTRS)

    Desai, U. D.; Orwig, Larry E.

    1988-01-01

    In the area of high spatial resolution, the evaluation of a hard X-ray detector with 65-micron spatial resolution for operation in the energy range from 30 to 400 keV is proposed. The basic detector is a thick large-area scintillator faceplate, composed of a matrix of high-density scintillating glass fibers, attached to a proximity-type image intensifier tube with a resistive-anode digital readout system. Such a detector, combined with a coded-aperture mask, would be ideal for use as a modest-sized hard X-ray imaging instrument up to X-ray energies as high as several hundred keV. As an integral part of this study, it is also proposed that several X-ray image-coding techniques that could be used with this detector be critically evaluated. In the area of high spectral resolution, it is proposed to evaluate two different types of detectors for use as X-ray spectrometers for solar flares: planar silicon detectors and high-purity germanium (HPGe) detectors. Instruments utilizing these high-spatial-resolution detectors for hard X-ray imaging measurements from 30 to 400 keV and high-spectral-resolution detectors for measurements over a similar energy range would be ideally suited for making crucial solar flare observations during the upcoming maximum in the solar cycle.

  16. Evaluation of potential emission spectra for the reliable classification of fluorescently coded materials

    NASA Astrophysics Data System (ADS)

    Brunner, Siegfried; Kargel, Christian

    2011-06-01

    The conservation and efficient use of natural and especially strategic resources like oil and water have become global issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry, automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required. One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints") incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number of fluorescent markers that can be reliably distinguished within the limited wavelength band available. In this article we shall investigate the reliable detection and classification of fluorescent markers with specific fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall measurement system.

  17. Emission spectra of photoionized plasmas induced by intense EUV pulses: Experimental and theoretical investigations

    NASA Astrophysics Data System (ADS)

    Saber, Ismail; Bartnik, Andrzej; Skrzeczanowski, Wojciech; Wachulak, Przemysław; Jarocki, Roman; Fiedorowicz, Henryk

    2017-03-01

    Experimental measurements and numerical modeling of the emission spectra of photoionized plasmas of noble gases in the ultraviolet and visible (UV/Vis) range have been investigated. The photoionized plasmas were created using a laser-produced plasma (LPP) extreme ultraviolet (EUV) source. The source was based on a gas puff target irradiated with a 10 ns/10 J/10 Hz Nd:YAG laser. The EUV radiation pulses were collected and focused using a grazing-incidence multifoil EUV collector. The laser pulses were focused on a gas stream injected into a vacuum chamber synchronously with the EUV pulses. Irradiation of the gases resulted in the formation of low-temperature photoionized plasmas emitting radiation in the UV/Vis spectral range. The photoionized plasmas produced this way consisted of atoms and ions in various ionization states. The dominant observed spectral lines originated from radiative transitions in singly charged ions. To assist in the theoretical interpretation of the measured spectra, an atomic code based on Cowan's programs and the collisional-radiative PrismSPECT code were used to calculate theoretical spectra. A comparison of the calculated spectral lines with the experimentally obtained results is presented. The electron temperature in the plasma is estimated using the Boltzmann plot method, under the assumption that local thermodynamic equilibrium (LTE) holds in the plasma for the first few ionization states. A brief discussion of the measured and computed spectra is given.

  18. Effects of spectral and temporal disruption on cortical encoding of gerbil vocalizations

    PubMed Central

    Ter-Mikaelian, Maria; Semple, Malcolm N.

    2013-01-01

    Animal communication sounds contain spectrotemporal fluctuations that provide powerful cues for detection and discrimination. Human perception of speech is influenced both by spectral and temporal acoustic features but is most critically dependent on envelope information. To investigate the neural coding principles underlying the perception of communication sounds, we explored the effect of disrupting the spectral or temporal content of five different gerbil call types on neural responses in the awake gerbil's primary auditory cortex (AI). The vocalizations were impoverished spectrally by reduction to 4 or 16 channels of band-passed noise. For this acoustic manipulation, the average firing rate of a neuron did not carry sufficient information to distinguish between call types. In contrast, the discharge patterns of individual AI neurons reliably categorized vocalizations composed of only four spectral bands with the appropriate natural token. The pooled responses of small populations of AI cells classified spectrally disrupted and natural calls with an accuracy that paralleled human performance on an analogous speech task. To assess whether discharge pattern was robust to temporal perturbations of an individual call, vocalizations were disrupted by time-reversing segments of variable duration. For this acoustic manipulation, cortical neurons were relatively insensitive to short reversal lengths. Consistent with human perception of speech, these results indicate that the stable representation of communication sounds in AI is more dependent on sensitivity to slow temporal envelopes than on spectral detail. PMID:23761696
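
    The spectral impoverishment described above (reduction to a few channels of band-passed noise) is essentially a noise vocoder. Below is a rough FFT-based sketch; the band edges, brick-wall filters, and Hilbert-style envelope extraction are illustrative choices, not the study's exact stimulus-generation procedure.

```python
import numpy as np

def analytic(x):
    # Analytic signal via the FFT (a SciPy-free Hilbert transform); the
    # magnitude of the result is the band's temporal envelope. Assumes
    # len(x) is even.
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    # Ideal (brick-wall) band-pass filter implemented by FFT masking.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(X, len(x))

def noise_vocode(x, fs, n_channels=4, seed=0):
    # Replace each band's fine structure with band-limited noise that
    # carries the band's envelope, then sum the channels.
    rng = np.random.default_rng(seed)
    edges = np.linspace(100.0, fs / 2, n_channels + 1)  # illustrative edges
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(analytic(bandpass(x, fs, lo, hi)))
        carrier = bandpass(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier
    return out
```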

  19. Predictions of Supersonic Jet Mixing and Shock-Associated Noise Compared With Measured Far-Field Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2010-01-01

    Codes for predicting supersonic jet mixing and broadband shock-associated noise were assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. Two types of codes were used to make predictions. Fast running codes containing empirical models were used to compute both the mixing noise component and the shock-associated noise component of the jet noise spectrum. One Reynolds-averaged, Navier-Stokes-based code was used to compute only the shock-associated noise. To enable the comparisons of the predicted component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise components. Comparisons were made for 1/3-octave spectra and some power spectral densities using data from jets operating at 24 conditions covering essentially 6 fully expanded Mach numbers with 4 total temperature ratios.

  20. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.


  2. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    PubMed

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright, are available in spectrally distinct colors, and enable the imaging of color-coded cancer cells growing in vivo, and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to highly metastatic ones, can be imaged in vivo with color coding. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  3. Golay sequences coded coherent optical OFDM for long-haul transmission

    NASA Astrophysics Data System (ADS)

    Qin, Cui; Ma, Xiangrong; Hua, Tao; Zhao, Jing; Yu, Huilong; Zhang, Jian

    2017-09-01

    We propose to use binary Golay sequences in coherent optical orthogonal frequency division multiplexing (CO-OFDM) to improve the long-haul transmission performance. The Golay sequences are generated by binary Reed-Muller codes, which have low peak-to-average power ratio (PAPR) and a certain error-correction capability. A low-complexity decoding algorithm for the Golay sequences is then proposed to recover the signal. At the same spectral efficiency, QPSK-modulated OFDM with binary Golay sequence coding, with and without discrete Fourier transform (DFT) spreading (DFTS-QPSK-GOFDM and QPSK-GOFDM), is compared with normal BPSK-modulated OFDM with and without DFT spreading (DFTS-BPSK-OFDM and BPSK-OFDM) after long-haul transmission. At a 7% forward error correction code threshold (Q² factor of 8.5 dB), it is shown that DFTS-QPSK-GOFDM outperforms DFTS-BPSK-OFDM by extending the transmission distance by 29% and 18% in non-dispersion-managed and dispersion-managed links, respectively.
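
    The low-PAPR property the abstract refers to can be demonstrated with the standard doubling construction of a binary Golay complementary pair (the Reed-Muller view of Golay sequences). This sketch is our illustration, not the authors' code:

```python
import numpy as np

def golay_pair(m):
    # Doubling construction: start from the trivial length-1 pair and
    # concatenate m times; the result is a complementary pair of length 2^m.
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x):
    # Aperiodic autocorrelation at non-negative lags.
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) for k in range(n)])

a, b = golay_pair(4)                      # a binary pair of length 16
s = aperiodic_acf(a) + aperiodic_acf(b)   # sidelobes cancel: s = [32, 0, ..., 0]
```

    Because the pair's autocorrelation sidelobes cancel exactly, each member's OFDM peak power is at most twice its average power, capping the PAPR at about 3 dB.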

  4. Spectral and Photometric Data of Be Star, EM Cep

    NASA Astrophysics Data System (ADS)

    Kochiashvili, Nino; Natsvilishvili, Rezo; Kochiashvili, Ia; Vardosanidze, Manana; Beradze, Sopia; Pannicke, Anna

    The subject of investigation in this project is EM Cep, a variable giant star of Be spectral type. It has been established that the star has a double nature: 1. emission lines are sometimes seen in its spectrum, and 2. at other times only absorption lines are observable and emission lines are not seen. This means that the star is not always in the Be state; the Be state persists for a few months at a time. EM Cep also shows flare activity. The causes of the photometric and spectral variability remain to be established, and different mechanisms provoking the Be phenomenon are possible. The character of the light-curve variability suggests that the star could be a short-period Cepheid of the λ Eri type. However, we do not have sufficient data to exclude its binarity. On the basis of the observations carried out at Abastumani Observatory, a light curve with two minima and two maxima was revealed, but these data also accord with the half-period - a light curve with one minimum and one maximum can equally be considered. Both cases agree well with the character of the variability. For the binary case, a set of orbital elements has already been obtained at Abastumani Observatory using the Wilson-Devinney code. The elements correspond to a model of an acceptable, realistic close binary star. Nevertheless, the true nature of the star has not yet been established. To solve this problem, we need high-resolution spectral data, so that radial velocity curves can answer the question of the star's binarity; in the binary case it may also be possible to reveal spectral lines of the second component. Since 2014, we have renewed UBVRI photometric observations of EM Cep at Abastumani using a 48-cm telescope with a CCD device. Spectral observations are made at Shamakhy Observatory, Azerbaijan.
Our German colleagues have been observing the star since March 2017 at the Observatory of Jena University. We plan to carry out a joint analysis of the observations from the three observatories to explain the observational peculiarities of the star.

  5. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path alone, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that need to include careful parameterization of main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype which is developed based on a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
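
    At its core, spectral reconstruction for such a coded camera amounts to inverting a linear system relating the unknown per-pixel spectrum to the coded sensor measurements. The toy sketch below (all sizes and matrices are invented for illustration, not the authors' end-to-end model) recovers a spectrum by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_meas = 6, 9                 # unknown spectral bands, coded measurements
A = rng.random((n_meas, n_bands))      # assumed-known filter/optics response matrix
s_true = rng.random(n_bands)           # ground-truth spectrum at one pixel
y = A @ s_true                         # noiseless coded sensor readings

# Least-squares spectral reconstruction: with more measurements than bands
# and a full-rank A, the noiseless case recovers the spectrum exactly.
s_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((s_hat - s_true) ** 2))
```

    In practice the measurements are noisy and the quality of the reconstruction depends on the conditioning of `A`, which is what the paper's performance metrics evaluate.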

  6. Atmospheric Sounder Spectrometer for Infrared Spectral Technology (ASSIST) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, Connor J.

    The Atmospheric Sounder Spectrometer for Infrared Spectral Technology (ASSIST) measures the absolute infrared (IR) spectral radiance (watts per square meter per steradian per wavenumber) of the sky directly above the instrument. More information about the instrument can be found through the manufacturer's website. The spectral measurement range of the instrument is 3300 to 520 wavenumbers (cm-1), or 3-19.2 microns, for the normal-range instruments and 3300 to 400 cm-1, or 3-25 microns, for the extended-range polar instruments. The spectral resolution is 1.0 cm-1 and the instrument field-of-view is 1.3 degrees. Calibrated sky radiance spectra are produced on a cycle of about 141 seconds, with a group of 6 zenith radiance spectra having dwell times of about 14 seconds each, interspersed with 55 seconds of calibration and mirror motion. The ASSIST data is comparable to the Atmospheric Emitted Radiance Interferometer (AERI) data and can be used for 1) evaluating line-by-line radiative transport codes, 2) detecting/quantifying cloud effects on ground-based measurements of infrared spectral radiance (and hence is valuable for cloud property retrievals), and 3) calculating vertical atmospheric profiles of temperature and water vapor and detecting trace gases.
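
    The paired spectral ranges quoted above are unit conversions of each other; a one-line helper makes the wavenumber-to-wavelength relation explicit:

```python
def wavenumber_to_microns(nu):
    # wavelength [um] = 10^4 / wavenumber [cm^-1]
    return 1.0e4 / nu

short_edge = wavenumber_to_microns(3300)  # ~3.0 um
long_edge = wavenumber_to_microns(520)    # ~19.2 um
```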

  7. Tuning the spectral emittance of α-SiC open-cell foams up to 1300 K with their macro porosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rousseau, B., E-mail: benoit.rousseau@univ-nantes.fr; Guevelou, S.; Mekeze-Monthe, A.

    2016-06-15

    A simple and robust analytical model is used to finely predict the spectral emittance in air up to 1300 K of α-SiC open-cell foams constituted of optically thick struts. The model integrates both the chemical composition and the macro-porosity and is valid only if the foams have volumes larger than the Representative Elementary Volume required for determining their emittance. Infrared emission spectroscopy carried out on a doped silicon carbide single crystal, associated with homemade numerical tools based on 3D meshed images (a Monte Carlo ray-tracing code and a foam generator), makes it possible to understand the exact role of the cell network in the emittance. Finally, one can tune the spectral emittance of α-SiC foams up to 1300 K by simply changing their porosity.

  8. SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit

    NASA Astrophysics Data System (ADS)

    Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory

    2018-05-01

    SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.
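
    The color-matching step described above can be illustrated with a toy two-band example (all fluxes and the target color are invented numbers, not SNSEDextend inputs; the actual code works phase-by-phase across full SEDs): scale the extrapolated NIR tail so the synthetic color equals the measured one.

```python
import math

f_opt = 1.0            # synthetic flux through an optical band
f_nir_template = 0.2   # template's initial extrapolated NIR-band flux
color_target = 1.5     # measured optical-minus-NIR color [mag]

# color = m_opt - m_nir = 2.5 * log10(f_nir / f_opt), so the NIR flux that
# reproduces the measured color is:
f_nir_needed = f_opt * 10 ** (color_target / 2.5)
scale = f_nir_needed / f_nir_template   # factor to apply to the NIR tail

# Recomputing the synthetic color after scaling recovers the target.
color_check = 2.5 * math.log10(f_nir_template * scale / f_opt)
```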

  9. Spectral simulations of an axisymmetric force-free pulsar magnetosphere

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhang, Li; Sun, Sineng

    2016-02-01

    A pseudo-spectral method with an absorbing outer boundary is used to solve a set of time-dependent force-free equations. In this method, both electric and magnetic fields are expanded in terms of the vector spherical harmonic (VSH) functions in spherical geometry and the divergence-free state of the magnetic field is enforced analytically by a projection method. Our simulations show that the Deutsch vacuum solution and the Michel monopole solution can be reproduced well by our pseudo-spectral code. Further, the method is used to present a time-dependent simulation of the force-free pulsar magnetosphere for an aligned rotator. The simulations show that the current sheet in the equatorial plane can be resolved well and the spin-down luminosity obtained in the steady state is in good agreement with the value given by Spitkovsky.

  10. A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.

    2010-12-01

    A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.

  11. MODTRAN3: Suitability as a flux-divergence code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, G.P.; Chetwynd, J.H.; Wang, J.

    1995-04-01

    The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm-1 resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm-1 resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate a potential improvement of more than a factor of 100 for a typical 500 cm-1 spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer-path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion-algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.
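
    The layer-quantity idea mentioned above - deriving layer-specific values from ratios of longer-path calculations to adjacent boundaries - can be sketched with a Beer-Lambert toy model (the optical depths are illustrative numbers, not MODTRAN output):

```python
import numpy as np

# Illustrative optical depths for four atmospheric layers.
tau = np.array([0.05, 0.12, 0.30, 0.08])

# "Full path" transmittances from point A down to each successive layer
# boundary: the quantities a band-model run actually returns.
T_path = np.exp(-np.cumsum(tau))

# Layer-specific transmittances recovered as ratios of adjacent path values.
T_layer = T_path / np.concatenate([[1.0], T_path[:-1]])
```

    For a homogeneous-layer Beer-Lambert path the ratio recovers exp(-tau) of each layer exactly, which is the sense in which MODTRAN "replicates layer-specific quantities" from full-path runs.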

  12. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values.
This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  13. Possibility of successive SRXFA use along with chemical-spectral methods for palladium analysis in geological samples

    NASA Astrophysics Data System (ADS)

    Kislov, E. V.; Kulikov, A. A.; Kulikova, A. B.

    1989-10-01

    Samples of mafic-ultramafic rocks and Ni-Cu ores of the Ioko-Dovyren and Chaya massifs were analysed by SRXFA and by a chemical-spectral method. SRXFA fully meets the requirements of quantitative noble-metal analysis of ore-free rocks, and the combination of SRXFA with chemical-spectral analysis has good prospects: after a large number of samples has been analysed by SRXFA, the samples showing the minimal and maximal results should be selected for the chemical-spectral method.

  14. Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacon, L.; Cappello, S.

    2008-11-01

    Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite volume code PIXIE3D [1,2], which is characterized by many attractive features, notably the generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility to include in the computation the energy transport equation and the extended MHD version of Ohm's law. In addition, the parallel version of the code features excellent scalability properties. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially, and spectral formulation in the other coordinates [3]. Both single and multi-mode simulations are benchmarked, regarding both reversed field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) & references therein.

  15. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

    The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correction code) to detect and correct errors that occur during data transmission; one option is a convolution code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique with QAM modulation and a code rate of 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel. To achieve a BER of 10^-3, an SNR of 10 dB is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to 4×4 MIMO-OFDM improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a saving of 7 dB relative to 2×2 MIMO-OFDM, and a significant power saving relative to the SISO-OFDM system.
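
    As a minimal companion to BER-versus-Eb/No curves like those in the paper, the sketch below Monte Carlo-simulates coherent BPSK on a single flat Rayleigh-fading subcarrier and compares it against the closed-form average BER 0.5*(1 - sqrt(g/(1+g))). It is a toy stand-in for one uncoded subcarrier, not the paper's 256-subcarrier STBC system.

```python
import numpy as np

def rayleigh_bpsk_ber(ebno_db, n_bits=200_000, seed=0):
    # Coherent BPSK on one flat Rayleigh-fading subcarrier with perfect
    # channel knowledge; E[|h|^2] = 1 and Eb/N0 set by ebno_db.
    rng = np.random.default_rng(seed)
    g = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    s = 1.0 - 2.0 * bits                              # BPSK mapping 0 -> +1, 1 -> -1
    h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2.0)
    n = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2.0 * g)
    r = h * s + n
    decided = (np.real(np.conj(h) * r) < 0).astype(int)  # matched-filter decision
    return np.mean(decided != bits)

def rayleigh_bpsk_ber_theory(ebno_db):
    # Closed-form average BER for coherent BPSK in Rayleigh fading.
    g = 10.0 ** (ebno_db / 10.0)
    return 0.5 * (1.0 - np.sqrt(g / (1.0 + g)))
```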

  16. Spectral Properties and Dynamics of Gold Nanorods Revealed by EMCCD Based Spectral-Phasor Method

    PubMed Central

    Chen, Hongtao; Digman, Michelle A.

    2015-01-01

    Gold nanorods (NRs) with tunable plasmon-resonant absorption in the near-infrared region have considerable advantages over organic fluorophores as imaging agents. However, the luminescence spectral properties of NRs have not been fully explored at the single-particle level in bulk due to a lack of proper analytic tools. Here we present a global spectral phasor analysis method which allows investigation of NR spectra at the single-particle level, together with their statistical behavior and spatial information during imaging. The wide phasor distribution obtained by the spectral phasor analysis indicates that the spectra of NRs differ from particle to particle. NRs with different spectra can be identified graphically in the corresponding spatial images with high spectral resolution. Furthermore, the spectral behavior of NRs under different imaging conditions, e.g. different excitation powers and wavelengths, was carefully examined with our laser-scanning multiphoton microscope with spectral imaging capability. Our results prove that the spectral phasor method is an easy and efficient tool in hyperspectral imaging analysis for unraveling subtle changes of the emission spectrum. Moreover, we applied this method to study the spectral dynamics of NRs during direct optical trapping and optothermal trapping. Interestingly, spectral shifts were observed in both trapping phenomena. PMID:25684346
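
    The spectral phasor transform itself is compact: each spectrum maps to the normalized cosine/sine coordinates of its first Fourier harmonic. A minimal version (our sketch, not the authors' implementation) is:

```python
import numpy as np

def spectral_phasor(intensity):
    # Map a spectrum sampled on n wavelength channels to its first-harmonic
    # phasor coordinates (G, S); similar spectra cluster together and
    # spectrally shifted spectra rotate around the origin.
    I = np.asarray(intensity, dtype=float)
    n = len(I)
    w = 2.0 * np.pi * np.arange(n) / n
    total = I.sum()
    return np.sum(I * np.cos(w)) / total, np.sum(I * np.sin(w)) / total
```

    A narrow emission line at channel k lands on the unit circle at angle 2*pi*k/n, which is why distinct single-particle spectra separate graphically in the phasor plot.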

  17. icoshift: A versatile tool for the rapid alignment of 1D NMR spectra

    NASA Astrophysics Data System (ADS)

    Savorani, F.; Tomasi, G.; Engelsen, S. B.

    2010-02-01

    The increasing scientific and industrial interest in metabonomics benefits from the high qualitative and quantitative information content of nuclear magnetic resonance (NMR) spectroscopy. However, several chemical and physical factors can affect the absolute and the relative position of an NMR signal and it is not always possible or desirable to eliminate these effects a priori. To remove misalignment of NMR signals a posteriori, several algorithms have been proposed in the literature. The icoshift program presented here is an open source and highly efficient program designed for solving signal alignment problems in metabonomic NMR data analysis. The icoshift algorithm is based on correlation shifting of spectral intervals and employs an FFT engine that aligns all spectra simultaneously. The algorithm is demonstrated to be faster than similar methods found in the literature, making full-resolution alignment of large datasets feasible and thus avoiding down-sampling steps such as binning. The algorithm uses missing values as a filling alternative in order to avoid spectral artifacts at the segment boundaries. The algorithm is made open source and the Matlab code including documentation can be downloaded from www.models.life.ku.dk.
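
    The core operation - correlation shifting of a spectral interval with an FFT engine - can be sketched for a single interval and a single spectrum as follows (a simplified stand-in for the icoshift program, which additionally handles user-defined intervals, missing-value filling and simultaneous alignment of many spectra):

```python
import numpy as np

def align_by_xcorr(target, spectrum):
    # Find the circular shift that maximizes the cross-correlation with the
    # target (computed via the FFT), then apply it to the spectrum.
    xc = np.real(np.fft.ifft(np.fft.fft(target) * np.conj(np.fft.fft(spectrum))))
    shift = int(np.argmax(xc))
    return np.roll(spectrum, shift), shift
```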

  18. Human phoneme recognition depending on speech-intrinsic variability.

    PubMed

    Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger

    2010-11-01

    The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).

  19. Integrated Idl Tool For 3d Modeling And Imaging Data Analysis

    NASA Astrophysics Data System (ADS)

    Nita, Gelu M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A. A.; Kontar, E. P.

    2012-05-01

    Addressing many key problems in solar physics requires detailed analysis of non-simultaneous imaging data obtained in various wavelength domains with different spatial resolutions, and their comparison both with each other and with advanced 3D physical models. To facilitate this goal, we have undertaken major enhancements and improvements of IDL-based simulation tools developed earlier for modeling microwave and X-ray emission. The greatly enhanced object-based architecture provides an interactive graphical user interface that allows the user i) to import photospheric magnetic field maps and perform magnetic field extrapolations to generate 3D magnetic field models almost instantly; ii) to investigate the magnetic topology of these models by interactively creating magnetic field lines and associated magnetic field tubes; iii) to populate them with user-defined nonuniform thermal plasma and anisotropic, nonuniform, nonthermal electron distributions; and iv) to calculate the spatial and spectral properties of radio and X-ray emission. The application integrates DLLs and shared libraries containing fast gyrosynchrotron emission codes developed in FORTRAN and C++, soft and hard X-ray codes developed in IDL, and a potential-field extrapolation DLL based on original FORTRAN code developed by V. Abramenko and V. Yurchishin. The interactive interface allows users to add any user-defined IDL or external callable radiation code, as well as user-defined magnetic field extrapolation routines. To illustrate the tool's capabilities, we present a step-by-step live computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data produced by the NORH and RHESSI instruments.
This work was supported in part by NSF grants AGS-0961867, AST-0908344, AGS-0969761, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, the Leverhulme Trust, UK, and by the European Commission through the Radiosun and HESPE Networks.

  20. Unsupervised Learning for Monaural Source Separation Using Maximization–Minimization Algorithm with Time–Frequency Deconvolution †

    PubMed Central

    Bouridane, Ahmed; Ling, Bingo Wing-Kuen

    2018-01-01

    This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a family of cost functions parametrized by a single parameter β; the Itakura–Saito divergence, the Kullback–Leibler divergence, and the least-squares distance are the special cases β = 0, 1, and 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β, including fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into a two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes by maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient with the proposed algorithm, which subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
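    For the plain (non-convolutive) case, the MM-derived multiplicative updates under the β-divergence take a compact form. The numpy sketch below is a minimal illustration with an assumed fractional β = 0.5 and synthetic data; it leaves out the time-frequency deconvolution and sparsity terms of the paper's full model:

```python
import numpy as np

def beta_nmf(V, rank, beta=0.5, n_iter=200, seed=0):
    """Multiplicative-update NMF under the beta-divergence.

    W is the spectral dictionary, H the temporal codes. The updates
    below are the standard MM-derived ones for a nonnegative matrix V;
    they remain well defined for fractional beta.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (V * WH ** (beta - 2.0))) / (W.T @ WH ** (beta - 1.0))
        WH = W @ H
        W *= ((V * WH ** (beta - 2.0)) @ H.T) / (WH ** (beta - 1.0) @ H.T)
    return W, H

# Recover an exactly low-rank nonnegative matrix with a fractional beta.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30)) + 1e-6
W, H = beta_nmf(V, rank=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

    The multiplicative form keeps W and H nonnegative automatically, since every factor in the update ratio is nonnegative.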

  1. Land use and land cover classification for rural residential areas in China using soft-probability cascading of multifeatures

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin

    2017-10-01

    A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification of rural residential areas in China using high-spatial-resolution images. The proposed method is used to build midlevel LULC features. Local features are frequently used as low-level feature descriptors in midlevel feature learning methods, but spectral and textural features, which are very effective low-level features, are often neglected. Moreover, the acquisition of the dictionary in sparse coding is unsupervised, which reduces the discriminative power of the midlevel features. We therefore propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model, in order to exploit the different effective low-level features and improve the discriminability of the midlevel feature descriptors. First, three kinds of typical low-level features, namely dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of a unary potential and a pairwise potential, is employed to construct the LULC classification map. Experimental results show that the proposed classification scheme achieves good performance, with a total accuracy of about 87%.

  2. Quantum internet using code division multiple access

    PubMed Central

    Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco

    2013-01-01

    A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488
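    The channel-sharing principle can be illustrated with a classical direct-sequence CDMA toy: each user's bits are spread with one of a set of orthogonal codes, summed on the common channel, and recovered by correlating against the same code. This numpy sketch is only an analogy for the classical part of the idea, not the chaotic quantum encoding itself:

```python
import numpy as np

codes = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1]])   # two orthogonal spreading codes

def transmit(bits_a, bits_b):
    """Spread each user's +/-1 bits with its code and sum on the channel."""
    return np.outer(bits_a, codes[0]) + np.outer(bits_b, codes[1])

def receive(channel, user):
    """Despread: correlate with the user's code and take the sign."""
    return np.sign(channel @ codes[user])

a = np.array([1, -1, 1])
b = np.array([-1, -1, 1])
mixed = transmit(a, b)
print(receive(mixed, 0), receive(mixed, 1))  # recovers a and b
```

    Orthogonality of the codes is what lets both users occupy the full band simultaneously; q-CDMA replaces the fixed codes with chaotic encoding and the correlator with chaos synchronization.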

  3. NEBULAR: Spectrum synthesis for mixed hydrogen-helium gas in ionization equilibrium

    NASA Astrophysics Data System (ADS)

    Schirmer, Mischa

    2016-08-01

    NEBULAR synthesizes the spectrum of a mixed hydrogen-helium gas in collisional ionization equilibrium. It is not a spectral fitting code, but it can be used to resample a model spectrum onto the wavelength grid of a real observation. It supports a wide range of temperatures and densities. NEBULAR includes free-free, free-bound, two-photon and line emission from HI, HeI and HeII. The code will either return the composite model spectrum or, if desired, the unrescaled atomic emission coefficients. It is written in C++ and depends on the GNU Scientific Library (GSL).
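    Resampling a finely gridded model spectrum onto an observation's wavelength grid can be as simple as interpolation. The sketch below uses linear interpolation on assumed synthetic grids; how NEBULAR resamples internally is not specified here, and flux-conserving rebinning may be preferable when the observed pixels are much coarser than the model grid:

```python
import numpy as np

def resample_model(model_wl, model_flux, obs_wl):
    """Resample a model spectrum onto an observed wavelength grid by
    linear interpolation of the model flux."""
    return np.interp(obs_wl, model_wl, model_flux)

model_wl = np.linspace(3000.0, 9000.0, 6001)        # 1 A model grid
model_flux = 1.0 + 0.1 * np.sin(model_wl / 200.0)   # toy continuum
obs_wl = np.linspace(4000.0, 7000.0, 1500)          # coarser detector grid
obs_model = resample_model(model_wl, model_flux, obs_wl)
print(obs_model.shape)  # (1500,)
```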

  4. Intercode comparison of gyrokinetic global electromagnetic modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Görler, T., E-mail: tobias.goerler@ipp.mpg.de; Tronko, N.; Hornsby, W. A.

    Aiming to fill a corresponding lack of sophisticated test cases for global electromagnetic gyrokinetic codes, a new hierarchical benchmark is proposed. Starting from established test sets with adiabatic electrons, fully gyrokinetic electrons and electrostatic fluctuations are taken into account before the global electromagnetic micro-instabilities are finally studied. Results from up to five codes representing different numerical approaches, including particle-in-cell, Eulerian, and semi-Lagrangian methods, are shown. By means of spectrally resolved growth rates and frequencies and comparisons of mode structures, agreement can be confirmed on ion-gyroradius scales, thus providing confidence in the correct implementation of the underlying equations.

  5. Science & Technology Review September 2005

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aufderheide III, M B

    2005-07-19

    This month's issue has the following articles: (1) The Pursuit of Fusion Energy--Commentary by William H. Goldstein; (2) A Dynamo of a Plasma--The self-organizing magnetized plasmas in a Livermore fusion energy experiment are akin to solar flares and galactic jets; (3) How One Equation Changed the World--A three-page paper by Albert Einstein revolutionized physics by linking mass and energy; (4) Recycled Equations Help Verify Livermore Codes--New analytic solutions for imploding spherical shells give scientists additional tools for verifying codes; and (5) Dust That's Worth Keeping--Scientists have solved the mystery of an astronomical spectral feature in interplanetary dust particles.

  6. Computation of Engine Noise Propagation and Scattering Off an Aircraft

    NASA Technical Reports Server (NTRS)

    Xu, J.; Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a comparison of experimental noise data measured in flight on a two-engine business jet aircraft with Kulite microphones placed on the suction surface of the wing with computational results. Both a time-domain discontinuous Galerkin spectral method and a frequency-domain spectral element method are used to simulate the radiation of the dominant spinning mode from the engine and its reflection and scattering by the fuselage and the wing. Both methods are implemented in computer codes that use the distributed memory model to make use of large parallel architectures. The results show that trends of the noise field are well predicted by both methods.

  7. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.

  8. Relativistic quantum mechanical calculations of electron-impact broadening for spectral lines in Be-like ions

    NASA Astrophysics Data System (ADS)

    Duan, B.; Bari, M. A.; Wu, Z. Q.; Jun, Y.; Li, Y. M.; Wang, J. G.

    2012-11-01

    Aims: We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from N IV to Ne VII. Methods: In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results: A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.

  9. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.

  10. [The study of M dwarf spectral classification].

    PubMed

    Yi, Zhen-Ping; Pan, Jing-Chang; Luo, A-Li

    2013-08-01

    As the most common stars in the Galaxy, M dwarfs can be used to trace the structure and evolution of the Milky Way. In addition, investigating M dwarfs is important in the search for habitable extrasolar planets orbiting them. Spectral classification of M dwarfs is fundamental to this work. The authors used the DR7 M dwarf sample of the Sloan Digital Sky Survey to extract important features from the 600-900 nm range with a random forest method. Compared to the features used in the Hammer code, the authors added three new indices. Our tests showed that the improved Hammer with the new indices is more accurate. The method has been applied to classify M dwarf spectra from LAMOST.

  11. INTRIGOSS: A new Library of High Resolution Synthetic Spectra

    NASA Astrophysics Data System (ADS)

    Franchini, Mariagrazia; Morossi, Carlo; Di Marcancantonio, Paolo; Chavez, Miguel; GES-Builders

    2018-01-01

    INTRIGOSS (INaf Trieste Grid Of Synthetic Spectra) is a new high-resolution (HiRes) synthetic spectral library designed for studying F, G, and K stars. The library is based on atmosphere models computed with specified individual element abundances via the ATLAS12 code. Normalized SPectra (NSPs) and surface Flux SPectra (FSPs) in the 4800-5400 Å wavelength range were computed by means of the SPECTRUM code. The synthetic spectra are computed with an atomic and diatomic molecular line list including "bona fide" Predicted Lines (PLs), built by tuning log gf values to reproduce a very high-SNR solar spectrum and the UVES-U580 spectra of five cool giants extracted from the Gaia-ESO survey (GES). The astrophysical gf-values were then assessed by using more than 2000 stars with homogeneous and accurate atmosphere parameters and detailed chemical compositions from GES. The validity and greater accuracy of the INTRIGOSS NSPs and FSPs with respect to other available spectral libraries are discussed. INTRIGOSS will be available on the web and will be a valuable tool for both stellar atmospheric parameter and stellar population studies.

  12. Numerical modeling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.

    2002-11-01

    The growth, saturation, and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to results obtained from the experiment. The code, Dynamo (Fortran 90), allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the curl of the momentum equation governing V are solved separately or simultaneously. The code uses a spectral representation of the vector fields via spherical harmonic basis functions in longitude and latitude, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M. L. Dudley and R. W. James (Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407, 1989). Power balance in the system has been verified in mechanically driven and perturbed hydrodynamic, kinematic, and dynamic cases. Evolution of the vacuum magnetic field has been added to facilitate comparison with the experiment. Modeling of the Madison Dynamo eXperiment will be presented.

  13. CHARRON: Code for High Angular Resolution of Rotating Objects in Nature

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Zorec, J.; Vakili, F.

    2012-12-01

    Rotation is one of the fundamental physical parameters governing stellar physics and evolution. At the same time, spectrally resolved optical/IR long-baseline interferometry has proven to be an important observing tool for measuring many physical effects linked to rotation, in particular stellar flattening, gravity darkening, and differential rotation. In order to interpret the high angular resolution observations from modern spectro-interferometers, such as VLTI/AMBER and VEGA/CHARA, we have developed an interferometry-oriented numerical model: CHARRON (Code for High Angular Resolution of Rotating Objects in Nature). We present here the characteristics of CHARRON, which is faster (≃10-30 s per model) and thus better adapted to model fitting than the first version of the code presented by Domiciano de Souza et al. (2002).

  14. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
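    With truncated PCA as the factorization (the patent names Principal Components Analysis as one choice), the factored representation and its use can be sketched as follows; the mixture data here are synthetic assumptions, not from the patent:

```python
import numpy as np

def compress_image_stack(data, n_factors):
    """Factor a (pixels x channels) data matrix into scores and loadings.

    Truncated PCA via SVD: analysis then proceeds on the n_factors
    score/loading pairs instead of the full spectra, which is the
    source of the computational savings.
    """
    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]   # spatial factors
    loadings = Vt[:n_factors]                   # spectral factors
    return scores, loadings, mean

# A 3-component synthetic mixture compresses to 3 factors losslessly.
rng = np.random.default_rng(0)
abund = rng.random((1000, 3))      # per-pixel abundances
pure = rng.random((3, 100))        # pure-component spectra
data = abund @ pure
scores, loadings, mean = compress_image_stack(data, 3)
recon = scores @ loadings + mean
print(np.linalg.norm(data - recon) / np.linalg.norm(data))
```

    Storing 3 score/loading pairs instead of 100 channels per pixel is what makes block-wise analysis of very large multivariate images tractable.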

  15. A Steady-State Kalman Predictor-Based Filtering Strategy for Non-Overlapping Sub-Band Spectral Estimation

    PubMed Central

    Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe

    2015-01-01

    This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
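    The idea of replacing boundary zeros with predicted samples can be sketched for a scalar signal: a steady-state Kalman predictor for an assumed AR(1) model (the parameters a, q, r below are illustrative, not from the paper) is run over the data and then iterated past the end:

```python
import numpy as np

def steady_state_gain(a, q, r, n_iter=500):
    """Iterate the scalar Riccati recursion to its fixed point."""
    p = q
    for _ in range(n_iter):
        p = a * a * p * r / (p + r) + q
    return p / (p + r)

def extrapolate(y, n_extra, a=0.95, q=0.1, r=0.05):
    """Extend a finite sequence past its boundary with a steady-state
    Kalman predictor for an AR(1) model x[k+1] = a*x[k] + w, so a later
    convolution filter need not consume meaningless zeros."""
    k = steady_state_gain(a, q, r)
    x = y[0]
    for obs in y[1:]:                  # run the predictor over the data
        x = a * x + a * k * (obs - x)  # x(k+1|k) = a*x + a*K*(y - x)
    tail = []
    for _ in range(n_extra):           # pure prediction past the end
        x = a * x
        tail.append(x)
    return np.concatenate([y, tail])

y = 2.0 * 0.95 ** np.arange(50)        # noiseless decaying test signal
ext = extrapolate(y, 10)
print(len(ext))  # 60
```

    Feeding the analysis filter these extrapolated samples, instead of zeros, is what restores its nominal ability to suppress spectral overlap on finite-length data.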

  16. Method of multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2004-01-06

    A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).

  17. An Analysis of Water Line Profiles in Star Formation Regions Observed by SWAS

    NASA Technical Reports Server (NTRS)

    Ashby, Matthew L. N.; Bergin, Edwin A.; Plume, Rene; Carpenter, John M.; Neufeld, David A.; Chin, Gordon; Erickson, Neal R.; Goldsmith, Paul F.; Harwit, Martin; Howe, J. E.

    2000-01-01

    We present spectral line profiles for the 557 GHz 1(1,0) → 1(0,1) ground-state rotational transition of ortho-H2(16)O for 18 galactic star formation regions observed by SWAS. Water is unambiguously detected in every source. The line profiles exhibit a wide variety of shapes, including single-peaked spectra and self-reversed profiles. We interpret these profiles using a Monte Carlo code to model the radiative transport. The observed variations in the line profiles can be explained by variations in the relative strengths of the bulk flow and small-scale turbulent motions within the clouds. Bulk flow (infall, outflow) must be present in some cloud cores, and in certain cases this bulk flow dominates the turbulent motions.

  18. Analysis of the Tanana River Basin using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Morrissey, L. A.; Ambrosia, V. G.; Carson-Henry, C.

    1981-01-01

    Digital image classification techniques were used to classify land cover/resource information in the Tanana River Basin of Alaska. Portions of four scenes of LANDSAT digital data were analyzed using computer systems at Ames Research Center in an unsupervised approach to derive cluster statistics. The spectral classes were identified using the IDIMS display and color infrared photography. Classification errors were corrected using stratification procedures. The classification scheme resulted in the following eleven categories: sedimented/shallow water, clear/deep water, coniferous forest, mixed forest, deciduous forest, shrub and grass, bog, alpine tundra, barrens, snow and ice, and cultural features. Color coded maps and acreage summaries of the major land cover categories were generated for selected USGS quadrangles (1:250,000) which lie within the drainage basin. The project was completed within six months.
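    The unsupervised derivation of spectral cluster statistics can be sketched with a plain k-means on synthetic multispectral pixels; the band values below are invented for illustration, not LANDSAT data:

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Plain k-means, standing in for the unsupervised clustering used
    to derive spectral class statistics from multispectral pixels."""
    # Deterministic init: k pixels spread evenly through the data.
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "spectral" classes in four bands.
rng = np.random.default_rng(42)
water = rng.normal([10.0, 20.0, 5.0, 2.0], 1.0, size=(100, 4))
forest = rng.normal([30.0, 40.0, 50.0, 60.0], 1.0, size=(100, 4))
X = np.vstack([water, forest])
labels, centers = kmeans(X, 2)
print(np.unique(labels[:100]), np.unique(labels[100:]))  # distinct labels
```

    The cluster means and labels play the role of the derived spectral class statistics, which an analyst would then identify against imagery, as described above.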

  19. Atmospheric radiation model for water surfaces

    NASA Technical Reports Server (NTRS)

    Turner, R. E.; Gaskill, D. W.; Lierzer, J. R.

    1982-01-01

    An atmospheric correction model was extended to account for various atmospheric radiation components in remotely sensed data. Components such as the atmospheric path radiance which results from singly scattered sky radiation specularly reflected by the water surface are considered. A component which is referred to as the virtual Sun path radiance, i.e. the singly scattered path radiance which results from the solar radiation which is specularly reflected by the water surface is also considered. These atmospheric radiation components are coded into a computer program for the analysis of multispectral remote sensor data over the Great Lakes of the United States. The user must know certain parameters, such as the visibility or spectral optical thickness of the atmosphere and the geometry of the sensor with respect to the Sun and the target elements under investigation.

  20. SU-F-I-53: Coded Aperture Coherent Scatter Spectral Imaging of the Breast: A Monte Carlo Evaluation of Absorbed Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Lakshmanan, M; Fong, G

    Purpose: Coherent scatter based imaging has shown improved contrast and molecular specificity over conventional digital mammography; however, the biological risks have not been quantified due to a lack of accurate information on absorbed dose. This study intends to characterize the dose distribution and average glandular dose from coded aperture coherent scatter spectral imaging of the breast. The dose deposited in the breast by this new diagnostic imaging modality has not yet been quantitatively evaluated. Here, various digitized anthropomorphic phantoms are tested in a Monte Carlo simulation to evaluate the absorbed dose distribution and average glandular dose using clinically feasible scan protocols. Methods: Geant4 Monte Carlo radiation transport simulation software is used to replicate the coded aperture coherent scatter spectral imaging system. Energy-sensitive, photon-counting detectors are used to characterize the x-ray beam spectra for various imaging protocols. These input spectra are cross-validated with the results from XSPECT, a commercially available application that yields x-ray tube-specific spectra for the operating parameters employed. XSPECT is also used to determine the appropriate number of photons emitted per mAs of tube current at a given kVp tube potential. With the implementation of the XCAT digital anthropomorphic breast phantom library, a variety of breast sizes with differing anatomical structure are evaluated. Simulations were performed with and without compression of the breast for dose comparison. Results: Through the Monte Carlo evaluation of a diverse population of breast types imaged under real-world scan conditions, a clinically relevant average glandular dose for this new imaging modality is extrapolated.
Conclusion: With access to the physical coherent scatter imaging system used in the simulation, the results of this Monte Carlo study may be used to directly influence the future development of the modality, keeping breast dose to a minimum while still maintaining clinically viable image quality.
