Sample records for algorithm permitting spectroscopic

  1. Diamond anvil cell for spectroscopic investigation of materials at high temperature, high pressure and shear

    DOEpatents

    Westerfield, Curtis L.; Morris, John S.; Agnew, Stephen F.

    1997-01-01

    Diamond anvil cell for spectroscopic investigation of materials at high temperature, high pressure and shear. A cell is described which, in combination with Fourier transform IR spectroscopy, permits the spectroscopic investigation of boundary layers under conditions of high temperature, high pressure and shear.

  2. Diamond anvil cell for spectroscopic investigation of materials at high temperature, high pressure and shear

    DOEpatents

    Westerfield, C.L.; Morris, J.S.; Agnew, S.F.

    1997-01-14

    Diamond anvil cell is described for spectroscopic investigation of materials at high temperature, high pressure and shear. A cell is described which, in combination with Fourier transform IR spectroscopy, permits the spectroscopic investigation of boundary layers under conditions of high temperature, high pressure and shear. 4 figs.

  3. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  4. Spectroscopic analysis and control

    DOEpatents

Tate, James D.; Reed, Christopher J.; Domke, Christopher H.; Le, Linh; Seasholtz, Mary Beth; Weber, Andy; Lipp, Charles

    2017-04-18

    Apparatus for spectroscopic analysis which includes a tunable diode laser spectrometer having a digital output signal and a digital computer for receiving the digital output signal from the spectrometer, the digital computer programmed to process the digital output signal using a multivariate regression algorithm. In addition, a spectroscopic method of analysis using such apparatus. Finally, a method for controlling an ethylene cracker hydrogenator.

  5. Infrared Spectroscopy of Explosives Residues: Measurement Techniques and Spectral Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Mark C.; Bernacki, Bruce E.

    2015-03-11

    Infrared laser spectroscopy of explosives is a promising technique for standoff and non-contact detection applications. However, the interpretation of spectra obtained in typical standoff measurement configurations presents numerous challenges. Understanding the variability in observed spectra from explosives residues and particles is crucial for design and implementation of detection algorithms with high detection confidence and low false alarm probability. We discuss a series of infrared spectroscopic techniques applied toward measuring and interpreting the reflectance spectra obtained from explosives particles and residues. These techniques utilize the high spectral radiance, broad tuning range, rapid wavelength tuning, high scan reproducibility, and low noise of an external cavity quantum cascade laser (ECQCL) system developed at Pacific Northwest National Laboratory. The ECQCL source permits measurements in configurations which would be either impractical or overly time-consuming with broadband, incoherent infrared sources, and enables a combination of rapid measurement speed and high detection sensitivity. The spectroscopic methods employed include standoff hyperspectral reflectance imaging, quantitative measurements of diffuse reflectance spectra, reflection-absorption infrared spectroscopy, microscopic imaging and spectroscopy, and nano-scale imaging and spectroscopy. Measurements of explosives particles and residues reveal important factors affecting observed reflectance spectra, including measurement geometry, substrate on which the explosives are deposited, and morphological effects such as particle shape, size, orientation, and crystal structure.

  6. A Fully Customized Baseline Removal Framework for Spectroscopic Applications.

    PubMed

    Giguere, Stephen; Boucher, Thomas; Carey, C J; Mahadevan, Sridhar; Dyar, M Darby

    2017-07-01

    The task of proper baseline or continuum removal is common to nearly all types of spectroscopy. Its goal is to remove any portion of a signal that is irrelevant to features of interest while preserving any predictive information. Despite the importance of baseline removal, median or guessed default parameters are commonly employed, often using commercially available software supplied with instruments. Several published baseline removal algorithms have been shown to be useful for particular spectroscopic applications but their generalizability is ambiguous. The new Custom Baseline Removal (Custom BLR) method presented here generalizes the problem of baseline removal by combining operations from previously proposed methods to synthesize new correction algorithms. It creates novel methods for each technique, application, and training set, discovering new algorithms that maximize the predictive accuracy of the resulting spectroscopic models. In most cases, these learned methods either match or improve on the performance of the best alternative. Examples of these advantages are shown for three different scenarios: quantification of components in near-infrared spectra of corn and laser-induced breakdown spectroscopy data of rocks, and classification/matching of minerals using Raman spectroscopy. Software to implement this optimization is available from the authors. By removing subjectivity from this commonly encountered task, Custom BLR is a significant step toward completely automatic and general baseline removal in spectroscopic and other applications.
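
    A minimal sketch of one classical ingredient of automated baseline removal, iterative polynomial fitting with peak clipping, is given below for concreteness. It is an illustrative baseline estimator only, not the Custom BLR method, whose contribution is to search over combinations of such operations.

      # Sketch: iterative polynomial baseline estimation (illustrative only,
      # not the Custom BLR algorithm described above).
      import numpy as np

      def iterative_poly_baseline(wavelengths, intensities, degree=3, n_iter=20):
          """Fit a polynomial repeatedly, clipping points above the current
          fit (likely peaks) so successive fits track the spectral floor."""
          y = intensities.astype(float).copy()
          for _ in range(n_iter):
              fit = np.polyval(np.polyfit(wavelengths, y, degree), wavelengths)
              y = np.minimum(y, fit)   # pull peaks down toward the baseline
          return np.polyval(np.polyfit(wavelengths, y, degree), wavelengths)

      # Usage: corrected = spectrum - iterative_poly_baseline(wl, spectrum)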

  7. SDSS-III Baryon Oscillation Spectroscopic Survey data release 12: Galaxy target selection and large-scale structure catalogues

    DOE PAGES

    Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; ...

    2015-11-17

    The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey, and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.

  8. SPIDERS: the spectroscopic follow-up of X-ray-selected clusters of galaxies in SDSS-IV

    DOE PAGES

    Clerc, N.; Merloni, A.; Zhang, Y. -Y.; ...

    2016-09-05

    SPIDERS (The SPectroscopic IDentification of eROSITA Sources) is a programme dedicated to the homogeneous and complete spectroscopic follow-up of X-ray active galactic nuclei and galaxy clusters over a large area (~7500 deg^2) of the extragalactic sky. SPIDERS is part of the Sloan Digital Sky Survey (SDSS)-IV project, together with the Extended Baryon Oscillation Spectroscopic Survey and the Time-Domain Spectroscopic Survey. This study describes the largest project within SPIDERS before the launch of eROSITA: an optical spectroscopic survey of X-ray-selected, massive (~10^14–10^15 M⊙) galaxy clusters discovered in ROSAT and XMM–Newton imaging. The immediate aim is to determine precise (Δz ~ 0.001) redshifts for 4000–5000 of these systems out to z ~ 0.6. The scientific goal of the program is precision cosmology, using clusters as probes of large-scale structure in the expanding Universe. We present the cluster samples, target selection algorithms and observation strategies. We demonstrate the efficiency of selecting targets using a combination of SDSS imaging data, a robust red-sequence finder and a dedicated prioritization scheme. We describe a set of algorithms and work-flow developed to collate spectra and assign cluster membership, and to deliver catalogues of spectroscopically confirmed clusters. We discuss the relevance of line-of-sight velocity dispersion estimators for the richer systems. We illustrate our techniques by constructing a catalogue of 230 spectroscopically validated clusters (0.031 < z < 0.658), found in pilot observations. Finally, we discuss two potential science applications of the SPIDERS sample: the study of the X-ray luminosity-velocity dispersion (L_X–σ) relation and the building of stacked phase-space diagrams.

  9. SPIDERS: the spectroscopic follow-up of X-ray selected clusters of galaxies in SDSS-IV

    NASA Astrophysics Data System (ADS)

    Clerc, N.; Merloni, A.; Zhang, Y.-Y.; Finoguenov, A.; Dwelly, T.; Nandra, K.; Collins, C.; Dawson, K.; Kneib, J.-P.; Rozo, E.; Rykoff, E.; Sadibekova, T.; Brownstein, J.; Lin, Y.-T.; Ridl, J.; Salvato, M.; Schwope, A.; Steinmetz, M.; Seo, H.-J.; Tinker, J.

    2016-12-01

    SPIDERS (The SPectroscopic IDentification of eROSITA Sources) is a programme dedicated to the homogeneous and complete spectroscopic follow-up of X-ray active galactic nuclei and galaxy clusters over a large area (~7500 deg^2) of the extragalactic sky. SPIDERS is part of the Sloan Digital Sky Survey (SDSS)-IV project, together with the Extended Baryon Oscillation Spectroscopic Survey and the Time-Domain Spectroscopic Survey. This paper describes the largest project within SPIDERS before the launch of eROSITA: an optical spectroscopic survey of X-ray-selected, massive (~10^14–10^15 M⊙) galaxy clusters discovered in ROSAT and XMM-Newton imaging. The immediate aim is to determine precise (Δz ~ 0.001) redshifts for 4000-5000 of these systems out to z ~ 0.6. The scientific goal of the program is precision cosmology, using clusters as probes of large-scale structure in the expanding Universe. We present the cluster samples, target selection algorithms and observation strategies. We demonstrate the efficiency of selecting targets using a combination of SDSS imaging data, a robust red-sequence finder and a dedicated prioritization scheme. We describe a set of algorithms and work-flow developed to collate spectra and assign cluster membership, and to deliver catalogues of spectroscopically confirmed clusters. We discuss the relevance of line-of-sight velocity dispersion estimators for the richer systems. We illustrate our techniques by constructing a catalogue of 230 spectroscopically validated clusters (0.031 < z < 0.658), found in pilot observations. We discuss two potential science applications of the SPIDERS sample: the study of the X-ray luminosity-velocity dispersion (L_X–σ) relation and the building of stacked phase-space diagrams.

  10. 3D "spectracoustic" system: a modular, tomographic, spectroscopic mapping imaging, non-invasive, diagnostic system for detection of small starting developing tumors like melanoma

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios

    2017-03-01

    This work led to a new method named 3D spectracoustic tomographic mapping imaging. Current and future work concerns the fabrication of a combined acoustic microscopy transducer and infrared illumination probe permitting simultaneous acquisition of the spectroscopic and tomographic information. This probe provides high-fidelity, precisely registered information from the combined modalities, termed spectracoustic information.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen

    The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally-occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth’s atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz, which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm, the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm. The accompanying source code includes code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.
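
    To make the replicate-based despiking idea concrete, the sketch below flags samples in any one scan that rise far above the channel-wise median of repeated scans and replaces them; it illustrates the general principle only and is not the Sandia hyper-UBS algorithm.

      import numpy as np

      def despike_replicates(scans, nsigma=5.0):
          """scans: (n_replicates, n_channels) repeated spectra. Cosmic-ray
          spikes are large positive outliers appearing in single replicates,
          so they stand out against the channel-wise median."""
          med = np.median(scans, axis=0)
          mad = np.median(np.abs(scans - med), axis=0) + 1e-12
          spikes = (scans - med) > nsigma * 1.4826 * mad
          clean = np.where(spikes, med, scans)  # replace spiked samples
          return clean.mean(axis=0)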

  12. Diagnosing breast cancer using Raman spectroscopy: prospective analysis

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Volynskaya, Zoya; Gardecki, Joseph A.; Nazemi, Jon; Shenk, Robert; Wang, Nancy; Dasari, Ramachandra R.; Fitzmaurice, Maryann; Feld, Michael S.

    2009-09-01

    We present the first prospective test of Raman spectroscopy in diagnosing normal, benign, and malignant human breast tissues. Prospective testing of spectral diagnostic algorithms allows clinicians to accurately assess the diagnostic information contained in, and any bias of, the spectroscopic measurement. In previous work, we developed an accurate, internally validated algorithm for breast cancer diagnosis based on analysis of Raman spectra acquired from fresh-frozen in vitro tissue samples. We now evaluate the performance of this algorithm prospectively on a large ex vivo clinical data set that closely mimics the in vivo environment. Spectroscopic data were collected from freshly excised surgical specimens, and 129 tissue sites from 21 patients were examined. Prospective application of the algorithm to the clinical data set resulted in a sensitivity of 83%, a specificity of 93%, a positive predictive value of 36%, and a negative predictive value of 99% for distinguishing cancerous from normal and benign tissues. The performance of the algorithm in different patient populations is discussed. Sources of bias in the in vitro calibration and ex vivo prospective data sets, including disease prevalence and disease spectrum, are examined, and analytical methods for comparison are provided.
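
    The reported figures follow directly from a 2x2 confusion matrix; the helper below makes the definitions explicit (the counts passed in would come from the study and are not reproduced here).

      def diagnostic_metrics(tp, fn, fp, tn):
          """Standard diagnostic test metrics from confusion-matrix counts."""
          sensitivity = tp / (tp + fn)  # fraction of cancers detected
          specificity = tn / (tn + fp)  # fraction of non-cancers cleared
          ppv = tp / (tp + fp)          # P(disease | positive result)
          npv = tn / (tn + fn)          # P(no disease | negative result)
          return sensitivity, specificity, ppv, npv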

  13. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
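
    A minimal, generic PSO loop is sketched below to make the method concrete; here `fitness` would score how well transitions predicted from trial rotational constants (A, B, C) match the observed spectrum. This is an illustrative sketch, not the modified AUTOFIT code.

      import numpy as np

      def pso_minimize(fitness, bounds, n_particles=30, n_iter=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Particle swarm search for the minimum of fitness within bounds,
          where bounds is a list of (low, high) pairs per dimension."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2,) + x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([fitness(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()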

  14. HITRAN Application Programming Interface (hapi): Extending HITRAN Capabilities

    NASA Astrophysics Data System (ADS)

    Kochanov, Roman V.; Gordon, Iouli E.; Rothman, Laurence S.; Wcislo, Piotr; Hill, Christian; Wilzewski, Jonas

    2016-06-01

    In this talk we present an update on the HITRAN Application Programming Interface (HAPI). HAPI is a free Python library providing a flexible set of tools to work with the most up-to-date spectroscopic data provided by HITRANonline (www.hitran.org). HAPI gives access to the spectroscopic parameters which are continuously being added to HITRANonline. For instance, these include non-Voigt profile parameters, foreign broadenings and shifts, and line mixing. HAPI enables more accurate spectrum calculations for spectroscopic and astrophysical applications requiring detailed modeling of the broadener. HAPI implements an expert algorithm for line profile selection for a single-layer radiative transfer calculation, and can be extended with custom line profiles and algorithms for their calculation, partition sums, instrumental functions, and temperature and pressure dependences. Possible HAPI applications include spectroscopic data validation and analysis as well as radiative-transfer calculations, experiment verification and spectroscopic code benchmarking. Kochanov RV, Gordon IE, et al. Submitted to JQSRT HighRus Special Issue 2016. Kochanov RV, Hill C, et al. ISMS 2015. http://hdl.handle.net/2142/79241. Hill C, Gordon IE, et al. Accepted to JQSRT HighRus Special Issue 2016. Wcislo P, Gordon IE, et al. Accepted to JQSRT HighRus Special Issue 2016. Wilzewski JS, Gordon IE, et al. JQSRT 2016;168:193-206. Kochanov RV, Gordon IE, et al. Clim Past 2015;11:1097-105.
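
    A brief usage sketch based on hapi's documented interface follows (function names and defaults may differ slightly between versions):

      # Download H2O line data and compute a Voigt absorption spectrum.
      from hapi import db_begin, fetch, absorptionCoefficient_Voigt

      db_begin('hitran_data')         # local cache directory for line lists
      fetch('H2O', 1, 1, 3400, 4100)  # molecule 1, isotopologue 1, cm^-1 range
      nu, coef = absorptionCoefficient_Voigt(
          SourceTables='H2O',
          Environment={'T': 296.0, 'p': 1.0})  # temperature (K), pressure (atm)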

  15. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
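
    The core idea can be sketched compactly: model each observed parallax as Gaussian around the parallax implied by a trial mean absolute magnitude, so that low-accuracy and negative parallaxes contribute naturally. The code below is a simplified illustration under that assumption; it omits the authors' treatment of selection bias and censored parallax values.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_like(params, m_app, plx_obs, plx_err):
          """Gaussian likelihood of observed parallaxes (arcsec) given a mean
          absolute magnitude M0 and intrinsic dispersion sig_M."""
          M0, sig_M = params[0], abs(params[1])
          plx_pred = 10.0 ** (0.2 * (M0 - m_app) - 1.0)  # 1/d, d in pc
          # Propagate the magnitude dispersion into parallax space.
          var = plx_err**2 + (0.2 * np.log(10) * plx_pred * sig_M) ** 2
          return 0.5 * np.sum((plx_obs - plx_pred) ** 2 / var + np.log(var))

      # result = minimize(neg_log_like, x0=[5.0, 0.3], args=(m, plx, err))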

  16. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.

  17. Balancing Contention and Synchronization on the Intel Paragon

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Nicol, David M.

    1996-01-01

    The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.
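
    For reference, the naive direct schedule can be written in a few lines (a sketch for power-of-two node counts; Scott's algorithm and the bounded-contention generalization additionally order these exchanges to control link contention on the mesh):

      def complete_exchange_schedule(n):
          """Pairwise complete-exchange schedule for n = 2^k nodes: in phase
          p, node i exchanges its block with node i XOR p, so every pair of
          nodes communicates exactly once over the n - 1 phases."""
          assert n & (n - 1) == 0, "n must be a power of two"
          return [[(i, i ^ p) for i in range(n) if i < i ^ p]
                  for p in range(1, n)]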

  18. Vibrational monitor of early demineralization in tooth enamel after in vitro exposure to phosphoridic liquid

    NASA Astrophysics Data System (ADS)

    Pezzotti, Giuseppe; Adachi, Tetsuya; Gasparutti, Isabella; Vincini, Giulio; Zhu, Wenliang; Boffelli, Marco; Rondinella, Alfredo; Marin, Elia; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2017-02-01

    The Raman spectroscopic method has been applied to quantitatively assess the in vitro degree of demineralization in healthy human teeth. Based on previous evaluations of Raman selection rules (empowered by an orientation distribution function (ODF) statistical algorithm) and on a newly proposed analysis of phonon density of states (PDOS) for selected vibrational modes of the hexagonal structure of hydroxyapatite, a molecular-scale evaluation of the demineralization process upon in vitro exposure to a highly acidic beverage (i.e., CocaCola™ Classic, pH = 2.5) could be obtained. The Raman method proved quite sensitive and spectroscopic features could be directly related to an increase in off-stoichiometry of the enamel surface structure since the very early stage of the demineralization process (i.e., when yet invisible to other conventional analytical techniques). The proposed Raman spectroscopic algorithm might possess some generality for caries risk assessment, allowing a prompt non-contact diagnostic practice in dentistry.

  19. Unsupervised learning of structure in spectroscopic cubes

    NASA Astrophysics Data System (ADS)

    Araya, M.; Mendoza, M.; Solar, M.; Mardones, D.; Bayo, A.

    2018-07-01

    We consider the problem of analyzing the structure of spectroscopic cubes using unsupervised machine learning techniques. We propose representing the target's signal as a homogeneous set of volumes through an iterative algorithm that separates the structured emission from the background while not overestimating the flux. Besides verifying some basic theoretical properties, the algorithm is designed to be tuned by domain experts, because its parameters have meaningful values in the astronomical context. Nevertheless, we propose a heuristic to automatically estimate the signal-to-noise ratio parameter of the algorithm directly from data. The resulting lightweight set of samples (≤1% of the original data) offers several advantages. For instance, it is statistically correct and computationally inexpensive to apply well-established techniques from the pattern recognition and machine learning domains, such as clustering and dimensionality reduction algorithms. We use ALMA science verification data to validate our method, and present examples of the operations that can be performed by using the proposed representation. Even though this approach is focused on providing faster and better analysis tools for the end-user astronomer, it also opens the possibility of content-aware data discovery by applying our algorithm to big data.

  20. Recent advances and remaining challenges for the spectroscopic detection of explosive threats.

    PubMed

    Fountain, Augustus W; Christesen, Steven D; Moon, Raphael P; Guicheteau, Jason A; Emmons, Erik D

    2014-01-01

    In 2010, the U.S. Army initiated a program through the Edgewood Chemical Biological Center to identify viable spectroscopic signatures of explosives and initiate environmental persistence, fate, and transport studies for trace residues. These studies were ultimately designed to integrate these signatures into algorithms and experimentally evaluate sensor performance for explosives and precursor materials in existing chemical point and standoff detection systems. Accurate and validated optical cross sections and signatures are critical in benchmarking spectroscopic-based sensors. This program has provided important information for the scientists and engineers currently developing trace-detection solutions to the homemade explosive problem. With this information, the sensitivity of spectroscopic methods for explosives detection can now be quantitatively evaluated before the sensor is deployed and tested.

  1. [Algorithm for the automated processing of rheosignals].

    PubMed

    Odinets, G S

    1988-01-01

    An algorithm for rheosignal recognition is examined for a microprocessor device with a display apparatus and both automated and manual cursor control. The algorithm permits automated registration and processing of rheosignals while taking their variability into account.

  2. Quantitative analysis of terahertz spectra for illicit drugs using adaptive-range micro-genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin

    2011-08-01

    In the field of anti-illicit drug applications, many suspicious mixture samples might consist of various drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive-range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) has been proposed. Five mixture cases are analyzed using ARVIPɛμGA-driven quantitative terahertz spectroscopic analysis in this paper. The simulation results agree with previous experimental results and with results obtained using other experimental and numerical techniques, suggesting that the proposed technique has potential applications for terahertz spectral identification of drug mixture components.
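
    For context, the simplest quantitative decomposition of a measured mixture spectrum onto known component spectra is a non-negative least-squares fit; the adaptive-range micro-genetic algorithm targets harder variants of this inverse problem. A hedged sketch:

      import numpy as np
      from scipy.optimize import nnls

      def mixture_fractions(component_spectra, mixture_spectrum):
          """component_spectra: (n_channels, n_components) reference spectra.
          Returns non-negative weights minimizing ||A w - b|| and the
          residual norm."""
          weights, residual = nnls(component_spectra, mixture_spectrum)
          return weights / weights.sum(), residual  # normalize to fractions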

  3. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
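
    For readers unfamiliar with the structure being partitioned, a textbook EM iteration for a two-component 1-D Gaussian mixture is sketched below; the proposed extension decomposes such problems into sequences of smaller, self-contained EM runs.

      import numpy as np

      def em_two_gaussians(x, n_iter=100):
          """Classic EM showing the E-step/M-step alternation."""
          mu = np.percentile(x, [25, 75]).astype(float)
          sd = np.array([x.std(), x.std()])
          pi = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: responsibility of each component for each point.
              dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
                     / (sd * np.sqrt(2 * np.pi))
              r = dens / dens.sum(axis=1, keepdims=True)
              # M-step: re-estimate parameters from the weighted data.
              n = r.sum(axis=0)
              mu = (r * x[:, None]).sum(axis=0) / n
              sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
              pi = n / len(x)
          return pi, mu, sd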

  4. Spectroscopic methods for the photodiagnosis of nonmelanoma skin cancer.

    PubMed

    Drakaki, Eleni; Vergou, Theognosia; Dessinioti, Clio; Stratigos, Alexander J; Salavastru, Carmen; Antoniou, Christina

    2013-06-01

    The importance of dermatological noninvasive imaging techniques has increased over the last decades, aiming at diagnosing nonmelanoma skin cancer (NMSC). Technological progress has led to the development of various analytical tools, enabling the in vivo/in vitro examination of lesional human skin with the aim to increase diagnostic accuracy and decrease morbidity and mortality. The structure of the skin layers, their chemical composition, and the distribution of their compounds permits the noninvasive photodiagnosis of skin diseases, such as skin cancers, especially for early stages of malignant tumors. An important role in the dermatological diagnosis and disease monitoring has been shown for promising spectroscopic and imaging techniques, such as fluorescence, diffuse reflectance, Raman and near-infrared spectroscopy, optical coherence tomography, and confocal laser-scanning microscopy. We review the use of these spectroscopic techniques as noninvasive tools for the photodiagnosis of NMSC.

  5. Spectroscopic methods for the photodiagnosis of nonmelanoma skin cancer

    NASA Astrophysics Data System (ADS)

    Drakaki, Eleni; Vergou, Theognosia; Dessinioti, Clio; Stratigos, Alexander J.; Salavastru, Carmen; Antoniou, Christina

    2013-06-01

    The importance of dermatological noninvasive imaging techniques has increased over the last decades, aiming at diagnosing nonmelanoma skin cancer (NMSC). Technological progress has led to the development of various analytical tools, enabling the in vivo/in vitro examination of lesional human skin with the aim to increase diagnostic accuracy and decrease morbidity and mortality. The structure of the skin layers, their chemical composition, and the distribution of their compounds permits the noninvasive photodiagnosis of skin diseases, such as skin cancers, especially for early stages of malignant tumors. An important role in the dermatological diagnosis and disease monitoring has been shown for promising spectroscopic and imaging techniques, such as fluorescence, diffuse reflectance, Raman and near-infrared spectroscopy, optical coherence tomography, and confocal laser-scanning microscopy. We review the use of these spectroscopic techniques as noninvasive tools for the photodiagnosis of NMSC.

  6. Toward an efficient Photometric Supernova Classifier

    NASA Astrophysics Data System (ADS)

    McClain, Bradley

    2018-01-01

    The Sloan Digital Sky Survey Supernova Survey (SDSS) discovered more than 1,000 Type Ia Supernovae, yet less than half of these have spectroscopic measurements. As wide-field imaging telescopes such as the Dark Energy Survey (DES) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) discover more supernovae, the need for accurate and computationally cheap photometric classifiers increases. My goal is to use a photometric classification algorithm based on sncosmo, a Python library for supernova cosmology analysis, to reclassify previously identified Hubble SNe and other non-spectroscopically confirmed surveys. My results will be compared to those of other photometric classifiers such as PSNID and STARDUST. In the near future, I expect to have the algorithm validated with simulated data, optimized for efficiency, and applied with high-performance computing to real data.

  7. WIYN Open Cluster Study. XXXVI. Spectroscopic Binary Orbits in NGC 188

    DTIC Science & Technology

    2009-04-01

    2000; Pleiades, Mermilliod et al. 1992; M67, Mathieu et al. 1990). Today, the advent of multi-object spectrographs permits surveys of larger stellar...open clusters (e.g., M67, Mathieu et al. (1990); Praesepe, Mermilliod et al. (1994); Pleiades, Bouvier et al. (1997); Hyades, Patience et al. (1998

  8. Spectroscopic techniques to study the immune response in human saliva

    NASA Astrophysics Data System (ADS)

    Nepomnyashchaya, E.; Savchenko, E.; Velichko, E.; Bogomaz, T.; Aksenov, E.

    2018-01-01

    Studies of immune response dynamics by means of spectroscopic techniques, i.e., laser correlation spectroscopy and fluorescence spectroscopy, are described. Laser correlation spectroscopy is aimed at measuring the sizes of particles in biological fluids. Fluorescence spectroscopy allows study of conformational and other structural changes in immune complexes. We have developed a new scheme of a laser correlation spectrometer and an original signal-processing algorithm. We have suggested a new fluorescence detection scheme based on a prism and an integrating pin diode. The developed system based on these spectroscopic techniques allows studies of complex processes in human saliva and opens prospects for individualized treatment of immune diseases.

  9. Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Malin, Martha J.

    Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been previously published, but the cause of the discrepancy was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP), and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built. The theoretical framework that the calorimeter relied upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument. The accuracy of the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.

  10. A real-time spectroscopic sensor for monitoring laser welding processes.

    PubMed

    Sibillano, Teresa; Ancona, Antonio; Berardi, Vincenzo; Lugarà, Pietro Mario

    2009-01-01

    In this paper we report on the development of a sensor for real time monitoring of laser welding processes based on spectroscopic techniques. The system is based on the acquisition of the optical spectra emitted from the laser generated plasma plume and their use to implement an on-line algorithm for both the calculation of the plasma electron temperature and the analysis of the correlations between selected spectral lines. The sensor has been patented and it is currently available on the market.

  11. Mm-Wave Spectroscopic Sensors, Catalogs, and Uncatalogued Lines

    NASA Astrophysics Data System (ADS)

    Medvedev, Ivan; Neese, Christopher F.; De Lucia, Frank C.

    2014-06-01

    Analytical chemical sensing based on high resolution rotational molecular spectra has been recognized as a viable technique for decades. We recently demonstrated a compact implementation of such a sensor. Future generations of these sensors will rely on automated algorithms for quantification of chemical dilutions based on their spectral libraries, as well as identification of spectral features not present in spectral catalogs. Here we present an algorithm aimed at detection of unidentified lines in complex molecular species based on spectroscopic libraries developed in our previous projects. We will discuss the approaches suitable for data mining in feature-rich rotational molecular spectra. Neese, C.F., I.R. Medvedev, G.M. Plummer, A.J. Frank, C.D. Ball, and F.C. De Lucia, "A Compact Submillimeter/Terahertz Gas Sensor with Efficient Gas Collection, Preconcentration, and ppt Sensitivity." Sensors Journal, IEEE, 2012. 12(8): p. 2565-2574
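
    One simple form of the catalog-comparison step is peak detection followed by nearest-neighbor matching against a line list; peaks with no catalog entry within tolerance become candidate uncatalogued lines. The sketch below is illustrative; function and parameter names are assumptions, not the authors' implementation.

      import numpy as np
      from scipy.signal import find_peaks

      def uncatalogued_lines(freqs, intensities, catalog_freqs,
                             tol=0.5, snr=5.0):
          """Return peak frequencies with no catalog line within tol
          (same units as freqs, e.g. MHz)."""
          noise = 1.4826 * np.median(np.abs(intensities - np.median(intensities)))
          peaks, _ = find_peaks(intensities, height=snr * noise)
          cat = np.sort(np.asarray(catalog_freqs))
          unmatched = []
          for p in peaks:
              i = np.searchsorted(cat, freqs[p])
              nearest = min(abs(freqs[p] - cat[max(i - 1, 0)]),
                            abs(freqs[p] - cat[min(i, len(cat) - 1)]))
              if nearest > tol:
                  unmatched.append(freqs[p])
          return unmatched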

  12. A survey of provably correct fault-tolerant clock synchronization techniques

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1988-01-01

    Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.
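
    The flavor of such algorithms can be conveyed by the fault-tolerant averaging step used in interactive-convergence schemes (a sketch of the voting step only, not any specific algorithm from the survey):

      def fault_tolerant_average(clock_readings, f):
          """With at most f faulty clocks out of n >= 3f + 1, discarding the
          f lowest and f highest readings bounds the influence of arbitrary
          (Byzantine) values on the average."""
          r = sorted(clock_readings)
          trimmed = r[f:len(r) - f]
          return sum(trimmed) / len(trimmed)

      # Each node reads every other node's clock, applies the function above,
      # and slews its own clock toward the result once per resync round.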

  13. SDSS-IV eBOSS emission-line galaxy pilot survey

    DOE PAGES

    Comparat, J.; Delubac, T.; Jouvel, S.; ...

    2016-08-09

    The Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) will observe 195,000 emission-line galaxies (ELGs) to measure the Baryon Acoustic Oscillation (BAO) standard ruler at redshift 0.9. To test different ELG selection algorithms, 9,000 spectra were observed with the SDSS spectrograph as a pilot survey based on data from several imaging surveys. First, using visual inspection and redshift quality flags, we show that the automated spectroscopic redshifts assigned by the pipeline meet the quality requirements for a reliable BAO measurement. We also show the correlations between sky emission, signal-to-noise ratio in the emission lines, and redshift error. Then we provide a detailed description of each target selection algorithm we tested and compare them with the requirements of the eBOSS experiment. As a result, we provide reliable redshift distributions for the different target selection schemes we tested. Lastly, we determine the target selection algorithm best suited for application to DECam photometry because it fulfills the eBOSS survey efficiency requirements.

  14. Detecting outliers and learning complex structures with large spectroscopic surveys - a case study with APOGEE stars

    NASA Astrophysics Data System (ADS)

    Reis, Itamar; Poznanski, Dovi; Baron, Dalya; Zasowski, Gail; Shahaf, Sahar

    2018-05-01

    In this work, we apply and expand on a recently introduced outlier detection algorithm that is based on an unsupervised random forest. We use the algorithm to calculate a similarity measure for stellar spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE). We show that the similarity measure traces non-trivial physical properties and contains information about complex structures in the data. We use it for visualization and clustering of the data set, and discuss its ability to find groups of highly similar objects, including spectroscopic twins. Using the similarity matrix to search the data set for objects allows us to find objects that are impossible to find using their best-fitting model parameters. This includes extreme objects for which the models fail, and rare objects that are outside the scope of the model. We use the similarity measure to detect outliers in the data set, and find a number of previously unknown Be-type stars, spectroscopic binaries, carbon rich stars, young stars, and a few that we cannot interpret. Our work further demonstrates the potential for scientific discovery when combining machine learning methods with modern survey data.

  15. Infrared Spectroscopic Imaging: The Next Generation

    PubMed Central

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas—data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  16. Combined spectroscopic imaging and chemometric approach for automatically partitioning tissue types in human prostate tissue biopsies

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil

    2001-07-01

    We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64 x 64 pixels, each 61 μm^2, and has a spectral range of 2-10.5 microns. Each imaging data set was collected at 16 cm^-1 spectral resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
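
    A minimal Fuzzy C-Means iteration is sketched below for readers unfamiliar with the algorithm; each row of X would be the IR spectrum at one image pixel, and the soft memberships provide the tissue-type partitioning. This is a generic implementation, not the authors' processing pipeline.

      import numpy as np

      def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
          """X: (n_pixels, n_channels); returns (memberships, centroids)."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(X), c))
          u /= u.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              w = u ** m                                  # fuzzified weights
              centroids = (w.T @ X) / w.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centroids[None, :, :],
                                 axis=2) + 1e-12
              u = d ** (-2.0 / (m - 1.0))                 # membership update
              u /= u.sum(axis=1, keepdims=True)
          return u, centroids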

  17. Stellar Astrophysics with a Dispersed Fourier Transform Spectrograph. II. Orbits of Double-lined Spectroscopic Binaries

    NASA Astrophysics Data System (ADS)

    Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.; McMillan, Robert S.; Murison, Marc; Meade, Jeff; Hindsley, Robert

    2011-07-01

    We present orbital parameters for six double-lined spectroscopic binaries (ι Pegasi, ω Draconis, 12 Boötis, V1143 Cygni, β Aurigae, and Mizar A) and two double-lined triple star systems (κ Pegasi and η Virginis). The orbital fits are based upon high-precision radial velocity (RV) observations made with a dispersed Fourier Transform Spectrograph, or dFTS, a new instrument that combines interferometric and dispersive elements. For some of the double-lined binaries with known inclination angles, the quality of our RV data permits us to determine the masses M 1 and M 2 of the stellar components with relative errors as small as 0.2%.

  18. redMaGiC: selecting luminous red galaxies from the DES Science Verification data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozo, E.

    We introduce redMaGiC, an automated algorithm for selecting Luminous Red Galaxies (LRGs). The algorithm was developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the color cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. Additionally, we demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine-learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalog sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4%. We also test our algorithm with Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8) and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1% level.

  19. Material parameter estimation with terahertz time-domain spectroscopy.

    PubMed

    Dorney, T D; Baraniuk, R G; Mittleman, D M

    2001-07-01

    Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.

  20. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature-modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
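
    The two-step structure can be illustrated on a 1-D complex signal: the phase of the lag-1 autocorrelation estimates the linear (first-order) phase slope, and the dominant residual phase supplies the zero-order term. The sketch below substitutes a weighted mean for the paper's histogram analysis of the residual phase.

      import numpy as np

      def phase_correct_1d(img):
          """img: complex 1-D image. Returns the phase-corrected image."""
          # First order: phase slope (rad/pixel) from lag-1 autocorrelation.
          slope = np.angle(np.sum(img[1:] * np.conj(img[:-1])))
          corrected = img * np.exp(-1j * slope * np.arange(len(img)))
          # Zero order: dominant phase of the corrected image.
          phi0 = np.angle(np.sum(corrected))
          return corrected * np.exp(-1j * phi0)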

  1. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  2. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  3. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
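
    As an illustration of the classification stage, boosted decision trees are available off the shelf; in the sketch below the feature arrays are placeholders that would hold, e.g., SALT2 parameters or wavelet coefficients per light curve.

      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score

      def classify_light_curves(train_X, train_y, test_X, test_y):
          """train_X/test_X: (n_sne, n_features) light-curve feature arrays;
          y arrays hold 1 for type Ia, 0 otherwise."""
          clf = GradientBoostingClassifier(n_estimators=400, max_depth=3)
          clf.fit(train_X, train_y)
          scores = clf.predict_proba(test_X)[:, 1]  # P(type Ia)
          return scores, roc_auc_score(test_y, scores)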

  4. Noninvasive glucose monitoring by optical reflective and thermal emission spectroscopic measurements

    NASA Astrophysics Data System (ADS)

    Saetchnikov, V. A.; Tcherniavskaia, E. A.; Schiffner, G.

    2005-08-01

    Noninvasive method for blood glucose monitoring in cutaneous tissue based on reflective spectrometry combined with a thermal emission spectroscopy has been developed. Regression analysis, neural network algorithms and cluster analysis are used for data processing.

  5. The Hydrothermal Diamond Anvil Cell (HDAC) for raman spectroscopic studies of geologic fluids at high pressures and temperatures

    USGS Publications Warehouse

    Schmidt, Christian; Chou, I-Ming; Dubessy, Jean; Caumon, Marie-Camille; Pérez, Fernando Rull

    2012-01-01

    In this chapter, we describe the hydrothermal diamond-anvil cell (HDAC), which is specifically designed for experiments on systems with aqueous fluids to temperatures up to ~1000 °C and pressures up to a few GPa to tens of GPa. This cell permits optical observation of the sample and the in situ determination of properties by ‘photon-in photon-out’ techniques such as Raman spectroscopy. Several methods for pressure measurement are discussed in detail, including the Raman spectroscopic pressure sensors α-quartz, berlinite, zircon, cubic boron nitride (c-BN), and 13C-diamond, the fluorescence sensors ruby (α-Al2O3:Cr3+), Sm:YAG (Y3Al5O12:Sm3+) and SrB4O7:Sm2+, and measurements of phase-transition temperatures. Furthermore, we give an overview of published Raman spectroscopic studies of geological fluids to high pressures and temperatures, in which diamond anvil cells were applied.

  6. Chapter 7: The hydrothermal diamond anvil cell (HDAC) for Raman spectroscopic studies of geological fluids at high pressures and temperatures

    USGS Publications Warehouse

    Schmidt, Christian; Chou, I-Ming; Dubessy, J.; Caumon, M.-C.; Rull, F.

    2012-01-01

    In this chapter, we describe the hydrothermal diamond-anvil cell (HDAC), which is specifically designed for experiments on systems with aqueous fluids at temperatures up to ~1000 °C and pressures from a few GPa up to tens of GPa. This cell permits optical observation of the sample and the in situ determination of properties by ‘photon-in photon-out’ techniques such as Raman spectroscopy. Several methods for pressure measurement are discussed in detail, including the Raman spectroscopic pressure sensors α-quartz, berlinite, zircon, cubic boron nitride (c-BN), and 13C-diamond; the fluorescence sensors ruby (α-Al2O3:Cr3+), Sm:YAG (Y3Al5O12:Sm3+), and SrB4O7:Sm2+; and measurements of phase-transition temperatures. Furthermore, we give an overview of published Raman spectroscopic studies of geological fluids at high pressures and temperatures in which diamond anvil cells were applied.

  7. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed, based on the method of Lee et al., for the prediction of sound pressure level from low-frequency, high-intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth-dependent Green's function. The distance response is then obtained as the sum of these transforms and the fast Fourier transform (FFT) of the residual k-dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, resulting in a substantial reduction in computation time.
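
    The correction idea (subtract a term whose transform is known in closed form, transform only the residual numerically, then add the analytic result back) can be shown in a toy one-dimensional Fourier analogue. This is not the Hankel-transform formulation of the actual FFP; the Green's function and the split below are invented purely for illustration.

    ```python
    import numpy as np

    # Toy 1D analogue of the FFP correction: split G(k) into G0(k), whose
    # continuous transform is known analytically, plus a residual that is
    # transformed numerically with the FFT.
    n, dk = 4096, 0.01
    k = np.arange(n) * dk
    a = 0.5
    G = np.exp(-a * k) / (1.0 + k**2)   # toy depth-dependent Green's function
    G0 = np.exp(-a * k)                 # part with a known analytic transform

    r = 2.0 * np.pi * np.fft.fftfreq(n, d=dk)
    numeric = dk * np.fft.fft(G - G0)   # FFT of the residual only
    analytic = 1.0 / (a + 1j * r)       # exact: integral of exp(-a k - i k r), k >= 0
    p = numeric + analytic              # corrected distance response

    direct = dk * np.fft.fft(G)         # naive all-numeric transform
    print(np.max(np.abs(p - direct)))   # size of the correction applied
    ```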

  8. 76 FR 59754 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... priority allocation algorithm for the SPXPM option class,\\5\\ subject to certain conditions. \\5\\ SPXPM is... algorithm in effect for the class, subject to various conditions set forth in subparagraphs (b)(3)(A... permit the allocation algorithm in effect for AIM in the SPXPM option class to be the price-time priority...

  9. Novel Semi-Parametric Algorithm for Interference-Immune Tunable Absorption Spectroscopy Gas Sensing

    PubMed Central

    Michelucci, Umberto; Venturini, Francesca

    2017-01-01

    One of the most common limits to gas sensor performance is the presence of unwanted interference fringes arising, for example, from multiple reflections between surfaces in the optical path. Additionally, since the amplitude and the frequency of these interferences depend on the distance and alignment of the optical elements, they are affected by temperature changes and mechanical disturbances, giving rise to a drift of the signal. In this work, we present a novel semi-parametric algorithm that allows the extraction of a signal, like the spectroscopic absorption line of a gas molecule, from a background containing arbitrary disturbances, without having to make any assumption on the functional form of these disturbances. The algorithm is applied first to simulated data and then to oxygen absorption measurements in the presence of strong fringes. To the best of the authors’ knowledge, the algorithm enables unprecedented accuracy, particularly if the fringes have a free spectral range and amplitude comparable to those of the signal to be detected. The described method has the advantage of being based purely on post-processing and of being extremely straightforward to implement if the functional form of the Fourier transform of the signal is known. Therefore, it has the potential to enable interference-immune absorption spectroscopy. Finally, its relevance goes beyond absorption spectroscopy for gas sensing, since it can be applied to any kind of spectroscopic data. PMID:28991161

  10. redMaGiC: Selecting luminous red galaxies from the DES Science Verification data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozo, E.; Rykoff, E. S.; Abate, A.

    Here, we introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10⁻³ (h⁻¹ Mpc)⁻³, and a median photo-z bias (z_spec − z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.
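
    The quality metrics quoted above are easy to compute for any photo-z catalogue. A minimal sketch, assuming the median/normalization conventions written in the comments (the paper may define the 5σ outlier cut slightly differently):

    ```python
    import numpy as np

    def photoz_metrics(z_spec, z_photo):
        """Bias, scatter, and 5-sigma outlier fraction as in the abstract."""
        dz = z_spec - z_photo
        norm = dz / (1.0 + z_spec)                  # (z_spec - z_photo)/(1+z)
        bias = np.median(dz)                        # median photo-z bias
        scatter = np.std(norm)                      # sigma_z / (1 + z)
        outlier_frac = np.mean(
            np.abs(norm - np.median(norm)) > 5.0 * scatter)
        return bias, scatter, outlier_frac
    ```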

  11. redMaGiC: Selecting luminous red galaxies from the DES Science Verification data

    DOE PAGES

    Rozo, E.; Rykoff, E. S.; Abate, A.; ...

    2016-05-30

    Here, we introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10⁻³ (h⁻¹ Mpc)⁻³, and a median photo-z bias (z_spec − z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.

  12. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  13. Accurate ab initio quartic force fields for borane and BeH2

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.

    1992-01-01

    The quartic force fields of BH3 and BeH2 have been computed ab initio using an augmented coupled cluster (CCSD(T)) method and basis sets of spdf and spdfg quality. For BH3, the computed spectroscopic constants are in very good agreement with recent experimental data, and definitively confirm misassignments in some older work, in agreement with recent ab initio studies. Using the computed spectroscopic constants, the rovibrational partition function for both molecules has been constructed using a modified direct numerical summation algorithm, and JANAF-style thermochemical tables are presented.
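
    The direct numerical summation step has a compact form: the partition function feeding the JANAF-style tables is a Boltzmann-weighted sum over the computed rovibrational levels. A minimal sketch in spectroscopic units; the paper's modified summation, level truncation, and degeneracy bookkeeping are not reproduced here.

    ```python
    import numpy as np

    K_B = 0.6950348  # Boltzmann constant in cm^-1 / K (spectroscopic units)

    def partition_function(energies_cm1, degeneracies, temperature):
        """Direct numerical summation over rovibrational levels:
        Q(T) = sum_i g_i * exp(-E_i / (k_B T)), with E_i in cm^-1."""
        e = np.asarray(energies_cm1, dtype=float)
        g = np.asarray(degeneracies, dtype=float)
        return np.sum(g * np.exp(-e / (K_B * temperature)))

    # Toy usage: a rigid-rotor-like ladder with (2J+1) degeneracies.
    J = np.arange(0, 100)
    print(partition_function(10.0 * J * (J + 1), 2 * J + 1, 300.0))
    ```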

  14. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
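
    The key idea, segmenting on the correlation between detection channels, can be sketched with a sliding-window Pearson correlation followed by connected-component labelling. The window size, threshold, and minimum object size below are illustrative assumptions, not the authors' parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def segment_by_channel_correlation(ch1, ch2, win=7, thresh=0.5,
                                       min_size=50):
        """Label regions where two fluorescence channels correlate locally."""
        mean = lambda a: ndimage.uniform_filter(a.astype(float), win)
        m1, m2 = mean(ch1), mean(ch2)
        cov = mean(ch1 * ch2) - m1 * m2
        s1 = np.sqrt(np.maximum(mean(ch1**2) - m1**2, 1e-12))
        s2 = np.sqrt(np.maximum(mean(ch2**2) - m2**2, 1e-12))
        corr = cov / (s1 * s2)                    # windowed Pearson r
        labels, n = ndimage.label(corr > thresh)  # connected components
        sizes = ndimage.sum(np.ones_like(labels), labels, range(1, n + 1))
        for i, s in enumerate(sizes, start=1):    # drop tiny spurious objects
            if s < min_size:
                labels[labels == i] = 0
        return labels
    ```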

  15. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  16. Testing of Gyroless Estimation Algorithms for the Fuse Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, R.; Thienel, J.; Oshman, Yaakov

    2004-01-01

    This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking, while the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.
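
    The propagation step of the second approach relies on Euler's rigid-body equations. A minimal sketch of that rate propagation only (the Kalman measurement update is omitted, and the inertia tensor and torque are hypothetical inputs):

    ```python
    import numpy as np

    def propagate_rates(I, omega, torque, dt, steps):
        """Propagate body rates with Euler's rigid-body equations,
        I * domega/dt = -omega x (I omega) + torque, via 4th-order RK."""
        I_inv = np.linalg.inv(I)

        def f(w):
            return I_inv @ (torque - np.cross(w, I @ w))

        for _ in range(steps):
            k1 = f(omega)
            k2 = f(omega + 0.5 * dt * k1)
            k3 = f(omega + 0.5 * dt * k2)
            k4 = f(omega + dt * k3)
            omega = omega + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return omega

    # Hypothetical spacecraft: diagonal inertia, small constant torque.
    I = np.diag([120.0, 100.0, 80.0])
    print(propagate_rates(I, np.array([0.01, 0.0, 0.02]),
                          np.array([0.0, 1e-4, 0.0]), dt=0.1, steps=600))
    ```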

  17. Cosmology with the cosmic web

    NASA Astrophysics Data System (ADS)

    Forero-Romero, J. E.

    2017-07-01

    This talk summarizes different algorithms that can be used to trace the cosmic web in both simulations and observations. We present different applications in galaxy formation and cosmology. Finally, we show how the Dark Energy Spectroscopic Instrument (DESI) could be a good place to apply these techniques.

  18. Correcting STIS CCD Point-Source Spectra for CTE Loss

    NASA Technical Reports Server (NTRS)

    Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus

    2006-01-01

    We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.

  19. High resolution spectroscopic mapping imaging applied in situ to multilayer structures for stratigraphic identification of painted art objects

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios Th.

    2016-04-01

    The development of non-destructive techniques is a reality in the field of conservation science. These techniques are usually less accurate than analytical micro-sampling techniques; however, the proper development of soft-computing techniques can improve their accuracy. In this work, we propose a real-time, fast-acquisition spectroscopic mapping imaging system that operates from the ultraviolet to the mid-infrared (UV/Vis/nIR/mIR) region of the electromagnetic spectrum and is supported by a set of soft-computing methods to identify the materials that exist in a stratigraphic structure of paint layers. Specifically, the system acquires spectra in diffuse-reflectance mode, scanning a Region-Of-Interest (ROI) over a wavelength range from 200 up to 5000 nm. A fuzzy c-means clustering algorithm, the soft-computing method adopted here, produces the mapping images. The method was evaluated on a Byzantine painted icon.
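
    Fuzzy c-means itself is compact enough to sketch: soft memberships and cluster centres are updated alternately until convergence. A plain NumPy version, with the fuzzifier m and the random initialisation chosen arbitrarily rather than taken from the paper:

    ```python
    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Basic fuzzy c-means; X is (n_pixels, n_bands) reflectance spectra.
        Returns cluster centres and the soft membership matrix."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(X), n_clusters))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
        p = 2.0 / (m - 1.0)
        for _ in range(max_iter):
            um = u ** m
            centers = (um.T @ X) / um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                               axis=2) + 1e-12
            # u_ij = d_ij^(-p) / sum_k d_ik^(-p)
            u_new = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
            if np.abs(u_new - u).max() < tol:
                u = u_new
                break
            u = u_new
        return centers, u
    ```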

  20. Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.

    ERIC Educational Resources Information Center

    Mostafa, J.; Lam, W.

    2000-01-01

    Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…

  1. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from Earth resources satellites. The technique eliminates the need for expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.

  2. The SDSS-IV extended baryon oscillation spectroscopic survey: Luminous red galaxy target selection

    DOE PAGES

    Prakash, Abhishek; Licquia, Timothy C.; Newman, Jeffrey A.; ...

    2016-06-08

    Here, we describe the algorithm used to select the luminous red galaxy (LRG) sample for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV) using photometric data from both the SDSS and the Wide-field Infrared Survey Explorer. LRG targets are required to meet a set of color selection criteria and have z-band and i-band MODEL magnitudes z < 19.95 and 19.9 < i < 21.8, respectively. Our algorithm selects roughly 50 LRG targets per square degree, the great majority of which lie in the redshift range 0.6 < z < 1.0 (median redshift 0.71). We also demonstrate that our methods are highly effective at eliminating stellar contamination and lower-redshift galaxies. We perform a number of tests using spectroscopic data from SDSS-III/BOSS ancillary programs to determine the redshift reliability of our target selection and its ability to meet the science requirements of eBOSS. The SDSS spectra are of high enough signal-to-noise ratio that at least ~89% of the target sample yields secure redshift measurements. Finally, we present tests of the uniformity and homogeneity of the sample, demonstrating that it should be clean enough for studies of the large-scale structure of the universe at higher redshifts than SDSS-III/BOSS LRGs reached.
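
    The magnitude window quoted above translates directly into a selection mask; the colour cuts themselves are survey-specific, so in this minimal sketch they enter only as a precomputed boolean array:

    ```python
    import numpy as np

    def select_lrg_targets(z_mag, i_mag, passes_color_cuts):
        """Apply the eBOSS LRG magnitude window from the abstract.
        passes_color_cuts is a boolean mask for the (unspecified here)
        SDSS+WISE color selection criteria."""
        mag_ok = (z_mag < 19.95) & (i_mag > 19.9) & (i_mag < 21.8)
        return mag_ok & passes_color_cuts
    ```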

  3. THE SDSS-IV EXTENDED BARYON OSCILLATION SPECTROSCOPIC SURVEY: LUMINOUS RED GALAXY TARGET SELECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, Abhishek; Licquia, Timothy C.; Newman, Jeffrey A.

    2016-06-01

    We describe the algorithm used to select the luminous red galaxy (LRG) sample for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV) using photometric data from both the SDSS and the Wide-field Infrared Survey Explorer. LRG targets are required to meet a set of color selection criteria and have z-band and i-band MODEL magnitudes z < 19.95 and 19.9 < i < 21.8, respectively. Our algorithm selects roughly 50 LRG targets per square degree, the great majority of which lie in the redshift range 0.6 < z < 1.0 (median redshift 0.71). We demonstrate that our methods are highly effective at eliminating stellar contamination and lower-redshift galaxies. We perform a number of tests using spectroscopic data from SDSS-III/BOSS ancillary programs to determine the redshift reliability of our target selection and its ability to meet the science requirements of eBOSS. The SDSS spectra are of high enough signal-to-noise ratio that at least ∼89% of the target sample yields secure redshift measurements. We also present tests of the uniformity and homogeneity of the sample, demonstrating that it should be clean enough for studies of the large-scale structure of the universe at higher redshifts than SDSS-III/BOSS LRGs reached.

  4. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation, as well as by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with 1 watt of radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  5. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
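
    The coarse-to-fine strategy can be sketched with an off-the-shelf non-negative factorization standing in for the paper's constrained MCR solver: solve on a decimated spectral grid, interpolate the pure-component spectra onto the next finer grid, and reuse them as initial values. The decimation scheme and the use of scikit-learn's NMF are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def multiresolution_nmf(D, n_components, levels=3):
        """Coarse-to-fine pure-component factorization D ~ C @ S.
        Each level doubles the spectral resolution; the previous C and the
        upsampled S seed the next fit, so refined levels start near the
        solution (the source of the convergence acceleration)."""
        C = S = None
        for lvl in reversed(range(levels)):        # lvl = levels-1 is coarsest
            D_lvl = D[:, ::2 ** lvl]               # coarsened spectra
            if S is None:
                model = NMF(n_components, init="nndsvda", max_iter=500)
                C = model.fit_transform(D_lvl)
            else:
                old = np.linspace(0, 1, S.shape[1])
                new = np.linspace(0, 1, D_lvl.shape[1])
                S0 = np.vstack([np.interp(new, old, s) for s in S])
                model = NMF(n_components, init="custom", max_iter=500)
                C = model.fit_transform(D_lvl, W=np.ascontiguousarray(C),
                                        H=np.ascontiguousarray(S0))
            S = model.components_
        return C, S
    ```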

  6. Firefly as a novel swarm intelligence variable selection method in spectroscopy.

    PubMed

    Goodarzi, Mohammad; dos Santos Coelho, Leandro

    2014-12-10

    A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect life behavior to, e.g., find the shortest path between a food source and the nest. The decision is made by a crowd, leading to a more robust model that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.

  7. A Novel Sky-Subtraction Method Based on Non-negative Matrix Factorisation with Sparsity for Multi-object Fibre Spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Long; Ye, Zhongfu

    2016-12-01

    A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The proposed method is redesigned for sky-subtraction considering the characteristics of the skylights. It has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Components Analysis approaches, sky-subtraction based on non-negative matrix factorisation with sparsity offers higher accuracy and flexibility, and is of value for multi-object fibre spectroscopic telescope surveys. To demonstrate the effectiveness and superiority of the proposed algorithm, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data, as the mechanisms of multi-object fibre spectroscopic telescopes are similar.

  8. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2009-05-01

    OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.

  9. An Algorithm for the Calculation of Exact Term Discrimination Values.

    ERIC Educational Resources Information Center

    Willett, Peter

    1985-01-01

    Reports algorithm for calculation of term discrimination values that is sufficiently fast in operation to permit use of exact values. Evidence is presented to show that relationship between term discrimination and term frequency is crucially dependent upon type of inter-document similarity measure used for calculation of discrimination values. (13…
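
    In the classical formulation, a term's discrimination value is the change in document-space density when the term is removed. The naive exact computation below (centroid-based density, one full re-computation per term) is what the reported algorithm accelerates; the fast method itself is not reproduced here.

    ```python
    import numpy as np

    def density(doc_term, eps=1e-12):
        """Average cosine similarity of each document to the centroid."""
        centroid = doc_term.mean(axis=0)
        num = doc_term @ centroid
        den = (np.linalg.norm(doc_term, axis=1)
               * np.linalg.norm(centroid) + eps)
        return np.mean(num / den)

    def term_discrimination_values(doc_term):
        """Exact DVs: density change when each term is removed.
        Positive DV means removing the term packs documents closer
        together, i.e. the term is a good discriminator."""
        base = density(doc_term)
        dvs = np.empty(doc_term.shape[1])
        for t in range(doc_term.shape[1]):
            dvs[t] = density(np.delete(doc_term, t, axis=1)) - base
        return dvs
    ```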

  10. Time-frequency analysis in optical coherence tomography for technical objects examination

    NASA Astrophysics Data System (ADS)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Trojanowski, Michał; Pluciński, Jerzy

    2014-05-01

    Optical coherence tomography (OCT) is one of the most advanced optical measurement techniques for complex structure visualization. The advantages of OCT have been used for surface and subsurface defect detection in composite materials, polymers, ceramics, non-metallic protective coatings, and many more. Our research activity has been focused on time-frequency spectroscopic analysis in OCT. It is based on time-resolved spectral analysis of the backscattered optical signal delivered by the OCT. The time-frequency method gives the spectral characteristic of optical radiation backscattered or backreflected from particular points inside the tested device. This provides more information about the sample, which is useful for further analysis. Nowadays, applications of spectroscopic analysis for composite layer characterization or tissue recognition have been reported. During our studies we have found new applications of spectroscopic analysis. We have used this method for thickness estimation of thin films that are below the resolution of OCT. Also, we have combined the spectroscopic analysis with polarization-sensitive OCT (PS-OCT). This approach enables a multi-order retardation value to be obtained directly and may become a breakthrough in PS-OCT measurements of highly birefringent media. In this work, we present the time-frequency spectroscopic algorithms and their applications for OCT. Also, theoretical simulations and measurement validation of this method are shown.

  11. Laser synchronized high-speed shutter for spectroscopic application

    DOEpatents

    Miles, Paul C.; Porter, Eldon L.; Prast, Thomas L.; Sunnarborg, Duane A.

    2002-01-01

    A fast mechanical shutter, based on rotating chopper wheels, has been designed and implemented to shutter the entrance slit of a spectrograph. The device enables an exposure time of 9 μs to be achieved for a 0.8 mm wide spectrograph entrance slit, achieves 100% transmission in the open state, and provides an essentially infinite extinction ratio. The device further incorporates chopper wheel position sensing electronics to permit the synchronous triggering of a laser source.

  12. A Search for Companions to the Pulsating sdB Star EC 20117-4014

    NASA Astrophysics Data System (ADS)

    Otani, T.; Oswalt, T.; Amaral, M.; Jordan, R.

    2017-03-01

    EC 20117-4014 is known to be a spectroscopic binary system consisting of an sdB star and F5V star. It was monitored using the SARA-CT telescope in Cerro Tololo, Chile over several observing seasons. Periodic O-C variations were detected in the two highest amplitude pulsations in EC 20117-4014, permitting detection of the F5V companion, whose period and semimajor axis were previously unknown.

  13. The ALBA spectroscopic LEEM-PEEM experimental station: layout and performance

    PubMed Central

    Aballe, Lucia; Foerster, Michael; Pellegrin, Eric; Nicolas, Josep; Ferrer, Salvador

    2015-01-01

    The spectroscopic LEEM-PEEM experimental station at the CIRCE helical undulator beamline, which started user operation at the ALBA Synchrotron Light Facility in 2012, is presented. This station, based on an Elmitec LEEM III microscope with electron imaging energy analyzer, permits surfaces to be imaged with chemical, structural and magnetic sensitivity down to a lateral spatial resolution better than 20 nm with X-ray excited photoelectrons and 10 nm in LEEM and UV-PEEM modes. Rotation around the surface normal and application of electric and (weak) magnetic fields are possible in the microscope chamber. In situ surface preparation capabilities include ion sputtering, high-temperature flashing, exposure to gases, and metal evaporation with quick evaporator exchange. Results from experiments in a variety of fields and imaging modes will be presented in order to illustrate the ALBA XPEEM capabilities. PMID:25931092

  14. A stochastic conflict resolution model for trading pollutant discharge permits in river systems.

    PubMed

    Niksokhan, Mohammad Hossein; Kerachian, Reza; Amin, Pedram

    2009-07-01

    This paper presents an efficient methodology for developing pollutant discharge permit trading in river systems, considering the conflicting interests of the decision-makers and stakeholders involved. In this methodology, a trade-off curve between objectives is developed using a powerful and recently developed multi-objective genetic algorithm technique known as the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The best non-dominated solution on the trade-off curve is identified using the Young conflict resolution theory, which considers the utility functions of decision makers and stakeholders of the system. These utility functions are related to the total treatment cost and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using Monte Carlo analysis. Finally, an optimization model provides the trading discharge permit policies. The practical utility of the proposed methodology in decision-making is illustrated through a realistic example of the Zarjub River in the northern part of Iran.

  15. The role of simulation chambers in the development of spectroscopic techniques: campaigns at EUPHORE

    NASA Astrophysics Data System (ADS)

    Ródenas, Milagros; Muñoz, Amalia; Euphore Team

    2016-04-01

    Simulation chambers are a very useful tool for the study of chemical reactions and their products, but also for characterizing instruments. The development of spectroscopic techniques over the last decades has benefited from tests and intercomparison exercises carried out in chambers. In fact, instruments can be exposed to various controlled atmospheric scenarios that account for different environmental conditions, eliminating the uncertainties associated with fluctuations of the air mass, which must be taken into account when extrapolating results to real conditions. Hence, a given instrument can be characterized by assessing its precision, accuracy, detection limits, time response, and potential interferences in the presence of other chemical compounds, aerosols, etc. This means the instrument can be calibrated and validated, which makes it possible to enhance its capabilities. Moreover, chambers are also the scenario for intercomparison trials, permitting multiple instruments to sample from the same well-mixed air mass simultaneously. An overview is given of different campaigns to characterize and/or intercompare spectroscopic techniques that have taken place in simulation chambers; in particular, those carried out at EUPHORE (two twin domes, 200 m3 each, Spain), where various intercomparison exercises have been deployed in the framework of European projects (e.g., the TOXIC, FIONA, and PSOA campaigns supported by EUROCHAMP-II). With the common aim of measuring given compounds (e.g., HONO, NO2, OH, glyoxal, m-glyoxal, etc.), an important number of spectroscopic instruments and institutions have been involved in chamber experiments, having the chance to intercompare among themselves and also with other non-spectroscopic systems (e.g., monitors, chromatographs, etc.) or model simulations.

  16. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Hänsch, Theodor W.

    1994-08-01

    The simple hydrogen atom permits unique confrontations between spectroscopic experiment and fundamental theory. The experimental resolution and measurement accuracy continue to improve exponentially. Recent advances include a new measurement of the Lamb shift of the 1S ground state which provides now the most stringent test of QED for an atom and reveals unexpectedly large two-loop binding corrections. The H-D isotope shift of the extremely narrow 1S-2S two-photon resonance is yielding a new value for the structure radius of the deuteron, in agreement with nuclear theory. The Rydberg constant as determined within 3 parts in 1011 by two independent groups has become the most accurately known of any fundamental constant. Advances in the art of absolute optical frequency measurements will permit still more precise experiments in the near future.

  17. In Vivo EPR Resolution Enhancement Using Techniques Known from Quantum Computing Spin Technology.

    PubMed

    Rahimi, Robabeh; Halpern, Howard J; Takui, Takeji

    2017-01-01

    A crucial issue with in vivo biological/medical EPR is its low signal-to-noise ratio, which gives rise to low spectroscopic resolution. We propose quantum hyperpolarization techniques based on 'Heat Bath Algorithmic Cooling', suggesting possible approaches for improving the resolution in magnetic resonance spectroscopy and imaging.

  18. Development of quality control and instrumentation performance metrics for diffuse optical spectroscopic imaging instruments in the multi-center clinical environment

    NASA Astrophysics Data System (ADS)

    Keene, Samuel T.; Cerussi, Albert E.; Warren, Robert V.; Hill, Brian; Roblyer, Darren; Leproux, Anaïs; Durkin, Amanda F.; O'Sullivan, Thomas D.; Haghany, Hosain; Mantulin, William W.; Tromberg, Bruce J.

    2013-03-01

    Instrument equivalence and quality control are critical elements of multi-center clinical trials. We currently have five identical Diffuse Optical Spectroscopic Imaging (DOSI) instruments enrolled in the American College of Radiology Imaging Network (ACRIN, #6691) trial located at five academic clinical research sites in the US. The goal of the study is to predict the response of breast tumors to neoadjuvant chemotherapy in 60 patients. In order to reliably compare DOSI measurements across different instruments, operators and sites, we must be confident that the data quality is comparable. We require objective and reliable methods for identifying, correcting, and rejecting low quality data. To achieve this goal, we developed and tested an automated quality control algorithm that rejects data points below the instrument noise floor, improves tissue optical property recovery, and outputs a detailed data quality report. Using a new protocol for obtaining dark-noise data, we applied the algorithm to ACRIN patient data and successfully improved the quality of recovered physiological data in some cases.

  19. Unbiased clustering estimation in the presence of missing observations

    NASA Astrophysics Data System (ADS)

    Bianchi, Davide; Percival, Will J.

    2017-11-01

    In order to be efficient, spectroscopic galaxy redshift surveys do not obtain redshifts for all galaxies in the population targeted. The missing galaxies are often clustered, commonly leading to a lower proportion of successful observations in dense regions. One example is the close-pair issue for SDSS spectroscopic galaxy surveys, which have a deficit of pairs of observed galaxies with angular separation closer than the hardware limit on placing neighbouring fibres. Spatially clustered missing observations will exist in the next generations of surveys. Various schemes have previously been suggested to mitigate these effects, but none works for all situations. We argue that the solution is to link the missing galaxies to those observed with statistically equivalent clustering properties, and that the best way to do this is to rerun the targeting algorithm, varying the angular position of the observations. Provided that every pair has a non-zero probability of being observed in one realization of the algorithm, then a pair-upweighting scheme linking targets to successful observations, can correct these issues. We present such a scheme, and demonstrate its validity using realizations of an idealized simple survey strategy.
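
    The proposed scheme reduces to estimating, for every pair of targets, the probability that both are observed across realizations of the targeting algorithm, and weighting observed pairs by its inverse. A schematic Monte Carlo sketch, where run_targeting is a hypothetical stand-in for a survey's fibre-assignment code:

    ```python
    import numpy as np

    def pair_weights(targets, run_targeting, n_realizations=1000, seed=0):
        """Pair-upweighting: w_ij = 1 / P(both i and j observed), with the
        probability estimated by rerunning the (stochastic) targeting
        algorithm many times with varied angular positions."""
        rng = np.random.default_rng(seed)
        n = len(targets)
        counts = np.zeros((n, n))
        for _ in range(n_realizations):
            obs = run_targeting(targets, rng)  # boolean mask, shape (n,)
            counts += np.outer(obs.astype(float), obs)
        p_pair = counts / n_realizations
        # The scheme is only unbiased if every pair has p_pair > 0.
        return np.where(p_pair > 0, 1.0 / np.maximum(p_pair, 1e-12), 0.0)
    ```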

  20. Spectrometer capillary vessel and method of making same

    DOEpatents

    Linehan, John C.; Yonker, Clement R.; Zemanian, Thomas S.; Franz, James A.

    1995-01-01

    The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube.

  1. MHD Turbulence, div B = 0 and Lattice Boltzmann Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Nate; Keating, Brian; Vahala, George; Vahala, Linda

    2006-10-01

    The question of div B = 0 in MHD simulations is a crucial issue. Here we consider lattice Boltzmann simulations for MHD (LB-MHD). One introduces a scalar distribution function for the velocity field and a vector distribution function for the magnetic field. This asymmetry is due to the different symmetries in the tensors arising in the time evolution of these fields. The simple algorithm of streaming and local collisional relaxation is ideally parallelized and vectorized -- leading to the best sustained performance/PE of any code run on the Earth Simulator. By reformulating the BGK collision term, a simple implicit algorithm can be immediately transformed into an explicit algorithm that permits simulations at quite low viscosity and resistivity. However, div B = 0 is not an imposed constraint. Currently we are examining new formulations of LB-MHD that impose the div B constraint -- either through an entropic-like formulation or by introducing forcing terms into the momentum equations and permitting simpler forms of relaxation distributions.

  2. Measurement of two-dimensional thickness of micro-patterned thin film based on image restoration in a spectroscopic imaging reflectometer.

    PubMed

    Kim, Min-Gab; Kim, Jin-Yong

    2018-05-01

    In this paper, we introduce a method to overcome the limitation of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of two-dimensional thin film thickness, image restoration based on an iterative deconvolution algorithm was applied, prior to the analysis of spectral reflectance profiles from each pixel of the multispectral images, to compensate for image degradation caused by blurring.
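
    The abstract does not name the iterative deconvolution algorithm, so the sketch below uses Richardson-Lucy as a representative choice for restoring one spectral image plane before the per-pixel reflectance analysis:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
        """Iterative deconvolution of a single spectral image plane.
        Richardson-Lucy is used here as a stand-in; the paper only states
        that an iterative deconvolution algorithm was employed."""
        estimate = np.full(observed.shape, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + eps)  # data / current prediction
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```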

  3. Maximum life spur gear design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.

    1991-01-01

    Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.

  4. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
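
    The gamma weighting is what makes the domain averages tractable: for the direct beam, averaging exp(-τ/μ0) over a gamma distribution of optical depth is just the gamma moment-generating function. A sketch verifying that closed form numerically; the full algorithm applies the weighting to the complete two-stream equations, not only to direct transmission.

    ```python
    import numpy as np

    def gamma_weighted_transmittance(tau_mean, k, mu0=1.0, n=200000):
        """Average exp(-tau/mu0) over Gamma(shape=k, scale=tau_mean/k).
        Closed form via the gamma MGF: (1 + theta/mu0)**(-k)."""
        theta = tau_mean / k
        rng = np.random.default_rng(1)
        tau = rng.gamma(shape=k, scale=theta, size=n)
        numeric = np.exp(-tau / mu0).mean()      # Monte Carlo average
        analytic = (1.0 + theta / mu0) ** (-k)   # exact gamma-weighted result
        return numeric, analytic

    print(gamma_weighted_transmittance(tau_mean=5.0, k=2.0, mu0=0.8))
    ```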

  5. Ex vivo brain tumor analysis using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lenz, Marcel; Krug, Robin; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.

    2016-03-01

    A big challenge during neurosurgeries is to distinguish between healthy tissue and cancerous tissue, but currently a suitable non-invasive real time imaging modality is not available. Optical Coherence Tomography (OCT) is a potential technique for such a modality. OCT has a penetration depth of 1-2 mm and a resolution of 1-15 μm which is sufficient to illustrate structural differences between healthy tissue and brain tumor. Therefore, we investigated gray and white matter of healthy central nervous system and meningioma samples with a Spectral Domain OCT System (Thorlabs Callisto). Additional OCT images were generated after paraffin embedding and after the samples were cut into 10 μm thin slices for histological investigation with a bright field microscope. All samples were stained with Hematoxylin and Eosin. In all cases B-scans and 3D images were made. Furthermore, a camera image of the investigated area was made by the built-in video camera of our OCT system. For orientation, the backsides of all samples were marked with blue ink. The structural differences between healthy tissue and meningioma samples were most pronounced directly after removal. After paraffin embedding these differences diminished. A correlation between OCT en face images and microscopy images can be seen. In order to increase contrast, post processing algorithms were applied. Hence we employed Spectroscopic OCT, pattern recognition algorithms and machine learning algorithms such as k-means Clustering and Principal Component Analysis.

  6. A traveling-salesman-based approach to aircraft scheduling in the terminal area

    NASA Technical Reports Server (NTRS)

    Luenberger, Robert A.

    1988-01-01

    An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.

  7. Spectroscopic imaging of biomaterials and biological systems with FTIR microscopy or with quantum cascade lasers.

    PubMed

    Kimber, James A; Kazarian, Sergei G

    2017-10-01

    Spectroscopic imaging of biomaterials and biological systems has received increased interest within the last decade because of its potential to aid in the detection of disease using biomaterials/biopsy samples and to probe the states of live cells in a label-free manner. The factors behind this increased attention include the availability of improved infrared microscopes and systems that do not require the use of a synchrotron as a light source, as well as the decreasing costs of these systems. This article highlights the current technical challenges and future directions of mid-infrared spectroscopic imaging within this field. Specifically, these are improvements in spatial resolution and spectral quality through the use of novel added lenses and computational algorithms, as well as quantum cascade laser imaging systems, which offer advantages over traditional Fourier transform infrared systems with respect to the speed of acquisition and field of view. Overcoming these challenges will push forward spectroscopic imaging as a viable tool for disease diagnostics and medical research. Graphical abstract: Absorbance images of a biopsy obtained using an FTIR imaging microscope with and without an added lens, and also using a QCL microscope with a high-NA objective.

  8. The Maunakea Spectroscopic Explorer: Design and Project Status

    NASA Astrophysics Data System (ADS)

    Murowinski, Rick

    2015-08-01

    The Maunakea Spectroscopic Explorer (MSE) will be a 10-m class telescope feeding a dedicated massively-multiplexed multi-object spectrometer. The project formally kicked off in March 2014, with a Project Office hosted at the Canada-France-Hawaii Telescope's (CFHT's) Waimea office facility. The MSE observatory will ultimately be realized by means of an upgrade to the CFHT telescope and partnership, resulting in a new observatory with forefront transformational capability and serving a new international partnership. This new observatory will be housed within the facade of the current CFHT, using the same summit site that CFHT now occupies. We present a description, and the status, of the project. We show the level-one design choices that have been made and those under consideration. We also show our progress in the permitting process as the first major observatory that will re-use an existing Maunakea telescope site.

  9. The Kinematics of the Permitted C II λ6578 Line in a Large Sample of Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Richer, Michael G.; Suárez, Genaro; López, José Alberto; García Díaz, María Teresa

    2017-03-01

    We present spectroscopic observations of the C II λ6578 permitted line for 83 lines of sight in 76 planetary nebulae at high spectral resolution, most of them obtained with the Manchester Echelle Spectrograph on the 2.1 m telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir. We study the kinematics of the C II λ6578 permitted line with respect to other permitted and collisionally excited lines. Statistically, we find that the kinematics of the C II λ6578 line are not those expected if this line arises from the recombination of C2+ ions or the fluorescence of C+ ions in ionization equilibrium in a chemically homogeneous nebular plasma, but instead its kinematics are those appropriate for a volume more internal than expected. The planetary nebulae in this sample have well-defined morphology and are restricted to a limited range in Hα line widths (no large values) compared to their counterparts in the Milky Way bulge; both these features could be interpreted as the result of young nebular shells, an inference that is also supported by nebular modeling. Concerning the long-standing discrepancy between chemical abundances inferred from permitted and collisionally excited emission lines in photoionized nebulae, our results imply that multiple plasma components occur commonly in planetary nebulae.

  10. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing the required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid-body controls module was modified to include solar pressure effects. The new model generator modules and the appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam, and attitude control requirements.

  11. Continuous statistical modelling for rapid detection of adulteration of extra virgin olive oil using mid infrared and Raman spectroscopic data.

    PubMed

    Georgouli, Konstantia; Martinez Del Rincon, Jesus; Koidis, Anastasios

    2017-02-15

    The main objective of this work was to develop a novel dimensionality reduction technique as a part of an integrated pattern recognition solution capable of identifying adulterants such as hazelnut oil in extra virgin olive oil at low percentages based on spectroscopic chemical fingerprints. A novel Continuous Locality Preserving Projections (CLPP) technique is proposed which allows the modelling of the continuous nature of the produced in-house admixtures as data series instead of discrete points. The maintenance of the continuous structure of the data manifold enables the better visualisation of this examined classification problem and facilitates the more accurate utilisation of the manifold for detecting the adulterants. The performance of the proposed technique is validated with two different spectroscopic techniques (Raman and Fourier transform infrared, FT-IR). In all cases studied, CLPP accompanied by k-Nearest Neighbors (kNN) algorithm was found to outperform any other state-of-the-art pattern recognition techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The Time-domain Spectroscopic Survey: Target Selection for Repeat Spectroscopy

    NASA Astrophysics Data System (ADS)

    MacLeod, Chelsea L.; Green, Paul J.; Anderson, Scott F.; Eracleous, Michael; Ruan, John J.; Runnoe, Jessie; Nielsen Brandt, William; Badenes, Carles; Greene, Jenny; Morganson, Eric; Schmidt, Sarah J.; Schwope, Axel; Shen, Yue; Amaro, Rachael; Lebleu, Amy; Filiz Ak, Nurten; Grier, Catherine J.; Hoover, Daniel; McGraw, Sean M.; Dawson, Kyle; Hall, Patrick B.; Hawley, Suzanne L.; Mariappan, Vivek; Myers, Adam D.; Pâris, Isabelle; Schneider, Donald P.; Stassun, Keivan G.; Bershady, Matthew A.; Blanton, Michael R.; Seo, Hee-Jong; Tinker, Jeremy; Fernández-Trincado, J. G.; Chambers, Kenneth; Kaiser, Nick; Kudritzki, R.-P.; Magnier, Eugene; Metcalfe, Nigel; Waters, Chris Z.

    2018-01-01

    As astronomers increasingly exploit the information available in the time domain, spectroscopic variability in particular opens broad new channels of investigation. Here we describe the selection algorithms for all targets intended for repeat spectroscopy in the Time Domain Spectroscopic Survey (TDSS), part of the extended Baryon Oscillation Spectroscopic Survey within the Sloan Digital Sky Survey (SDSS)-IV. Also discussed are the scientific rationale and technical constraints leading to these target selections. The TDSS includes a large “repeat quasar spectroscopy” (RQS) program delivering ∼13,000 repeat spectra of confirmed SDSS quasars, and several smaller “few-epoch spectroscopy” (FES) programs targeting specific classes of quasars as well as stars. The RQS program aims to provide a large and diverse quasar data set for studying variations in quasar spectra on timescales of years, a comparison sample for the FES quasar programs, and an opportunity for discovering rare, serendipitous events. The FES programs cover a wide variety of phenomena in both quasars and stars. Quasar FES programs target broad absorption line quasars, high signal-to-noise ratio normal broad line quasars, quasars with double-peaked or very asymmetric broad emission line profiles, binary supermassive black hole candidates, and the most photometrically variable quasars. Strongly variable stars are also targeted for repeat spectroscopy, encompassing many types of eclipsing binary systems, and classical pulsators like RR Lyrae. Other stellar FES programs allow spectroscopic variability studies of active ultracool dwarf stars, dwarf carbon stars, and white dwarf/M dwarf spectroscopic binaries. We present example TDSS spectra and describe anticipated sample sizes and results.

  13. FT-mid-IR spectroscopic investigation of fiber maturity and crystallinity at single boll level and a comparison with XRD approach

    USDA-ARS?s Scientific Manuscript database

    In a previous study, we reported the development of simple algorithms for determining fiber maturity and crystallinity from Fourier transform (FT) mid-infrared (IR) measurement. Due to its micro-sampling feature, we were able to assess the fiber maturity and crystallinity at different portions o...

  14. Science of Land Target Spectral Signatures

    DTIC Science & Technology

    2013-04-03

    F. Meriaudeau, T. Downey, A. Wig, A. Passian, M. Buncick, T.L. Ferrell, "Fiber optic sensor based on gold island plasmon resonance," Sensors and... Keywords: processing, detection algorithms, sensor fusion, spectral signature modeling. Dr. J. Michael Cathcart, Georgia Tech Research Corporation, Office of... target detection and sensor fusion. The phenomenology research continued to focus on spectroscopic soil measurements, optical property analyses, field...

  15. Applicability of Neural Networks to Etalon Fringe Filtering in Laser Spectrometers

    NASA Technical Reports Server (NTRS)

    Nicely, J. M.; Hanisco, T. F.; Riris, H.

    2018-01-01

    We present a neural network algorithm for spectroscopic retrievals of concentrations of trace gases. Using synthetic data we demonstrate that a neural network is well suited for filtering etalon fringes and provides superior performance to conventional least squares minimization techniques. This novel method can improve the accuracy of atmospheric retrievals and minimize biases.

  16. Applicability of neural networks to etalon fringe filtering in laser spectrometers

    NASA Astrophysics Data System (ADS)

    Nicely, J. M.; Hanisco, T. F.; Riris, H.

    2018-05-01

    We present a neural network algorithm for spectroscopic retrievals of concentrations of trace gases. Using synthetic data we demonstrate that a neural network is well suited for filtering etalon fringes and provides superior performance to conventional least squares minimization techniques. This novel method can improve the accuracy of atmospheric retrievals and minimize biases.

  17. Spectrometer capillary vessel and method of making same

    DOEpatents

    Linehan, J.C.; Yonker, C.R.; Zemanian, T.S.; Franz, J.A.

    1995-11-21

    The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube. 13 figs.

  18. Assessment of amsacrine binding with DNA using UV-visible, circular dichroism and Raman spectroscopic techniques.

    PubMed

    Jangir, Deepak Kumar; Dey, Sanjay Kumar; Kundu, Suman; Mehrotra, Ranjana

    2012-09-03

    Proper understanding of the mechanism by which drugs bind to their cellular targets is a fundamental requirement for developing new drug therapy regimens. Amsacrine is a rationally designed anticancer drug used to treat leukemia and lymphoma. Binding with cellular DNA is a crucial step in its mechanism of cytotoxicity. Despite numerous studies, the DNA binding properties of amsacrine are poorly understood. Its reversible binding with DNA does not permit X-ray crystallographic or NMR spectroscopic evaluation of amsacrine-DNA complexes. In the present work, the interaction of amsacrine with calf thymus DNA is investigated under physiological conditions. UV-visible, FT-Raman and circular dichroism spectroscopic techniques were employed to determine the binding mode, binding constant, sequence specificity and conformational effects of amsacrine binding to native calf thymus DNA. Our results illustrate that amsacrine interacts with DNA largely through intercalation between base pairs. The binding constant of the amsacrine-DNA complex was found to be K = (1.2 ± 0.1) × 10⁴ M⁻¹, indicative of moderate binding of amsacrine to DNA. Raman spectroscopic results suggest that amsacrine preferentially intercalates between AT base pairs of DNA. Minor groove binding is also observed in amsacrine-DNA complexes. These results are in good agreement with in silico investigations of amsacrine binding to DNA and thus provide detailed insight into the DNA binding properties of amsacrine, which may ultimately underlie its cytotoxic efficacy. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) of the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be communicated covertly within quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security because it exploits the Shannon entropy of the image.
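
    The quantum-image embedding itself is specific to the paper, but the classical principle it builds on is ordinary LSB steganography. A minimal sketch of LSB embedding and recovery, with classical NumPy arrays standing in for quantum pixel images (an assumption for illustration):

    ```python
    import numpy as np

    def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Hide a bit stream in the least significant bits of the cover pixels."""
        flat = cover.astype(np.uint8).flatten()
        if bits.size > flat.size:
            raise ValueError("message too long for the cover image")
        # Clear each carrier pixel's LSB, then write the message bit into it.
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
        return flat.reshape(cover.shape)

    def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
        """Recover the first n_bits hidden by embed_lsb."""
        return stego.flatten()[:n_bits] & 1

    # Round trip on a random 8-bit "image":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    msg = np.random.randint(0, 2, 100, dtype=np.uint8)
    assert np.array_equal(extract_lsb(embed_lsb(img, msg), 100), msg)
    ```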

  20. Optic for industrial endoscope/borescope with narrow field of view and low distortion

    DOEpatents

    Stone, Gary F.; Trebes, James E.

    2005-08-16

    An optic for the imaging optics on the distal end of a flexible fiberoptic endoscope or rigid borescope inspection tool. The image coverage is over a narrow (<20 degrees) field of view with very low optical distortion (<5% pincushion or barrel distortion), compared to the typical <20% distortion. The optic will permit non-contact surface roughness measurements using optical techniques. This optic will permit simultaneous collection of selected image plane data, which can then be optically processed. The image analysis will yield non-contact surface topology data for inspection where access to the surface does not permit verification of surface topology with a mechanical stylus profilometer. The optic allows a very broad spectral band or range of optical inspection. It is capable of spectroscopic imaging and fluorescence-induced imaging when a scanning illumination source is used. The total viewing angle for this optic is a 10 degree full field of view, compared to the 40-70 degree full-angle field of view of conventional gradient index (GRIN) lens systems.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richer, Michael G.; Suárez, Genaro; López, José Alberto

    We present spectroscopic observations of the C ii λ 6578 permitted line for 83 lines of sight in 76 planetary nebulae at high spectral resolution, most of them obtained with the Manchester Echelle Spectrograph on the 2.1 m telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir. We study the kinematics of the C ii λ 6578 permitted line with respect to other permitted and collisionally excited lines. Statistically, we find that the kinematics of the C ii λ 6578 line are not those expected if this line arises from the recombination of C²⁺ ions or the fluorescence of C⁺ ions in ionization equilibrium in a chemically homogeneous nebular plasma, but instead its kinematics are those appropriate for a volume more internal than expected. The planetary nebulae in this sample have well-defined morphology and are restricted to a limited range in H α line widths (no large values) compared to their counterparts in the Milky Way bulge; both these features could be interpreted as the result of young nebular shells, an inference that is also supported by nebular modeling. Concerning the long-standing discrepancy between chemical abundances inferred from permitted and collisionally excited emission lines in photoionized nebulae, our results imply that multiple plasma components occur commonly in planetary nebulae.

  2. Coding and decoding for code division multiple user communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  3. Supervised learning of probability distributions by neural networks

    NASA Technical Reports Server (NTRS)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
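
    To make the modification concrete, consider a single sigmoid output unit. Redefining the output as a probability and descending the negative log-likelihood (cross-entropy) rather than the squared error changes the weight gradient as sketched below (a minimal illustration of the principle, not the paper's full formulation):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Single sigmoid unit y = sigmoid(w . x), with target t read as a probability.
    def grad_squared_error(w, x, t):
        # d/dw of 0.5*(y - t)^2: the sigmoid slope y*(1 - y) appears, so
        # learning stalls when the unit saturates near 0 or 1.
        y = sigmoid(w @ x)
        return (y - t) * y * (1 - y) * x

    def grad_log_likelihood(w, x, t):
        # d/dw of -[t*log(y) + (1-t)*log(1-y)]: the sigmoid slope cancels,
        # leaving the well-behaved gradient (y - t) * x.
        y = sigmoid(w @ x)
        return (y - t) * x
    ```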

  4. Automatic classification of fluorescence and optical diffusion spectroscopy data in neuro-oncology

    NASA Astrophysics Data System (ADS)

    Savelieva, T. A.; Loshchenov, V. B.; Goryajnov, S. A.; Potapov, A. A.

    2018-04-01

    Spectroscopic analysis of biological tissue is complicated by the overlap of the absorption spectra of biological molecules, by multiple scattering, and by the in vivo measurement geometry; these challenges motivate the present work. In neuro-oncology the problem of tumor boundary delineation is especially acute and requires the development of new methods of intraoperative diagnosis. Methods of optical spectroscopy allow various diagnostically significant parameters to be detected non-invasively. 5-ALA-induced protoporphyrin IX is frequently used as a fluorescent tumor marker in neuro-oncology. At the same time, the concentration and oxygenation level of haemoglobin and significant changes of light scattering in tumor tissues have high diagnostic value. This paper presents an original method for the simultaneous registration of backward diffuse reflectance and fluorescence spectra, which allows all of the parameters listed above to be determined simultaneously. Clinical studies involving 47 patients with intracranial glial tumors of Grades II-IV were carried out at the N.N. Burdenko National Medical Research Center of Neurosurgery. Spectra were recorded with the LESA-01-BIOSPEC spectroscopic system and a specially developed w-shaped diagnostic fiber-optic probe. An original algorithm for combined spectroscopic signal processing was developed. The resulting software and hardware increased the sensitivity of intraoperative demarcation of intracranial tumors from 78% to 96% and the specificity from 60% to 82%, compared with the methods currently used in neurosurgical practice. Analysis of different automatic classification techniques shows that the most appropriate in this case is the k-nearest-neighbors algorithm with a cubic metric.
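
    A minimal sketch of the final classification stage, assuming the "cubic metric" is the Minkowski distance with p = 3 (an interpretation, not stated explicitly in the abstract) and hypothetical feature vectors derived from the combined reflectance/fluorescence spectra:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((100, 20))      # placeholder spectral feature vectors
    y_train = rng.integers(0, 2, 100)    # 1 = tumor, 0 = normal (hypothetical labels)

    # k nearest neighbors with the Minkowski distance, p = 3 ("cubic" metric).
    clf = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=3)
    clf.fit(X_train, y_train)
    print(clf.predict(rng.random((3, 20))))
    ```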

  5. Characterization of lipid-rich plaques using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong Soo; Song, Joon Woo; Jang, Sun-Joo; Lee, Jae Joong; Oh, Wang-Yuhl; Kim, Jin Won; Yoo, Hongki

    2016-07-01

    Intravascular optical coherence tomography (IV-OCT) is a high-resolution imaging method used to visualize the internal structures of walls of coronary arteries in vivo. However, accurate characterization of atherosclerotic plaques with gray-scale IV-OCT images is often limited by various intrinsic artifacts. In this study, we present an algorithm for characterizing lipid-rich plaques with a spectroscopic OCT technique based on a Gaussian center of mass (GCOM) metric. The GCOM metric, which reflects the absorbance properties of lipids, was validated using a lipid phantom. In addition, the proposed characterization method was successfully demonstrated in vivo using an atherosclerotic rabbit model and was found to have a sensitivity and specificity of 94.3% and 76.7% for lipid classification, respectively.

  6. Process control using fiber optics and Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kemsley, E. K.; Wilson, Reginald H.

    1992-03-01

    A process control system has been constructed using optical fibers interfaced to a Fourier transform infrared (FT-IR) spectrometer, to achieve remote spectroscopic analysis of food samples during processing. The multichannel interface accommodates six fibers, allowing the sequential observation of up to six samples. Novel fiber-optic sampling cells have been constructed, including transmission and attenuated total reflectance (ATR) designs. Different fiber types have been evaluated; in particular, plastic clad silica (PCS) and zirconium fluoride fibers. Processes investigated have included the dilution of fruit juice concentrate, and the addition of alcohol to fruit syrup. Suitable algorithms have been written which use the results of spectroscopic measurements to control and monitor the course of each process, by actuating devices such as valves and switches.
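
    As a concrete, purely hypothetical illustration of this style of control logic: integrate the absorbance over a marker band of the spectrum and derive an actuator command from it. The band limits, setpoint, and command strings below are assumptions, not the authors' values:

    ```python
    import numpy as np

    def control_step(wavenumbers, absorbance, band=(1040.0, 1060.0), setpoint=0.8):
        """One monitoring/control cycle: average the absorbance over a marker
        band (e.g., a sugar band while diluting juice concentrate) and return
        a command for a dosing valve."""
        mask = (wavenumbers >= band[0]) & (wavenumbers <= band[1])
        signal = absorbance[mask].mean()
        return "OPEN_VALVE" if signal < setpoint else "CLOSE_VALVE"
    ```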

  7. MARZ: Manual and automatic redshifting software

    NASA Astrophysics Data System (ADS)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, Javascript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines that conform to the current FITS file standard. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can easily be redshifted manually by cycling through the automatic results, comparing templates manually, or marking spectral features.
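
    The core idea behind this kind of cross-correlation matching is that a redshift is a uniform shift in log-wavelength, so template matching reduces to a correlation peak search. A minimal sketch under that assumption (the wavelength range is illustrative; the real pipeline adds continuum fitting, apodization, and quality ranking):

    ```python
    import numpy as np

    def redshift_by_xcorr(wave_obs, flux_obs, wave_tmpl, flux_tmpl, dlog=1e-4):
        """Cross-correlation redshift: a redshift is a constant shift in
        log(wavelength), so the lag of the correlation peak gives (1 + z)."""
        grid = np.arange(np.log(3500.0), np.log(9000.0), dlog)  # assumed optical range (Angstroms)
        f_obs = np.interp(grid, np.log(wave_obs), flux_obs)
        f_tmp = np.interp(grid, np.log(wave_tmpl), flux_tmpl)
        f_obs = f_obs - f_obs.mean()   # crude continuum removal: features drive the match
        f_tmp = f_tmp - f_tmp.mean()
        xcorr = np.correlate(f_obs, f_tmp, mode="full")
        lag = int(xcorr.argmax()) - (grid.size - 1)
        return np.exp(lag * dlog) - 1.0   # z from the log-wavelength shift
    ```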

  8. Reconstruction of explicit structural properties at the nanoscale via spectroscopic microscopy

    NASA Astrophysics Data System (ADS)

    Cherkezyan, Lusik; Zhang, Di; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2016-02-01

    The spectrum registered by a reflected-light bright-field spectroscopic microscope (SM) can quantify the microscopically indiscernible, deeply subdiffractional length scales within samples such as biological cells and tissues. Nevertheless, quantification of biological specimens via any optical measure most often reveals ambiguous information about the specific structural properties within the studied samples. Thus, optical quantification remains nonintuitive to users across the diverse fields in which the technique is applied. In this work, we demonstrate that the SM signal can be analyzed to reconstruct explicit physical measures of internal structure within label-free, weakly scattering samples: the characteristic length scale and the amplitude of spatial refractive-index (RI) fluctuations. We present and validate the reconstruction algorithm via finite-difference time-domain solutions of Maxwell's equations for an example of exponential spatial correlation of RI. We apply the validated algorithm to experimentally measure structural properties within isolated cells from two genetic variants of the HT29 colon cancer cell line as well as within a prostate tissue biopsy section. The presented methodology can lead to the development of novel biophotonics techniques that create two-dimensional maps of explicit structural properties within biomaterials: the characteristic size of macromolecular complexes and the variance of local mass density.

  9. Evaluation of Raman spectroscopy for the trace analysis of biomolecules for Mars exobiology

    NASA Astrophysics Data System (ADS)

    Jehlicka, Jan; Edwards, Howell G. M.; Vitek, Petr; Culka, Adam

    2010-05-01

    Raman spectroscopy is an ideal technique for the identification of biomolecules and minerals for astrobiological applications. Raman spectroscopic instrumentation has been shown to be potentially valuable for the in-situ detection of spectral biomarkers originating from rock samples containing remnants of terrestrial endolithic colonisation. Within the future payloads designed by ESA and NASA for several missions focussing on life detection on Mars, Raman spectroscopy has been proposed as an important non-destructive analytical tool for the in-situ identification of organic compounds relevant to life detection on planetary and moon surfaces or near sub-surfaces. Portable Raman systems equipped with 785 nm lasers permit the detection of pure organic minerals, amino acids, carboxylic acids, as well as NH-containing compounds outdoors at -20°C and at an altitude of 3300 m. A potential limitation for the use of Raman spectroscopic techniques is the detection of very low amounts of biomolecules in rock matrices. The detection of beta-carotene and amino acids in admixture with crystalline powders of sulphates and halite has been achieved in the field using a portable Raman system, with detection limits below 1%. Laboratory systems permit the detection of these biomolecules at even lower concentrations, at the sub-ppm level of the order of 0.1 to 1 mg kg⁻¹. The comparative evaluation of laboratory versus field measurements permits the identification of critical issues for future field applications and directs attention to the improvements needed in the instrumentation. A comparison between systems using different laser excitation wavelengths shows excellent results for 785 nm laser excitation. The results of this study will inform the acquisition parameters necessary for the deployment of robotic miniaturised Raman spectroscopic instrumentation intended for the detection of spectral signatures of extant or relict life on Mars.

  10. Exploring Raman spectroscopy for the evaluation of glaucomatous retinal changes

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Grozdanic, Sinisa D.; Harper, Matthew M.; Hamouche, Nicolas; Kecova, Helga; Lazic, Tatjana; Yu, Chenxu

    2011-10-01

    Glaucoma is a chronic neurodegenerative disease characterized by apoptosis of retinal ganglion cells and subsequent loss of visual function. Early detection of glaucoma is critical for the prevention of permanent structural damage and irreversible vision loss. Raman spectroscopy is a technique that provides rapid biochemical characterization of tissues in a nondestructive and noninvasive fashion. In this study, we explored the potential of using Raman spectroscopy for detection of glaucomatous changes in vitro. Raman spectroscopic imaging was conducted on retinal tissues of dogs with hereditary glaucoma and healthy control dogs. The Raman spectra were subjected to multivariate discriminant analysis with a support vector machine algorithm, and a classification model was developed to differentiate disease tissues versus healthy tissues. Spectroscopic analysis of 105 retinal ganglion cells (RGCs) from glaucomatous dogs and 267 RGCs from healthy dogs revealed spectroscopic markers that differentiated glaucomatous specimens from healthy controls. Furthermore, the multivariate discriminant model differentiated healthy samples and glaucomatous samples with good accuracy [healthy 89.5% and glaucomatous 97.6% for the same breed (Basset Hounds); and healthy 85.0% and glaucomatous 85.5% for different breeds (Beagles versus Basset Hounds)]. Raman spectroscopic screening can be used for in vitro detection of glaucomatous changes in retinal tissue with a high specificity.

  11. Exploring Raman spectroscopy for the evaluation of glaucomatous retinal changes.

    PubMed

    Wang, Qi; Grozdanic, Sinisa D; Harper, Matthew M; Hamouche, Nicolas; Kecova, Helga; Lazic, Tatjana; Yu, Chenxu

    2011-10-01

    Glaucoma is a chronic neurodegenerative disease characterized by apoptosis of retinal ganglion cells and subsequent loss of visual function. Early detection of glaucoma is critical for the prevention of permanent structural damage and irreversible vision loss. Raman spectroscopy is a technique that provides rapid biochemical characterization of tissues in a nondestructive and noninvasive fashion. In this study, we explored the potential of using Raman spectroscopy for detection of glaucomatous changes in vitro. Raman spectroscopic imaging was conducted on retinal tissues of dogs with hereditary glaucoma and healthy control dogs. The Raman spectra were subjected to multivariate discriminant analysis with a support vector machine algorithm, and a classification model was developed to differentiate disease tissues versus healthy tissues. Spectroscopic analysis of 105 retinal ganglion cells (RGCs) from glaucomatous dogs and 267 RGCs from healthy dogs revealed spectroscopic markers that differentiated glaucomatous specimens from healthy controls. Furthermore, the multivariate discriminant model differentiated healthy samples and glaucomatous samples with good accuracy [healthy 89.5% and glaucomatous 97.6% for the same breed (Basset Hounds); and healthy 85.0% and glaucomatous 85.5% for different breeds (Beagles versus Basset Hounds)]. Raman spectroscopic screening can be used for in vitro detection of glaucomatous changes in retinal tissue with a high specificity.

  12. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceanographic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
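
    For orientation, the exact Gauss-Markov (optimal interpolation) estimate that the multiscale algorithm approximates can be written in a few lines. The Gaussian covariance below is an assumed form for illustration; the multiscale machinery that makes this tractable for tens of thousands of points is not shown:

    ```python
    import numpy as np

    def objective_map(x_obs, y_obs, x_grid, noise_var=0.1, length=2.0):
        """Exact Gauss-Markov estimate on a 1-D track with unit prior variance."""
        def cov(a, b):
            d = a[:, None] - b[None, :]
            return np.exp(-0.5 * (d / length) ** 2)   # assumed Gaussian covariance
        C_yy = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
        C_gy = cov(x_grid, x_obs)
        estimate = C_gy @ np.linalg.solve(C_yy, y_obs)
        # Posterior error variance at each grid point (diagonal only).
        err_var = 1.0 - np.einsum("ij,ji->i", C_gy, np.linalg.solve(C_yy, C_gy.T))
        return estimate, err_var
    ```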

  13. On the reliable and flexible solution of practical subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
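
    The flavor of the approach can be illustrated with the standard column-pivoted QR available in SciPy; the paper's contribution is a new pivoting strategy that selects among the originally defined regression parameters, which this sketch does not reproduce:

    ```python
    import numpy as np
    from scipy.linalg import qr

    # Subset selection via QR with column pivoting: the permutation ranks the
    # regressors (columns of A) by how much new information each contributes.
    rng = np.random.default_rng(0)
    A = rng.random((50, 8))            # 50 observations, 8 candidate parameters
    Q, R, piv = qr(A, pivoting=True)
    k = 3                              # keep the k most informative regressors
    print("selected columns:", piv[:k])
    ```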

  14. Infrared fiber coupled acousto-optic tunable filter spectrometer

    NASA Technical Reports Server (NTRS)

    Levin, K. H.; Kindler, E.; Ko, T.; Lee, F.; Tran, D. C.; Tapphorn, R. M.

    1990-01-01

    A spectrometer design is introduced which combines an acousto-optic tunable filter (AOTF) and IR-transmitting fluoride-glass fibers. The AOTF crystal is fabricated from TeO2 and permits random access to any wavelength in less than 50 microseconds, and the resulting spectrometer is tested for the remote analysis of gases and hydrocarbons. The AOTF spectrometer, when operated with a high-speed frequency synthesizer and optimized algorithms, permits accurate high-speed spectroscopy in the mid-IR spectral region.

  15. Matrix form for the instrument line shape of Fourier-transform spectrometers yielding a fast integration algorithm to theoretical spectra.

    PubMed

    Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre

    2006-01-20

    The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in a matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log₂ N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.

  16. Case study: in vivo stress diagnostics by spectroscopic determination of the cutaneous carotenoid antioxidant concentration in midwives depending on shift work

    NASA Astrophysics Data System (ADS)

    Maeter, H.; Briese, V.; Gerber, B.; Darvin, M. E.; Lademann, J.; Olbertz, D. M.

    2013-10-01

    Laser spectroscopic methods, for instance resonance Raman spectroscopy and reflectance spectroscopy, permit us for the first time to investigate the antioxidative status in human skin non-invasively by measurement of carotenoid concentration. The individual antioxidant concentration of the human skin is determined by the nutritional habits, on the one hand, and by stressors, such as shift work, on the other. Due to the disturbance of the circadian rhythm and melatonin secretion, shift work is associated with, inter alia, insomnia and gastrointestinal disorders. The study at hand was the first to determine the cutaneous antioxidant concentration of midwives using reflectance spectroscopy and to relate the results to shift work. Seven midwives took part in the study. An LED-based compact scanner system was used for non-invasive measurements of carotenoids in human skin. The measuring principle is based on reflection spectroscopy. The study at hand suggests that the cutaneous antioxidative status may be adversely affected by shift work. Despite numerous international strategies of programmes available which invite people to eat more healthily, there are only a few measures aiming at stress reduction and management. In this field the use of reflectance spectroscopic investigation methods could play an essential role in the future.

  17. THE NASA AMES PAH IR SPECTROSCOPIC DATABASE VERSION 2.00: UPDATED CONTENT, WEB SITE, AND ON(OFF)LINE TOOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boersma, C.; Mattioda, A. L.; Allamandola, L. J.

    A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra, as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities) that permit offline analysis, called the AmesPAHdbIDLSuite. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.

  18. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement in hardware. The proposed NUC algorithm is based on the linear correction scheme, with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the change in pixel response, determined at the actual operating conditions relative to the reference conditions by means of a shutter, to compensate for temporal drift of the pixel offsets. Moreover, it also removes any optics shading effect from the output image. Test results for a microbolometer IRFPA demonstrate the efficiency of the proposed NUC algorithm.
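
    A minimal sketch of this style of correction. The exact update rule below is an illustrative assumption, not the paper's formulation: the per-pixel offset calibrated at reference conditions is shifted by the change in the shutter (uniform-scene) response, and the usual linear correction is then applied:

    ```python
    import numpy as np

    def nuc_correct(raw, gain, offset_ref, shutter_ref, shutter_now):
        """Two-point linear NUC with shutter-based offset drift compensation.
        raw, gain, offset_ref: per-pixel arrays from factory calibration.
        shutter_ref, shutter_now: per-pixel shutter frames at the reference
        and current operating conditions; their difference tracks offset drift
        and also removes optics shading from the output image."""
        offset_drift = shutter_now - shutter_ref
        return gain * (raw - (offset_ref + offset_drift))
    ```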

  19. Pressure algorithm for elliptic flow calculations with the PDF method

    NASA Technical Reports Server (NTRS)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  20. The NOAA-NASA CZCS Reanalysis Effort

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Conkright, Margarita E.; OReilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)

    2001-01-01

    Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  1. A circular median filter approach for resolving directional ambiguities in wind fields retrieved from spaceborne scatterometer data

    NASA Technical Reports Server (NTRS)

    Schultz, Howard

    1990-01-01

    The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
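
    The key primitive is a median that respects the 360° wrap-around of wind direction, applied to directions independently of speed. A minimal sketch of such a circular median (the full algorithm adds cell weighting and iterates the filter over the wind field):

    ```python
    import numpy as np

    def circular_median(angles_deg):
        """Median on the circle: the sample angle minimizing the summed
        shortest angular distances to all other samples."""
        a = np.deg2rad(np.asarray(angles_deg, dtype=float))
        diff = np.abs(a[:, None] - a[None, :])
        diff = np.minimum(diff, 2 * np.pi - diff)   # shortest arc, in [0, pi]
        return np.rad2deg(a[diff.sum(axis=1).argmin()])

    # 3x3 neighborhood of wind directions with one ambiguous (flipped) cell:
    print(circular_median([10, 15, 12, 190, 8, 14, 11, 9, 13]))  # -> 12.0
    ```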

  2. Personalized Medicine and Opioid Analgesic Prescribing for Chronic Pain: Opportunities and Challenges

    PubMed Central

    Bruehl, Stephen; Apkarian, A. Vania; Ballantyne, Jane C.; Berger, Ann; Borsook, David; Chen, Wen G.; Farrar, John T.; Haythornthwaite, Jennifer A.; Horn, Susan D.; Iadarola, Michael J.; Inturrisi, Charles E.; Lao, Lixing; Mackey, Sean; Mao, Jianren; Sawczuk, Andrea; Uhl, George R.; Witter, James; Woolf, Clifford J.; Zubieta, Jon-Kar; Lin, Yu

    2013-01-01

    Use of opioid analgesics for pain management has increased dramatically over the past decade, with corresponding increases in negative sequelae including overdose and death. There is currently no well-validated objective means of accurately identifying patients likely to experience good analgesia with low side effects and abuse risk prior to initiating opioid therapy. This paper discusses the concept of data-based personalized prescribing of opioid analgesics as a means to achieve this goal. Strengths, weaknesses, and potential synergism of traditional randomized placebo-controlled trial (RCT) and practice-based evidence (PBE) methodologies as means to acquire the clinical data necessary to develop validated personalized analgesic prescribing algorithms are overviewed. Several predictive factors that might be incorporated into such algorithms are briefly discussed, including genetic factors, differences in brain structure and function, differences in neurotransmitter pathways, and patient phenotypic variables such as negative affect, sex, and pain sensitivity. Currently available research is insufficient to inform development of quantitative analgesic prescribing algorithms. However, responder subtype analyses made practical by the large numbers of chronic pain patients in proposed collaborative PBE pain registries, in conjunction with follow-up validation RCTs, may eventually permit development of clinically useful analgesic prescribing algorithms. Perspective: Current research is insufficient to base opioid analgesic prescribing on patient characteristics. Collaborative PBE studies in large, diverse pain patient samples in conjunction with follow-up RCTs may permit development of quantitative analgesic prescribing algorithms which could optimize opioid analgesic effectiveness and mitigate risks of opioid-related abuse and mortality. PMID:23374939

  3. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
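
    A compact sketch of the two ingredients named in the abstract, in the general style described rather than the authors' exact formulation: a per-pixel running background model that is not updated where objects are detected (so transiting objects do not bias the estimate), and a noise-adaptive double threshold with one-step hysteresis. Parameter values are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def detect_and_update(frame, bg_mean, bg_var, alpha=0.02, k_low=3.0, k_high=5.0):
        """Pixels beyond k_high*sigma seed detections; k_low*sigma pixels
        survive only if adjacent to a seed (simple hysteresis)."""
        sigma = np.sqrt(bg_var) + 1e-6
        resid = np.abs(frame - bg_mean)
        strong = resid > k_high * sigma
        weak = resid > k_low * sigma
        detection = weak & binary_dilation(strong, iterations=1)
        # Update the background only where nothing was detected.
        upd = ~detection
        bg_mean[upd] += alpha * (frame[upd] - bg_mean[upd])
        bg_var[upd] += alpha * ((frame[upd] - bg_mean[upd]) ** 2 - bg_var[upd])
        return detection, bg_mean, bg_var
    ```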

  4. Multiheterodyne spectroscopy using interband cascade lasers

    NASA Astrophysics Data System (ADS)

    Sterczewski, Lukasz A.; Westberg, Jonas; Patrick, Charles Link; Kim, Chul Soo; Kim, Mijin; Canedy, Chadwick L.; Bewley, William W.; Merritt, Charles D.; Vurgaftman, Igor; Meyer, Jerry R.; Wysocki, Gerard

    2018-01-01

    While midinfrared radiation can be used to identify and quantify numerous chemical species, contemporary broadband midinfrared spectroscopic systems are often hindered by large footprints, moving parts, and high power consumption. In this work, we demonstrate multiheterodyne spectroscopy (MHS) using interband cascade lasers, which combines broadband spectral coverage with high spectral resolution and energy-efficient operation. The lasers generate up to 30 mW of continuous-wave optical power while consuming <0.5 W of electrical power. A computational phase and timing correction algorithm is used to obtain kHz linewidths of the multiheterodyne beat notes and up to 30 dB improvement in signal-to-noise ratio. The versatility of the multiheterodyne technique is demonstrated by performing both rapidly swept absorption and dispersion spectroscopic assessments of low-pressure ethylene (C2H4) acquired by extracting a single beat note from the multiheterodyne signal, as well as broadband MHS of methane (CH4) acquired with all available beat notes with microsecond temporal resolution and an instantaneous optical bandwidth of ˜240 GHz. The technology shows excellent potential for portable and high-resolution solid-state spectroscopic chemical sensors operating in the midinfrared.

  5. Simulation of an enhanced TCAS 2 system in operation

    NASA Technical Reports Server (NTRS)

    Rojas, R. G.; Law, P.; Burnside, W. D.

    1987-01-01

    Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic and Collision Avoidance System (TCAS II). In particular, an algorithm is developed which permits the computer simulation of the tracking of a target airplane by a Boeing 737 which has a TCAS II array mounted on top of its fuselage. This algorithm has four main components: namely, the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these four components is described. Furthermore, the areas where the present algorithm needs to be improved are also mentioned.

  6. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Deutsch, L. J.; Reed, I. S.

    1987-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  7. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, Howard M.; Reed, Irving S.

    1988-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  8. Design of a Mechanical-Tunable Filter Spectrometer for Noninvasive Glucose Measurement

    NASA Astrophysics Data System (ADS)

    Saptari, Vidi; Youcef-Toumi, Kamal

    2004-05-01

    The development of an accurate and reliable noninvasive near-infrared (NIR) glucose sensor hinges on successfully addressing the sensitivity and specificity problems associated with the weak glucose signals and the overlapping NIR spectra. Spectroscopic hardware parameters most relevant to noninvasive blood glucose measurement are discussed, including the optical throughput, integration time, spectral range, and spectral resolution. We propose a unique spectroscopic system using a continuously rotating interference filter, which produces a signal-to-noise ratio of the order of 10^5, estimated to be the minimum required for successful in vivo glucose sensing. Using a classical least-squares algorithm and a spectral range between 2180 and 2312 nm, we extracted clinically relevant glucose concentrations from multicomponent solutions containing bovine serum albumin, triacetin, lactate, and urea.
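
    Classical least squares in this setting models the measured spectrum as a linear mixture of known pure-component spectra. A minimal sketch (matrix names and shapes are illustrative):

    ```python
    import numpy as np

    def cls_concentrations(K, a):
        """Classical least squares: solve a ≈ K @ c for component concentrations c.
        K: (n_wavelengths, n_components) pure-component absorptivity spectra
           (glucose, albumin, triacetin, lactate, urea), measured beforehand.
        a: (n_wavelengths,) absorbance spectrum of the mixture over 2180-2312 nm."""
        c, *_ = np.linalg.lstsq(K, a, rcond=None)
        return c
    ```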

  9. Carbon nuclear magnetic resonance spectroscopic fingerprinting of commercial gasoline: pattern-recognition analyses for screening quality control purposes.

    PubMed

    Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de

    2010-06-30

    In this work, the combination of carbon nuclear magnetic resonance (¹³C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, selected by Hierarchical Cluster Analysis (HCA) over several months at retail gas stations, into previously quality-defined classes. Following the optimized ¹³C NMR-SIMCA algorithm, sensitivity values were obtained in the training set (99.0%), with leave-one-out cross-validation, and in the external prediction set (92.0%). Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices. Copyright 2010 Elsevier B.V. All rights reserved.
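
    A stripped-down sketch of the SIMCA idea: fit one PCA model per quality class and assign a sample to the class whose subspace reconstructs it with the smallest residual. Full SIMCA adds F-tests and score distances, which are omitted here:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    class SimpleSIMCA:
        """Minimal SIMCA: one PCA model per class, classification by residual."""
        def __init__(self, n_components=3):
            self.n = n_components
            self.models = {}

        def fit(self, X, y):
            for cls in np.unique(y):
                self.models[cls] = PCA(self.n).fit(X[y == cls])
            return self

        def predict(self, X):
            classes = list(self.models)
            # Reconstruction residual of each sample under each class model.
            resid = np.stack([
                np.linalg.norm(X - m.inverse_transform(m.transform(X)), axis=1)
                for m in self.models.values()
            ])
            return np.array(classes)[np.argmin(resid, axis=0)]
    ```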

  10. Three-dimensional unstructured grid generation via incremental insertion and local optimization

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Wiltberger, N. Lyn; Gandhi, Amar S.

    1992-01-01

    Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.

  11. Delaunay based algorithm for finding polygonal voids in planar point sets

    NASA Astrophysics Data System (ADS)

    Alonso, R.; Ojeda, J.; Hitschfeld, N.; Hervías, C.; Campusano, L. E.

    2018-01-01

    This paper presents a new algorithm to find under-dense regions, called voids, inside a 2D point set. The algorithm starts from terminal-edges (local longest-edges) in a Delaunay triangulation and builds the largest possible low-density terminal-edge regions around them. A terminal-edge region can represent either an entire void or part of a void (subvoid). Using artificial data sets, the case of voids that are detected as several adjacent subvoids is analyzed, and four subvoid-joining criteria are proposed and evaluated. Since this work is inspired by the search for a more robust, effective and efficient algorithm to find 3D cosmological voids, the evaluation of the joining criteria considers this context. However, the design of the algorithm permits its adaptation to the requirements of any similar application.
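
    The starting primitive is easy to state precisely: a terminal-edge is an edge that is the longest edge of every triangle sharing it. A minimal sketch of finding such edges with SciPy's Delaunay triangulation (the region-growing and subvoid-joining stages of the paper are not shown):

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def terminal_edges(points):
        """Return edges that are the longest edge of every incident triangle."""
        tri = Delaunay(points)
        is_longest = {}   # edge -> one flag per incident triangle
        for simplex in tri.simplices:
            edges = [(simplex[i], simplex[j]) for i, j in ((0, 1), (1, 2), (0, 2))]
            lengths = [np.linalg.norm(points[a] - points[b]) for a, b in edges]
            k_max = int(np.argmax(lengths))
            for k, (a, b) in enumerate(edges):
                is_longest.setdefault((min(a, b), max(a, b)), []).append(k == k_max)
        return [e for e, flags in is_longest.items() if all(flags)]

    pts = np.random.default_rng(1).random((200, 2))
    print(terminal_edges(pts)[:5])
    ```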

  12. Finite-element time-domain algorithms for modeling linear Debye and Lorentz dielectric dispersions at low frequencies.

    PubMed

    Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen

    2003-09-01

    We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
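
    For a single-pole Debye medium, the auxiliary differential equation couples the polarization P to the field E through τ dP/dt + P = ε₀ Δε E. A minimal, unconditionally stable backward-Euler update for this ODE (an illustrative discretization; the paper embeds such updates in an FETD potential solver):

    ```python
    def debye_ade_step(P, E, dt, tau, delta_eps, eps0=8.854e-12):
        """One auxiliary-differential-equation step for a single Debye pole.
        Backward Euler on  tau * dP/dt + P = eps0 * delta_eps * E  gives
        P_new = (tau * P + dt * eps0 * delta_eps * E) / (tau + dt),
        which is stable for any time step dt. P and E are values (or arrays
        of values) at the solver's integration points."""
        return (tau * P + dt * eps0 * delta_eps * E) / (tau + dt)
    ```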

  13. Spectroscopic characterization of low dose rate brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Beach, Stephen M.

    The low dose rate (LDR) brachytherapy seeds employed in permanent radioactive-source implant treatments usually use one of two radionuclides, 125I or 103Pd. The theoretically expected source spectroscopic output from these sources can be obtained via Monte Carlo calculation based upon seed dimensions and materials as well as the bare-source photon emissions for that specific radionuclide. However the discrepancies resulting from inconsistent manufacturing of sources in comparison to each other within model groups and simplified Monte Carlo calculational geometries ultimately result in undesirably large uncertainties in the Monte Carlo calculated values. This dissertation describes experimentally attained spectroscopic outputs of the clinically used brachytherapy sources in air and in liquid water. Such knowledge can then be applied to characterize these sources by a more fundamental and metrologically pure classification, that of energy-based dosimetry. The spectroscopic results contained within this dissertation can be utilized in the verification and benchmarking of Monte Carlo calculational models of these brachytherapy sources. This body of work was undertaken to establish a usable spectroscopy system and analysis methods for the meaningful study of LDR brachytherapy seeds. The development of a correction algorithm and the analysis of the resultant spectroscopic measurements are presented. The characterization of the spectrometer and the subsequent deconvolution of the measured spectrum to obtain the true spectrum free of any perturbations caused by the spectrometer itself is an important contribution of this work. The approach of spectroscopic deconvolution that was applied in this work is derived in detail and it is applied to the physical measurements. In addition, the spectroscopically based analogs to the LDR dosimetry parameters that are currently employed are detailed, as well as the development of the theory and measurement methods to arrive at these analogs. Several dosimetrically-relevant water-equivalent plastics were also investigated for their transmission properties within a liquid water environment, as well as in air. The framework for the accurate spectrometry of LDR sources is established as a result of this dissertation work. In addition to the measurement and analysis methods, this work presents the basic measured spectroscopic characteristics of each LDR seed currently in use in the clinic today.
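
    The deconvolution step described here amounts to unfolding the spectrometer's response from the measured spectrum. A minimal sketch using non-negative least squares; the response matrix R is assumed to come from the spectrometer characterization, and the dissertation's actual deconvolution method may differ:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unfold_spectrum(R, measured):
        """Deconvolve the detector response: measured ≈ R @ true_spectrum,
        where column j of R is the spectrometer's response to monoenergetic
        photons of energy E_j. Non-negativity keeps the unfolded photon
        spectrum physical."""
        true_spectrum, residual = nnls(R, measured)
        return true_spectrum
    ```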

  14. Biosignatures and Planetary Properties to be Investigated by the TPF Mission

    NASA Technical Reports Server (NTRS)

    DesMarais, David J.; Harwit, Martin; Jucks, Kenneth; Kasting, James F.; Woolf, Neville; Lin, Douglas; Seager, Sara; Schneider, Jean; Traub, Wesley; Lunine, Jonathan I.

    2002-01-01

    A major goal of the Terrestrial Planet Finder (TPF) mission is to provide data to the biologists and atmospheric chemists who will be best able to evaluate the observations for evidence of life. This white paper reviews the benefits and challenges associated with remote spectroscopic observations of planets; it recommends wavelength ranges and spectral features; and it provides algorithms for the detection of these features.

  15. Ultrafast spectroscopy reveals subnanosecond peptide conformational dynamics and validates molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Spörlein, Sebastian; Carstens, Heiko; Satzger, Helmut; Renner, Christian; Behrendt, Raymond; Moroder, Luis; Tavan, Paul; Zinth, Wolfgang; Wachtveitl, Josef

    2002-06-01

    Femtosecond time-resolved spectroscopy on model peptides with built-in light switches combined with computer simulation of light-triggered motions offers an attractive integrated approach toward the understanding of peptide conformational dynamics. It was applied to monitor the light-induced relaxation dynamics occurring on subnanosecond time scales in a peptide that was backbone-cyclized with an azobenzene derivative as optical switch and spectroscopic probe. The femtosecond spectra permit the clear distinguishing and characterization of the subpicosecond photoisomerization of the chromophore, the subsequent dissipation of vibrational energy, and the subnanosecond conformational relaxation of the peptide. The photochemical cis/trans-isomerization of the chromophore and the resulting peptide relaxations have been simulated with molecular dynamics calculations. The calculated reaction kinetics, as monitored by the energy content of the peptide, were found to match the spectroscopic data. Thus we verify that all-atom molecular dynamics simulations can quantitatively describe the subnanosecond conformational dynamics of peptides, strengthening confidence in corresponding predictions for longer time scales.

  16. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.

  17. Toward variational assimilation of SARAL/Altika altimeter data in a North Atlantic circulation model at eddy-permitting resolution: assessment of a NEMO-based 4D-VAR system

    NASA Astrophysics Data System (ADS)

    Bouttier, Pierre-Antoine; Brankart, Jean-Michel; Candille, Guillem; Vidard, Arthur; Blayo, Eric; Verron, Jacques; Brasseur, Pierre

    2015-04-01

    In this project, the response of a variational data assimilation system based on NEMO and its tangent linear and adjoint models is investigated using a 4DVAR algorithm in a North Atlantic model at eddy-permitting resolution. The assimilated data consist of the Jason-2 and SARAL/AltiKa datasets collected during the 2013-2014 period. The main objective is to explore the robustness of the 4DVAR algorithm in the context of a realistic turbulent oceanic circulation at mid-latitude constrained by multi-satellite altimetry missions. This work relies on two previous studies. First, a study with similar objectives was performed on an academic double-gyre turbulent model with synthetic SARAL/AltiKA data, using the same DA experimental framework; its main goal was to investigate the impact of turbulence on the performance of variational DA methods. Comparison with this previous work will bring to light the methodological and physical issues encountered by variational DA algorithms in a realistic context at a similar, eddy-permitting spatial resolution. We have also demonstrated how a dataset mimicking future SWOT observations improves incremental 4DVAR performance at eddy-permitting resolution. Second, in the context of the OSTST and FP7 SANGOMA projects, an ensemble DA experiment based on the same model and observational datasets has been realized (see poster by Brasseur et al.). This work offers the opportunity to compare the efficiency, advantages, and disadvantages of both DA methods in the context of Ka-band altimetric data, at the spatial resolution commonly used today for research and operational applications. In this poster we present the validation plan proposed to evaluate the skill of the variational experiment versus the ensemble assimilation experiments covering the same period, using independent observations (e.g., from the Cryosat-2 mission).

  18. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface, defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is an order of magnitude faster than the original Connolly algorithm.

  19. A Genetic Algorithm Approach to Door Assignment in Breakbulk Terminals

    DOT National Transportation Integrated Search

    2001-08-23

    Commercial vehicle regulation and enforcement is a necessary and important function of state governments. Through regulation, states promote highway safety, ensure that motor carriers have the proper licenses and operating permits, and collect taxes ...

  20. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  1. Construction of hydrodynamic bead models from high-resolution X-ray crystallographic or nuclear magnetic resonance data.

    PubMed Central

    Byron, O

    1997-01-01

    Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented to the computer interface as assemblies of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximum possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high-resolution X-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627
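
    The reduction behind an atoms-to-beads algorithm can be sketched as grid binning. The following is a minimal illustration of the idea, not the published AtoB procedure; the cell size and the volume-conserving bead radii are assumptions.

      # Grid-binning reduction from atoms to beads: each occupied cubic cell
      # becomes one bead at the local centroid, with a radius that conserves
      # the summed atomic volume of the cell's atoms.
      import numpy as np

      def atoms_to_beads(coords, radii, cell=5.0):
          keys = [tuple(k) for k in np.floor(coords / cell).astype(int)]
          cells = {}
          for key, xyz, r in zip(keys, coords, radii):
              cells.setdefault(key, []).append((xyz, r))
          centers, bead_radii = [], []
          for members in cells.values():
              pts = np.array([m[0] for m in members])
              vol = sum(4.0 / 3.0 * np.pi * m[1]**3 for m in members)
              centers.append(pts.mean(axis=0))             # cell centroid
              bead_radii.append((vol * 3.0 / (4.0 * np.pi)) ** (1.0 / 3.0))
          return np.array(centers), np.array(bead_radii)

      # Usage: reduce 1000 random "atoms" of radius 1.7 A to a bead model.
      coords = np.random.rand(1000, 3) * 40.0
      beads, brad = atoms_to_beads(coords, np.full(1000, 1.7))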

  2. A Markov random field based approach to the identification of meat and bone meal in feed by near-infrared spectroscopic imaging.

    PubMed

    Jiang, Xunpeng; Yang, Zengling; Han, Lujia

    2014-07-01

    Contaminated meat and bone meal (MBM) in animal feedstuff has been the source of bovine spongiform encephalopathy (BSE) in cattle, leading to a ban on its use, so methods for its detection are essential. In this study, five pure feed and five pure MBM samples were used to prepare two sets of sample arrangements: set A for investigating the discrimination of individual feed/MBM particles and set B for larger numbers of overlapping particles. The two sets were used to test a Markov random field (MRF)-based approach. A Fourier transform infrared (FT-IR) imaging system was used for data acquisition. The spatial resolution of the near-infrared (NIR) spectroscopic image was 25 μm × 25 μm. Each spectrum was the average of 16 scans across the wavenumber range 7,000-4,000 cm(-1), at intervals of 8 cm(-1). This study introduces an innovative approach to analyzing NIR spectroscopic images: an MRF-based approach developed using the iterated conditional modes (ICM) algorithm, integrating an initial labeling derived from support vector machine discriminant analysis (SVMDA) with observation data derived from principal component analysis (PCA). The results showed that MBM covered by feed could be successfully recognized with an overall accuracy of 86.59% and a Kappa coefficient of 0.68. Compared with conventional methods, the MRF-based approach is capable of extracting spectral information combined with spatial information from NIR spectroscopic images. This new approach enhances the identification of MBM using NIR spectroscopic imaging.
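
    The core labeling step lends itself to a compact sketch. Below is a minimal, hypothetical illustration of iterated conditional modes on a Potts-prior MRF, assuming a generic per-pixel class-cost map stands in for the SVMDA/PCA-derived terms used in the paper; `beta` and the 4-neighborhood are assumptions.

      # Iterated conditional modes on a pixel-label MRF: each pixel takes the
      # label minimizing its own cost plus a Potts penalty for disagreeing
      # with its 4-neighborhood, sweeping repeatedly over the image.
      import numpy as np

      def icm(unary, beta=1.0, n_iter=10):
          h, w, n_classes = unary.shape
          labels = unary.argmin(axis=2)            # initial labeling
          for _ in range(n_iter):
              for i in range(h):
                  for j in range(w):
                      costs = unary[i, j].copy()
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                          ni, nj = i + di, j + dj
                          if 0 <= ni < h and 0 <= nj < w:
                              costs += beta * (np.arange(n_classes) != labels[ni, nj])
                      labels[i, j] = costs.argmin()
          return labels

      # Usage: random 2-class cost map standing in for the SVMDA/PCA terms.
      smoothed = icm(np.random.rand(64, 64, 2), beta=0.8)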

  3. The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: effect of smoothing of density field on reconstruction and anisotropic BAO analysis

    NASA Astrophysics Data System (ADS)

    Vargas-Magaña, Mariana; Ho, Shirley; Fromenteau, Sebastien; Cuesta, Antonio J.

    2017-05-01

    The reconstruction algorithm introduced by Eisenstein et al., which is widely used in clustering analysis, is based on the inference of the first-order Lagrangian displacement field from the Gaussian-smoothed galaxy density field in redshift space. The smoothing scale applied to the density field affects the inferred displacement field that is used to move the galaxies, and partially erases the non-linear evolution of the density field. In this article, we explore this crucial step in the reconstruction algorithm. We study the performance of the reconstruction technique using two metrics: first, we study the performance using the anisotropic clustering, extending previous studies focused on isotropic clustering; secondly, we study its effect on the displacement field. We find that smoothing has a strong effect on the quadrupole of the correlation function and affects the accuracy and precision with which we can measure DA(z) and H(z). We find that the optimal smoothing scale to use in the reconstruction algorithm applied to the Baryon Oscillation Spectroscopic Survey Constant (stellar) MASS (CMASS) sample is between 5 and 10 h⁻¹ Mpc. Varying the smoothing scale from the 'usual' 15 h⁻¹ Mpc to 5 h⁻¹ Mpc shows ~0.3 per cent variations in DA(z) and ~0.4 per cent in H(z), and uncertainties are also reduced by 40 per cent and 30 per cent, respectively. We also find that the accuracy of velocity field reconstruction depends strongly on the smoothing scale used for the density field. We measure the bias and uncertainties associated with different choices of smoothing length.
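
    The smoothing step under study can be sketched as follows. This is a simplified illustration, assuming a periodic box, a density field on a cubic grid, and the plain Zel'dovich relation without the bias and redshift-space-distortion corrections used in actual reconstruction pipelines.

      # Gaussian-smooth the density field in Fourier space and infer the
      # first-order (Zel'dovich) displacement, psi(k) = i k delta_s(k) / k^2.
      import numpy as np

      def displacement_field(delta, box_size, smooth_R=10.0):
          n = delta.shape[0]
          kf = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
          kx, ky, kz = np.meshgrid(kf, kf, kf, indexing='ij')
          k2 = kx**2 + ky**2 + kz**2
          k2[0, 0, 0] = 1.0                        # avoid 0/0 at k = 0
          delta_k = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth_R**2)
          psi = []
          for ki in (kx, ky, kz):
              psi_k = 1j * ki / k2 * delta_k
              psi_k[0, 0, 0] = 0.0                 # no mean displacement
              psi.append(np.real(np.fft.ifftn(psi_k)))
          return psi                               # x, y, z components

      # Usage: a 64^3 toy field in a 500 h^-1 Mpc box, R = 10 h^-1 Mpc.
      psi_x, psi_y, psi_z = displacement_field(np.random.randn(64, 64, 64), 500.0)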

  4. VHP - An environment for the remote visualization of heuristic processes

    NASA Technical Reports Server (NTRS)

    Crawford, Stuart L.; Leiner, Barry M.

    1991-01-01

    A software system called VHP is introduced which permits the visualization of heuristic algorithms on both resident and remote hardware platforms. VHP is based on the DCF tool for interprocess communication and is applicable to remote algorithms which may run on different types of hardware and be written in languages other than that of VHP. The VHP system is of particular interest for systems in which the visualization of remote processes is required, such as robotics for telescience applications.

  5. A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].

  6. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation-intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application-specific coprocessor for solving real-world problems at extremely high data rates.

  7. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)

    1995-01-01

    High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific coprocessors for solving real-world problems at extremely high data rates.

  8. NOAA-NASA Coastal Zone Color Scanner reanalysis effort.

    PubMed

    Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W

    2002-03-20

    Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms, and (2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  9. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February, 1999 - 12 February, 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.

  10. Plasma spectroscopy analysis technique based on optimization algorithms and spectral synthesis for arc-welding quality assurance.

    PubMed

    Mirapeix, J; Cobo, A; González, D A; López-Higuera, J M

    2007-02-19

    A new plasma spectroscopy analysis technique based on the generation of synthetic spectra by means of optimization processes is presented in this paper. The technique has been developed for application in arc-welding quality assurance. The new approach has been checked through several experimental tests, yielding results in reasonably good agreement with those offered by the traditional spectroscopic analysis technique.
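
    The core idea, matching an observed plasma spectrum with an optimized synthetic one, can be sketched with a generic nonlinear least-squares fit. The line positions, Gaussian profiles, and noise level below are illustrative assumptions, not the authors' model.

      # Fit a synthetic spectrum (sum of Gaussian lines at known centers) to
      # an observed one by nonlinear least squares.
      import numpy as np
      from scipy.optimize import least_squares

      def synthetic(wl, centers, params):
          amps, widths = params[:len(centers)], params[len(centers):]
          spec = np.zeros_like(wl)
          for c, a, w in zip(centers, amps, widths):
              spec += a * np.exp(-0.5 * ((wl - c) / w) ** 2)
          return spec

      wl = np.linspace(400.0, 410.0, 500)          # nm, assumed window
      centers = np.array([402.0, 406.5])           # assumed line positions
      observed = synthetic(wl, centers, np.array([1.0, 0.6, 0.3, 0.4]))
      observed += np.random.normal(0.0, 0.02, wl.size)

      fit = least_squares(lambda p: synthetic(wl, centers, p) - observed,
                          x0=np.ones(4) * 0.5, bounds=(1e-3, np.inf))
      print(fit.x)                                  # amplitudes then widths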

  11. Deriving health utilities from the MacNew Heart Disease Quality of Life Questionnaire.

    PubMed

    Chen, Gang; McKie, John; Khan, Munir A; Richardson, Jeff R

    2015-10-01

    Quality of life is included in the economic evaluation of health services by measuring the preference for health states, i.e. health state utilities. However, most intervention studies include a disease-specific, not a utility, instrument. Consequently, there has been increasing use of statistical mapping algorithms which permit utilities to be estimated from a disease-specific instrument. The present paper provides such algorithms between the MacNew Heart Disease Quality of Life Questionnaire (MacNew) instrument and six multi-attribute utility (MAU) instruments: the EuroQol (EQ-5D), the Short Form 6D (SF-6D), the Health Utilities Index (HUI) 3, the Quality of Wellbeing (QWB), the 15D (15 Dimension) and the Assessment of Quality of Life (AQoL-8D). Heart disease patients and members of the healthy public were recruited from six countries. Non-parametric rank tests were used to compare subgroup utilities and MacNew scores. Mapping algorithms were estimated using three separate statistical techniques. The mapping algorithms achieved a high degree of precision. Based on the mean absolute error and the intra-class correlation, the preferred mapping is MacNew into SF-6D or 15D. Using the R-squared statistic, the preferred mapping is MacNew into AQoL-8D. The algorithms reported in this paper enable MacNew data to be mapped into utilities predicted from any of six instruments. This permits studies which have included the MacNew to be used in cost-utility analyses which, in turn, allows the comparison of services with interventions across the health system. © The European Society of Cardiology 2014.
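
    A mapping algorithm of this kind reduces, in its simplest form, to a regression fitted on paired instrument scores. The sketch below is a hypothetical quadratic OLS mapping on simulated data, purely to show the mechanics; the published algorithms use the paper's multi-country dataset and three estimation techniques.

      # A hypothetical quadratic OLS mapping from MacNew scores to utilities,
      # fitted on simulated data.
      import numpy as np

      rng = np.random.default_rng(0)
      macnew = rng.uniform(1.0, 7.0, 200)            # simulated MacNew scores
      utility = np.clip(0.1 + 0.12 * macnew + rng.normal(0, 0.05, 200), 0, 1)

      X = np.column_stack([np.ones_like(macnew), macnew, macnew**2])
      coef, *_ = np.linalg.lstsq(X, utility, rcond=None)

      def predict_utility(score):
          # Map a disease-specific score to a predicted utility.
          return coef[0] + coef[1] * score + coef[2] * score**2

      print(predict_utility(5.0))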

  12. A single chip VLSI Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.

    1986-01-01

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.

  13. Parallel detecting, spectroscopic ellipsometers/polarimeters

    DOEpatents

    Furtak, Thomas E.

    2002-01-01

    The parallel detecting spectroscopic ellipsometer/polarimeter sensor has no moving parts and operates in real-time for in-situ monitoring of the thin film surface properties of a sample within a processing chamber. It includes a multi-spectral source of radiation for producing a collimated beam of radiation directed towards the surface of the sample through a polarizer. The thus polarized collimated beam of radiation impacts and is reflected from the surface of the sample, thereby changing its polarization state due to the intrinsic material properties of the sample. The light reflected from the sample is separated into four separate polarized filtered beams, each having individual spectral intensities. Data about said four individual spectral intensities is collected within the processing chamber, and is transmitted into one or more spectrometers. The data of all four individual spectral intensities is then analyzed using transformation algorithms, in real-time.

  14. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination.

    PubMed

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithm development. The novel diagnostic scheme implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification, as well as organ-specific probabilistic diagnostics using different diagnostic algorithms. A free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide a diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms were further applied prospectively on 10 gastric patients at gastroscopy, achieving a predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. The receiver operating characteristic curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves the biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.
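
    The outlier-screening step lends itself to a compact sketch. Below is a minimal illustration of Hotelling's T-squared and Q-residual statistics computed from a PCA model, with random stand-in spectra; the component count and acceptance thresholds in practice come from the training database.

      # PCA outlier screen: Hotelling's T-squared measures variation inside
      # the model subspace, the Q-residual (SPE) variation outside it.
      import numpy as np

      def fit_pca(X, n_comp=5):
          mu = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
          P = Vt[:n_comp].T                        # loading vectors
          var = s[:n_comp] ** 2 / (X.shape[0] - 1) # score variances
          return mu, P, var

      def t2_and_q(x, mu, P, var):
          t = (x - mu) @ P                         # scores of the new spectrum
          t2 = np.sum(t**2 / var)                  # Hotelling's T-squared
          resid = (x - mu) - P @ t                 # part the model misses
          return t2, np.sum(resid**2)              # (T2, Q-residual)

      train = np.random.rand(100, 50)              # stand-in training spectra
      mu, P, var = fit_pca(train)
      t2, q = t2_and_q(np.random.rand(50), mu, P, var)
      # A spectrum is accepted only if both statistics fall below their
      # confidence thresholds; otherwise it is flagged as an outlier.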

  15. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination

    NASA Astrophysics Data System (ADS)

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithm development. The novel diagnostic scheme implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification, as well as organ-specific probabilistic diagnostics using different diagnostic algorithms. A free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide a diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms were further applied prospectively on 10 gastric patients at gastroscopy, achieving a predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. The receiver operating characteristic curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves the biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.

  16. Real time workload classification from an ambulatory wireless EEG system using hybrid EEG electrodes.

    PubMed

    Matthews, R; Turner, P J; McDonald, N J; Ermolaev, K; Manus, T; Shelby, R A; Steindorf, M

    2008-01-01

    This paper describes a compact, lightweight and ultra-low-power ambulatory wireless EEG system based upon QUASAR's innovative noninvasive bioelectric sensor technologies. The sensors operate through hair without skin preparation or conductive gels. Mechanical isolation built into the harness permits the recording of high-quality EEG data during ambulation. Advanced algorithms developed for this system permit real-time classification of workload during subject motion. Measurements made using the EEG system during ambulation are presented, including results for real-time classification of subject workload.

  17. Knowledge requirements for automated inference of medical textbook markup.

    PubMed Central

    Berrios, D. C.; Kehler, A.; Fagan, L. M.

    1999-01-01

    Indexing medical text in journals or textbooks requires a tremendous amount of resources. We tested two algorithms for automatically indexing nouns, noun-modifiers, and noun phrases, and inferring selected binary relations between UMLS concepts in a textbook of infectious disease. Sixty-six percent of nouns and noun-modifiers and 81% of noun phrases were correctly matched to UMLS concepts. Semantic relations were identified with 100% specificity and 94% sensitivity. For some medical sub-domains, these algorithms could permit expeditious generation of more complex indexing. PMID:10566445

  18. THE SDSS-III BARYON OSCILLATION SPECTROSCOPIC SURVEY: QUASAR TARGET SELECTION FOR DATA RELEASE NINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Nicholas P.; Kirkpatrick, Jessica A.; Carithers, William C.

    2012-03-01

    The SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), a five-year spectroscopic survey of 10,000 deg², achieved first light in late 2009. One of the key goals of BOSS is to measure the signature of baryon acoustic oscillations (BAOs) in the distribution of Lyα absorption from the spectra of a sample of ~150,000 z > 2.2 quasars. Along with measuring the angular diameter distance at z ≈ 2.5, BOSS will provide the first direct measurement of the expansion rate of the universe at z > 2. One of the biggest challenges in achieving this goal is an efficient target selection algorithm for quasars in the redshift range 2.2 < z < 3.5, where their colors tend to overlap those of the far more numerous stars. During the first year of the BOSS survey, quasar target selection (QTS) methods were developed and tested to meet the requirement of delivering at least 15 quasars deg⁻² in this redshift range, with a goal of 20 out of 40 targets deg⁻² allocated to the quasar survey. To achieve these surface densities, the magnitude limit of the quasar targets was set at g ≤ 22.0 or r ≤ 21.85. While detection of the BAO signature in the distribution of Lyα absorption in quasar spectra does not require a uniform target selection algorithm, many other astrophysical studies do. We have therefore defined a uniformly selected subsample of 20 targets deg⁻², for which the selection efficiency is just over 50% (~10 z > 2.20 quasars deg⁻²). This 'CORE' subsample will be fixed for Years Two through Five of the survey. For the remaining 20 targets deg⁻², we will continue to develop improved selection techniques, including the use of additional data sets beyond the Sloan Digital Sky Survey (SDSS) imaging data. In this paper, we describe the evolution and implementation of the BOSS QTS algorithms during the first two years of BOSS operations (through 2011 July), in support of the science investigations based on these data, and we analyze the spectra obtained during the first year. During this year, 11,263 new z > 2.20 quasars were spectroscopically confirmed by BOSS, roughly double the number of previously known quasars with z > 2.20. Our current algorithms select an average of 15 z > 2.20 quasars deg⁻² from 40 targets deg⁻² using single-epoch SDSS imaging. Multi-epoch optical data and data at other wavelengths can further improve the efficiency and completeness of BOSS QTS.

  19. Design and use of a servo-controlled high pressure window bomb in spectroscopic studies of solid propellant combustion

    NASA Technical Reports Server (NTRS)

    Goetz, F.; Mann, D. M.

    1980-01-01

    The feasibility of using a high pressure window bomb as a laboratory scale model of actual motor conditions was investigated. The design and operation of a modified high pressure window bomb are discussed. An optical servo-control mechanism has been designed to hold the burning surface of a propellant strand at a fixed position within the bomb chamber. This mechanism permits the recording of visible and infrared emission spectra from various propellants. Preliminary visible emission spectra of a nonmetalized and a metalized propellant are compared with spectra recorded using the modified bomb.

  20. Recent advances in multidimensional ultrafast spectroscopy

    NASA Astrophysics Data System (ADS)

    Oliver, Thomas A. A.

    2018-01-01

    Multidimensional ultrafast spectroscopies are among the premier tools for investigating condensed phase dynamics of biological, chemical and functional nanomaterial systems. As they reach maturity, the variety of frequency domains that can be explored has vastly increased, with experimental techniques capable of correlating excitation and emission frequencies from the terahertz through to the ultraviolet. Some of the most recent innovations also include extreme cross-peak spectroscopies that directly correlate the dynamics of electronic and vibrational states. This review article summarizes the key technological advances that have permitted these recent advances, and the insights gained from new multidimensional spectroscopic probes.

  1. Recent advances in multidimensional ultrafast spectroscopy

    PubMed Central

    2018-01-01

    Multidimensional ultrafast spectroscopies are among the premier tools for investigating condensed phase dynamics of biological, chemical and functional nanomaterial systems. As they reach maturity, the variety of frequency domains that can be explored has vastly increased, with experimental techniques capable of correlating excitation and emission frequencies from the terahertz through to the ultraviolet. Some of the most recent innovations also include extreme cross-peak spectroscopies that directly correlate the dynamics of electronic and vibrational states. This review article summarizes the key technological advances that have permitted these recent advances, and the insights gained from new multidimensional spectroscopic probes. PMID:29410844

  2. Spectroscopic photon localization microscopy: breaking the resolution limit of single molecule localization microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Biqin; Almassalha, Luay Matthew; Urban, Ben E.; Nguyen, The-Quyen; Khuon, Satya; Chew, Teng-Leong; Backman, Vadim; Sun, Cheng; Zhang, Hao F.

    2017-02-01

    Distinguishing minute differences in spectroscopic signatures is crucial for revealing the fluorescence heterogeneity among fluorophores to achieve high molecular specificity. Here we report spectroscopic photon localization microscopy (SPLM), a newly developed far-field spectroscopic imaging technique, which achieves nanoscopic resolution based on the principle of single-molecule localization microscopy while simultaneously uncovering the inherent molecular spectroscopic information associated with each stochastic event (Dong et al., Nature Communications 2016, in press). In SPLM, using a slit-less monochromator, both the zero-order and the first-order diffractions from a grating are recorded simultaneously by an electron-multiplying charge-coupled device to reveal, respectively, the spatial distribution and the associated emission spectra of individual stochastic radiation events. As a result, the origins of photon emissions from different molecules can be identified according to their spectral differences with sub-nm spectral resolution, even when the molecules are within close proximity. With newly developed algorithms, including background subtraction and spectral overlap unmixing, we established and tested a method which can significantly extend the fundamental spatial resolution limit of single-molecule localization microscopy by molecular discrimination through spectral regression. Taking advantage of this unique capability, we demonstrated an improvement in the spatial resolution of PALM/STORM of up to tenfold with selected fluorophores. This technique can be readily adopted by other research groups to greatly enhance the optical resolution of single-molecule localization microscopy without the need to modify their existing staining methods and protocols. This new resolving capability can potentially provide new insights into biological phenomena and enable significant research progress in the life sciences.

  3. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under a fault-tolerance requirement. Our approach incorporates both the redundancy-and-masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, redundancy and masking are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  4. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.

  5. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
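
    The calibration loop can be sketched with a toy example. Everything below, the automaton's spread/spontaneous-activation rule, the "observed" activity level, and the GA settings, is an illustrative assumption, not the authors' model of mining-permit activity.

      # Toy GA calibration of a probabilistic CA: evolve (p_spread, p_spont)
      # so the automaton's final activity matches an "observed" level.
      import numpy as np

      rng = np.random.default_rng(1)

      def run_ca(params, steps=20, n=64):
          p_spread, p_spont = params
          grid = np.zeros((n, n), dtype=bool)
          grid[n // 2, n // 2] = True
          for _ in range(steps):
              nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)) - grid
              grid |= (nbrs > 0) & (rng.random((n, n)) < p_spread)
              grid |= rng.random((n, n)) < p_spont
          return grid.mean()                       # fraction of active cells

      target = 0.30                                # assumed observed activity

      def fitness(params):
          return -abs(run_ca(params) - target)

      pop = rng.random((20, 2)) * np.array([0.5, 0.01])
      for gen in range(30):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)][-10:]  # keep the fitter half
          i1, i2 = rng.integers(0, 10, 10), rng.integers(0, 10, 10)
          mask = rng.random((10, 2)) < 0.5
          children = np.where(mask, parents[i1], parents[i2])   # crossover
          children = np.clip(children + rng.normal(0, 0.01, (10, 2)), 0, 1)
          pop = np.vstack([parents, children])
      print(pop[np.argmax([fitness(p) for p in pop])])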

  6. Automated detection of sperm whale sounds as a function of abrupt changes in sound intensity

    NASA Astrophysics Data System (ADS)

    Walker, Christopher D.; Rayborn, Grayson H.; Brack, Benjamin A.; Kuczaj, Stan A.; Paulos, Robin L.

    2003-04-01

    An algorithm designed to detect abrupt changes in sound intensity was developed and used to identify and count sperm whale vocalizations and to measure boat noise. The algorithm is a MATLAB routine that counts the number of occurrences for which the change in intensity level exceeds a threshold. The algorithm also permits the setting of a "dead time" interval to prevent the counting of multiple pulses within a single sperm whale click. This algorithm was used to analyze digitally sampled recordings of ambient noise obtained from the Gulf of Mexico using near-bottom-mounted EARS buoys deployed as part of the Littoral Acoustic Demonstration Center experiment. Because the background in these data varied slowly, the result of the application of the algorithm was automated detection of sperm whale clicks and creaks, with results that agreed well with those obtained by trained human listeners. [Research supported by ONR.]
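
    The detection logic is simple enough to sketch. The original is a MATLAB routine; below is a Python sketch of the same idea, with the window length, threshold, and dead time chosen as illustrative assumptions.

      # Count abrupt intensity jumps, ignoring further crossings within a
      # dead-time window so a multi-pulse click is counted once.
      import numpy as np

      def count_clicks(signal, fs, threshold_db=6.0, dead_time=0.05, win=256):
          n_win = len(signal) // win
          frames = signal[:n_win * win].reshape(n_win, win)
          level = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
          dead = max(1, int(dead_time * fs / win)) # dead time in frames
          clicks, last = [], -dead
          for i in range(1, n_win):
              if level[i] - level[i - 1] > threshold_db and i - last > dead:
                  clicks.append(i * win / fs)      # click time in seconds
                  last = i
          return clicks

      # Usage: noise with two impulsive "clicks" 0.5 s apart at fs = 48 kHz.
      fs = 48000
      x = np.random.normal(0, 0.01, fs)
      x[12000:12200] += 0.5
      x[36000:36200] += 0.5
      print(count_clicks(x, fs))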

  7. Identifying High-redshift Gamma-Ray Bursts with RATIR

    NASA Astrophysics Data System (ADS)

    Littlejohns, O. M.; Butler, N. R.; Cucchiara, A.; Watson, A. M.; Kutyrev, A. S.; Lee, W. H.; Richer, M. G.; Klein, C. R.; Fox, O. D.; Prochaska, J. X.; Bloom, J. S.; Troja, E.; Ramirez-Ruiz, E.; de Diego, J. A.; Georgiev, L.; González, J.; Román-Zúñiga, C. G.; Gehrels, N.; Moseley, H.

    2014-07-01

    We present a template-fitting algorithm for determining photometric redshifts, z_phot, of candidate high-redshift gamma-ray bursts (GRBs). Using afterglow photometry obtained by the Reionization and Transients InfraRed (RATIR) camera, this algorithm accounts for the intrinsic GRB afterglow spectral energy distribution, host dust extinction, and the effect of neutral hydrogen (local and cosmological) along the line of sight. We present the results obtained by this algorithm and the RATIR photometry of GRB 130606A, finding a range of best-fit solutions, 5.6 < z_phot < 6.0, for models of several host dust extinction laws (none, the Milky Way, the Large Magellanic Cloud, and the Small Magellanic Cloud), consistent with spectroscopic measurements of the redshift of this GRB. Using simulated RATIR photometry, we find that our algorithm provides precise measures of z_phot in the ranges 4 < z_phot ≲ 8 and 9 < z_phot < 10 and can robustly determine when z_phot > 4. Further testing highlights the required caution in cases of highly dust-extincted host galaxies. These tests also show that our algorithm does not erroneously find z_phot < 4 when z ≳ 4, thereby minimizing false negatives and allowing us to rapidly identify all potential high-redshift events.
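
    The template-fitting core can be sketched as a chi-squared scan over a redshift grid. The sketch below omits the host-dust extinction laws and neutral-hydrogen absorption that the RATIR algorithm models, and its inputs are placeholder assumptions.

      # Chi-squared template scan over a redshift grid, with the template
      # amplitude solved analytically at each trial redshift.
      import numpy as np

      def photo_z(flux, err, filt_wl, tmpl_wl, tmpl_flux,
                  z_grid=np.linspace(0.0, 10.0, 501)):
          chi2 = np.empty_like(z_grid)
          w = 1.0 / err**2
          for k, z in enumerate(z_grid):
              model = np.interp(filt_wl, tmpl_wl * (1 + z), tmpl_flux,
                                left=0.0, right=0.0)
              a = np.sum(w * flux * model) / max(np.sum(w * model**2), 1e-30)
              chi2[k] = np.sum(w * (flux - a * model) ** 2)
          return z_grid[np.argmin(chi2)], chi2

      # Usage would pass band fluxes, their errors, the bands' effective
      # wavelengths, and a rest-frame afterglow template spectrum.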

  8. Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy

    PubMed Central

    McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.

    2010-01-01

    Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369

  9. Upper limits to trace constituents in Jupiter's atmosphere from an analysis of its 5 micrometer spectrum

    NASA Technical Reports Server (NTRS)

    Treffers, R. R.; Larson, H. P.; Fink, U.; Gautier, T. N.

    1978-01-01

    A high-resolution spectrum of Jupiter at 5 micrometers recorded at the Kuiper Airborne Observatory is used to determine upper limits to the column density of 19 molecules. The upper limits to the mixing ratios of SiH4, H2S, HCN, and simple hydrocarbons are discussed with respect to current models of Jupiter's atmosphere. These upper limits are compared to expectations based upon the solar abundance of the elements. This analysis permits upper limit measurements (SiH4) or actual detections (GeH4) of molecules with mixing ratios with respect to hydrogen as low as 10⁻⁹. In future observations at 5 micrometers, the sensitivity of remote spectroscopic analyses should permit the study of constituents with mixing ratios as low as 10⁻¹⁰, which would include the hydrides of such elements as Sn and As as well as numerous organic molecules.

  10. On the nature of the symbiotic binary AX Persei

    NASA Technical Reports Server (NTRS)

    Mikolajewska, Joanna; Kenyon, Scott J.

    1992-01-01

    Photometric and spectroscopic observations of the symbiotic binary AX Persei are presented. This system contains a red giant that fills its tidal lobe and transfers material into an accretion disk surrounding a low-mass main-sequence star. The stellar masses - 1 solar mass for the red giant and about 0.4 solar mass for the companion - suggest AX Per is poised to enter a common envelope phase of evolution. The disk luminosity increases from L(disk) about 100 solar luminosity in quiescence to L(disk) about 5700 solar luminosity in outburst for a distance of d = 2.5 kpc. Except for visual maximum, high ionization permitted emission lines - such as He II - imply an EUV luminosity comparable to the disk luminosity. High-energy photons emitted by a hot boundary layer between the disk and central star ionize a surrounding nebula to produce this permitted line emission. High ionization forbidden lines form in an extended, shock-excited region well out of the binary's orbital plane and may be associated with mass loss from the disk.

  11. Broadband external cavity quantum cascade laser based sensor for gasoline detection

    NASA Astrophysics Data System (ADS)

    Ding, Junya; He, Tianbo; Zhou, Sheng; Li, Jinsong

    2018-02-01

    A new type of tunable diode laser spectroscopy sensor based on an external cavity quantum cascade laser (ECQCL) and a quartz crystal tuning fork (QCTF) was used for the quantitative analysis of volatile organic compounds. In this work, the sensor system was tested on the analysis of different gasoline samples. For signal processing, a custom interpolation algorithm and a multiple linear regression model were used for quantitative analysis of the major volatile organic compounds in the gasoline samples. The results were very consistent with the standard spectra taken from the Pacific Northwest National Laboratory (PNNL) database. In the future, the ECQCL sensor will be used for trace explosive, chemical warfare agent, and toxic industrial chemical detection and spectroscopic analysis.
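
    The regression step can be sketched as follows: a measured spectrum is decomposed onto a library of reference spectra, here with a non-negativity constraint on the estimated concentrations. The reference matrix is a random stand-in, not PNNL data.

      # Decompose a measured spectrum onto reference spectra with a
      # non-negativity constraint on the estimated concentrations.
      import numpy as np
      from scipy.optimize import nnls

      n_points = 400
      refs = np.abs(np.random.rand(n_points, 3))   # columns: reference spectra
      true_conc = np.array([0.5, 0.2, 0.0])
      measured = refs @ true_conc + np.random.normal(0, 0.01, n_points)

      conc, resid = nnls(refs, measured)           # constrained MLR fit
      print(conc)                                  # estimated concentrations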

  12. Arc-Welding Spectroscopic Monitoring based on Feature Selection and Neural Networks.

    PubMed

    Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M

    2008-10-21

    A new spectral processing technique designed for on-line detection and classification of arc-welding defects is presented in this paper. A noninvasive fiber sensor embedded within a TIG torch collects the plasma radiation generated during the welding process. The spectral information is then processed in two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which is demonstrated to provide efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in previous works, giving rise to an improvement in the performance of the monitoring system.

  13. How to estimate the 3D power spectrum of the Lyman-α forest

    NASA Astrophysics Data System (ADS)

    Font-Ribera, Andreu; McDonald, Patrick; Slosar, Anže

    2018-01-01

    We derive and numerically implement an algorithm for estimating the 3D power spectrum of the Lyman-α (Lyα) forest flux fluctuations. The algorithm exploits the unique geometry of Lyα forest data to efficiently measure the cross-spectrum between lines of sight as a function of parallel wavenumber, transverse separation and redshift. We start by approximating the global covariance matrix as block-diagonal, where only pixels from the same spectrum are correlated. We then compute the eigenvectors of the derivative of the signal covariance with respect to cross-spectrum parameters, and project the inverse-covariance-weighted spectra onto them. This acts much like a radial Fourier transform over redshift windows. The resulting cross-spectrum inference is then converted into our final product, an approximation of the likelihood for the 3D power spectrum expressed as second order Taylor expansion around a fiducial model. We demonstrate the accuracy and scalability of the algorithm and comment on possible extensions. Our algorithm will allow efficient analysis of the upcoming Dark Energy Spectroscopic Instrument dataset.

  14. How to estimate the 3D power spectrum of the Lyman-α forest

    DOE PAGES

    Font-Ribera, Andreu; McDonald, Patrick; Slosar, Anže

    2018-01-02

    Here, we derive and numerically implement an algorithm for estimating the 3D power spectrum of the Lyman-α (Lyα) forest flux fluctuations. The algorithm exploits the unique geometry of Lyα forest data to efficiently measure the cross-spectrum between lines of sight as a function of parallel wavenumber, transverse separation and redshift. We start by approximating the global covariance matrix as block-diagonal, where only pixels from the same spectrum are correlated. We then compute the eigenvectors of the derivative of the signal covariance with respect to cross-spectrum parameters, and project the inverse-covariance-weighted spectra onto them. This acts much like a radial Fourier transform over redshift windows. The resulting cross-spectrum inference is then converted into our final product, an approximation of the likelihood for the 3D power spectrum expressed as second order Taylor expansion around a fiducial model. We demonstrate the accuracy and scalability of the algorithm and comment on possible extensions. Our algorithm will allow efficient analysis of the upcoming Dark Energy Spectroscopic Instrument dataset.

  15. How to estimate the 3D power spectrum of the Lyman-α forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Font-Ribera, Andreu; McDonald, Patrick; Slosar, Anže

    Here, we derive and numerically implement an algorithm for estimating the 3D power spectrum of the Lyman-α (Lyα) forest flux fluctuations. The algorithm exploits the unique geometry of Lyα forest data to efficiently measure the cross-spectrum between lines of sight as a function of parallel wavenumber, transverse separation and redshift. We start by approximating the global covariance matrix as block-diagonal, where only pixels from the same spectrum are correlated. We then compute the eigenvectors of the derivative of the signal covariance with respect to cross-spectrum parameters, and project the inverse-covariance-weighted spectra onto them. This acts much like a radial Fourier transform over redshift windows. The resulting cross-spectrum inference is then converted into our final product, an approximation of the likelihood for the 3D power spectrum expressed as second order Taylor expansion around a fiducial model. We demonstrate the accuracy and scalability of the algorithm and comment on possible extensions. Our algorithm will allow efficient analysis of the upcoming Dark Energy Spectroscopic Instrument dataset.

  16. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets and found that the most important limitation of the method arises when faint sources lie in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  17. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method is presented for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighting coefficients in the smoothing algorithm. The method was conceived for application in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, distorts neither the form nor the intensity of the signal, so it is well suited to all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, far more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in their accuracy. We still have to evaluate the improvement in the quality of the results when the method is applied to real experimental data. We expect better characterization of the net area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the characteristic functions required for this weighted smoothing method should also be obtained from a system with strong stability. If the sought signals are not perfectly clean, the method should be applied with care.
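
    The idea admits a short sketch: convolve the counting spectrum with a unit-area kernel shaped like the expected signal rather than a rectangle. The Gaussian response and its width below are assumptions; a real application would use the known instrumental line shape.

      # Convolve a counting spectrum with a unit-area kernel shaped like the
      # expected peak, instead of a rectangular window.
      import numpy as np

      def shape_weighted_smooth(counts, sigma=2.0, half_width=6):
          x = np.arange(-half_width, half_width + 1)
          kernel = np.exp(-0.5 * (x / sigma) ** 2)
          kernel /= kernel.sum()                   # preserves total intensity
          return np.convolve(counts, kernel, mode='same')

      ch = np.arange(200)
      peak = 100.0 * np.exp(-0.5 * ((ch - 100) / 2.0) ** 2)
      counts = np.random.poisson(peak + 5)         # Poisson counting noise
      smoothed = shape_weighted_smooth(counts)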

  18. The 2-degree Field Lensing Survey: design and clustering measurements

    NASA Astrophysics Data System (ADS)

    Blake, Chris; Amon, Alexandra; Childress, Michael; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Hinton, Samuel R.; Janssens, Steven; Johnson, Andrew; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; Parkinson, David; Poole, Gregory B.; Wolf, Christian

    2016-11-01

    We present the 2-degree Field Lensing Survey (2dFLenS), a new galaxy redshift survey performed at the Anglo-Australian Telescope. 2dFLenS is the first wide-area spectroscopic survey specifically targeting the area mapped by deep-imaging gravitational lensing fields, in this case the Kilo-Degree Survey. 2dFLenS obtained 70 079 redshifts in the range z < 0.9 over an area of 731 deg², and is designed to extend the data sets available for testing gravitational physics and promote the development of relevant algorithms for joint imaging and spectroscopic analysis. The redshift sample consists first of 40 531 Luminous Red Galaxies (LRGs), which enable analyses of galaxy-galaxy lensing, redshift-space distortion, and the overlapping source redshift distribution by cross-correlation. An additional 28 269 redshifts form a magnitude-limited (r < 19.5) nearly complete subsample, allowing direct source classification and photometric-redshift calibration. In this paper, we describe the motivation, target selection, spectroscopic observations, and clustering analysis of 2dFLenS. We use power spectrum multipole measurements to fit the redshift-space distortion parameter of the LRG sample in two redshift ranges 0.15 < z < 0.43 and 0.43 < z < 0.7 as β = 0.49 ± 0.15 and β = 0.26 ± 0.09, respectively. These values are consistent with those obtained from LRGs in the Baryon Oscillation Spectroscopic Survey. 2dFLenS data products will be released via our website http://2dflens.swin.edu.au.

  19. Convolutional neural networks for vibrational spectroscopic data analysis.

    PubMed

    Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena

    2017-02-15

    In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
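
    The architecture described, a single convolutional layer followed by a classifier, can be sketched in a few lines. This is a hedged PyTorch illustration, not the authors' released software; the layer sizes, kernel width, and dummy data are assumptions.

      # A shallow spectral CNN: one 1D convolution, then a linear classifier.
      import torch
      import torch.nn as nn

      class ShallowSpectralCNN(nn.Module):
          def __init__(self, n_filters=8, kernel=9, length=700, n_classes=2):
              super().__init__()
              self.conv = nn.Conv1d(1, n_filters, kernel, padding=kernel // 2)
              self.head = nn.Linear(n_filters * length, n_classes)

          def forward(self, x):                    # x: (batch, 1, length)
              h = torch.relu(self.conv(x))         # learned spectral filters
              return self.head(h.flatten(1))       # class logits

      model = ShallowSpectralCNN()
      x = torch.randn(4, 1, 700)                   # four dummy spectra
      loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))
      loss.backward()                              # gradients for one step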

  20. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    PubMed

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits one to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting one to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  1. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip☆

    PubMed Central

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits one to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting one to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  2. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
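
    The core ABFT idea for matrix multiplication can be sketched in a few lines: carry a checksum through the product and verify it afterwards. A minimal NumPy illustration (a schematic of the technique, not the flight code, and without the error-localization step):

    ```python
    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Multiply A @ B and verify the result with a column-sum checksum.

        A bit flip corrupting an element of C breaks the identity
        (1^T A) B == 1^T C, which costs only O(n^2) to check after the
        O(n^3) product. (Locating and correcting the error would need a
        row checksum as well; omitted here.)
        """
        C = A @ B
        expected = A.sum(axis=0) @ B      # 1^T A, then times B
        observed = C.sum(axis=0)          # 1^T C
        if not np.allclose(expected, observed, atol=tol):
            raise RuntimeError("checksum mismatch: possible radiation-induced error")
        return C
    ```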

  3. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, H.M.; Reed, I.S.

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time-domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time-domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. These improvements result in both enhanced capability and a significant reduction in silicon area, making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.

  4. A generalized method for multiple robotic manipulator programming applied to vertical-up welding

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh

    1991-01-01

    The application of a weld programming algorithm for vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while the sum of squares of joint displacement is minimized; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. Also presented is a modification of this algorithm which permits it to be used for vertical-up welding. The details of this modification are discussed, and simulation examples are provided for illustration and verification.
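
    The minimum-sum-of-squares joint motion that the algorithm enforces corresponds to the Moore-Penrose pseudoinverse solution of the differential kinematics. A small NumPy sketch with an assumed, illustrative Jacobian:

    ```python
    import numpy as np

    def min_norm_joint_step(J, dx):
        """Joint step dq solving J @ dq = dx with minimal sum of squares.

        np.linalg.pinv returns the minimum-norm solution, which is the
        'minimize joint displacement' criterion for a kinematically
        redundant welding manipulator.
        """
        return np.linalg.pinv(J) @ dx

    # Hypothetical redundant 3-joint planar arm; J is an assumed Jacobian
    # at the current pose, dx the desired torch-tip motion.
    J = np.array([[1.0, 0.8, 0.3],
                  [0.0, 0.6, 0.9]])
    dq = min_norm_joint_step(J, np.array([0.01, 0.02]))
    ```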

  5. Type Ia supernova rate studies from the SDSS-II Supernova Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilday, Benjamin

    2008-08-01

    The author presents new measurements of the type Ia SN rate from the SDSS-II Supernova Survey. The SDSS-II Supernova Survey was carried out during the Fall months (Sept.-Nov.) of 2005-2007 and discovered ~ 500 spectroscopically confirmed SNe Ia with densely sampled (once every ~ 4 days), multi-color light curves. Additionally, the SDSS-II Supernova Survey has discovered several hundred SNe Ia candidates with well-measured light curves, but without spectroscopic confirmation of type. This total, achieved in 9 months of observing, represents ~ 15-20% of the total SNe Ia discovered worldwide since 1885. The author describes some technical details of the SN Survey observations and SN search algorithms that contributed to the extremely high yield of discovered SNe and that are important as context for the SDSS-II Supernova Survey SN Ia rate measurements.

  6. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy

    PubMed Central

    Sathyavathi, R.; Saha, Anushree; Soares, Jaqueline S.; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-01-01

    Microcalcifications are an early mammographic sign of breast cancer and a frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly those of the type II variety composed of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between the pathogenicity of breast lesions in fresh biopsy cores and the composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize the chemical makeup of the lesions. We find that an increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93–98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance. PMID:25927331
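
    As a schematic of how a two-feature diagnostic algorithm of this kind is scored, the sketch below fits a generic classifier on synthetic stand-ins for carbonate and cytoplasmic-protein content and computes the negative predictive value the standard way; the paper's actual chemometric model and data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for per-lesion carbonate and cytoplasmic-protein
    # fit coefficients; y = 1 marks a (simulated) malignant lesion.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=200) > 0).astype(int)

    clf = LogisticRegression().fit(X, y)
    pred = clf.predict(X)
    tn = np.sum((pred == 0) & (y == 0))   # true negatives
    fn = np.sum((pred == 0) & (y == 1))   # false negatives
    print("negative predictive value:", tn / (tn + fn))
    ```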

  7. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy

    NASA Astrophysics Data System (ADS)

    Sathyavathi, R.; Saha, Anushree; Soares, Jaqueline S.; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-04-01

    Microcalcifications are an early mammographic sign of breast cancer and a frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly those of the type II variety composed of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between the pathogenicity of breast lesions in fresh biopsy cores and the composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize the chemical makeup of the lesions. We find that an increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93-98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance.

  8. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy.

    PubMed

    Sathyavathi, R; Saha, Anushree; Soares, Jaqueline S; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-04-30

    Microcalcifications are an early mammographic sign of breast cancer and a frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly those of the type II variety composed of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between the pathogenicity of breast lesions in fresh biopsy cores and the composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize the chemical makeup of the lesions. We find that an increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93-98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance.

  9. The SOLAR-C Mission: Science Objectives and Current Status

    NASA Astrophysics Data System (ADS)

    Suematsu, Y.; Solar-C Working Group

    2016-04-01

    SOLAR-C is a Japan-led international solar mission, planned for the mid-2020s, designed to investigate the magnetic activities of the Sun, focusing on the heating and dynamical phenomena of the chromosphere and corona, and to advance algorithms for predicting short- and long-term solar magnetic activity. For these purposes, SOLAR-C will carry three dedicated instruments: the Solar UV-Vis-IR Telescope (SUVIT), the EUV Spectroscopic Telescope (EUVST) and the High Resolution Coronal Imager (HCI), to jointly observe the entire visible solar atmosphere with essentially the same high spatial resolution (0.1"-0.3"), performing high-resolution spectroscopic measurements over all atmospheric regions and spectro-polarimetric measurements from the photosphere through the upper chromosphere. SOLAR-C will also contribute to understanding the solar influence on the Sun-Earth environment through synergetic wide-field observations from ground-based facilities and other space missions.

  10. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomography expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build of a prototype I-OP-FTIR instrument and detail its characterization and testing, including radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.
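
    Tomographic reconstruction from a ring of path-integrated measurements can be illustrated with a Cimmino/SIRT-type iteration; the sketch below assumes a precomputed ray-pixel system matrix and is only a schematic of the general approach, not the reconstruction code used in this effort.

    ```python
    import numpy as np

    def cimmino_reconstruct(A, b, n_iter=200, relax=1.0):
        """Cimmino/SIRT-type reconstruction of a 2D concentration map.

        A: (n_rays, n_pixels) system matrix of ray path lengths per pixel;
        b: path-integrated concentrations retrieved from the spectra.
        Returns the flattened pixel concentrations.
        """
        x = np.zeros(A.shape[1])
        row_norm = (A * A).sum(axis=1) + 1e-12
        for _ in range(n_iter):
            residual = b - A @ x
            x += (relax / A.shape[0]) * (A.T @ (residual / row_norm))
            np.clip(x, 0.0, None, out=x)   # concentrations are non-negative
        return x
    ```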

  11. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen

    2008-04-01

    OPTRA and the University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomography expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.

  12. Chromatographic and spectroscopic identification and recognition of ammoniacal cochineal dyes and pigments

    NASA Astrophysics Data System (ADS)

    Chieli, A.; Sanyova, J.; Doherty, B.; Brunetti, B. G.; Miliani, C.

    2016-06-01

    In this work a combined chromatographic and spectroscopic approach is used to provide a diagnostic assessment of semi-synthetic ammoniacal cochineal through syntheses of its dyes and lakes according to art-historical recipes. Commercially introduced in the late nineteenth century as a dye and pigment, it was used to obtain a brilliant purplish/violet nuance that was more stable than carminic acid, although evidence of its use in artefacts and artworks of heritage importance has been scarcely documented. Through HPLC-DAD, it has been possible to identify 4-aminocarminic acid as the main component of ammoniacal cochineal, highlighting a chemical formula analogous to acid-stable carmine, a recently patented food dye. FTIR clearly distinguishes the amine group in the ammoniacal cochineal dye preparation, and TLC-SERS allows an adequate separation and spectral differentiation of its main components. Colloidal SERS has permitted spectral markers useful for discerning ammoniacal cochineal from carminic acid to be highlighted and discussed. Finally, the methods developed in this study for the identification of ammoniacal cochineal have been validated by analyzing a sample of dyed wool.

  13. Infrared spectroscopic studies of myeloid leukemia (ML-1) cells at different phases of the cell cycle

    NASA Astrophysics Data System (ADS)

    Boydston-White, Susie; Diem, Max

    1999-06-01

    Advances in infrared spectroscopic methodology permit excellent infrared spectra to be collected from objects as small as single human cells. These advances have led to an increased interest in the use of infrared spectroscopy as a medical diagnostic tool. Infrared spectra of myeloid leukemia (ML-1) cells are reported for cells derived from an asynchronous, exponentially growing culture, as well as for cells that were fractionated according to their stage within the cell division cycle. The observed results suggest that the cells' DNA is detectable by infrared spectroscopy mainly when the cell is in the S phase, during the replication of DNA. In the G1 and G2 phases, the DNA is so tightly packed in the nucleus that it appears opaque to infrared radiation. Consequently, the nucleic acid spectral contributions in the G1 and G2 phases would be mostly those of cytoplasmic RNA. These results suggest that infrared spectral changes observed earlier between normal and abnormal cells may have been due to different distributions of cells within the stages of the cell division cycle.

  14. Automated fibre optic instrumentation for the William Herschel Telescope

    NASA Astrophysics Data System (ADS)

    Parry, Ian R.; Lewis, Ian J.

    1990-07-01

    The design and operation of the automated optical-fiber positioning system used for spectroscopic observations at the Cassegrain focus of the 4.2-m William Herschel Telescope (WHT) at Observatorio del Roque de los Muchachos are described. The system is a modified version of the Autofib positioner for the AAT and employs 64 spectroscopic fibers and 8 guide fiber bundles arranged to form a 17-arcmin-diameter field. The fibers are 1-m-long polyimide-coated high-OH silica, with core diameter 260 microns and outer diameter 315 microns, and a 1.2-mm side-length microprism is cemented to the end of each fiber or (7-fiber) guide bundle. The fibers are positioned one at a time by a pick-and-place robot assembly, and a viewing head permitting simultaneous observation of the back-illuminated fiber and the object it is trying to acquire is provided. This prototype Cassegrain-focus system is being studied to aid in the development of a more accurate fiber positioner for use at the prime focus of the WHT.

  15. Rapid determination of sugar level in snack products using infrared spectroscopy.

    PubMed

    Wang, Ting; Rodriguez-Saona, Luis E

    2012-08-01

    Real-time spectroscopic methods can provide a valuable window into food manufacturing to permit optimization of production rate, quality and safety. There is a need for cutting-edge sensor technology directed at improving the efficiency, throughput and reliability of critical processes. The aim of the research was to evaluate the feasibility of infrared systems combined with chemometric analysis to develop rapid methods for determination of sugars in cereal products. Samples were ground and spectra were collected using a mid-infrared (MIR) spectrometer equipped with a triple-bounce ZnSe MIRacle attenuated total reflectance accessory or a Fourier transform near-infrared (NIR) system equipped with a diffuse reflection integrating sphere. Sugar contents were determined using a reference HPLC method. Partial least squares regression (PLSR) was used to create cross-validated calibration models. The predictability of the models was evaluated on an independent set of samples and compared with reference techniques. MIR and NIR spectra showed characteristic absorption bands for sugars and generated excellent PLSR models (sucrose: SEP < 1.7% and r > 0.96). Multivariate models accurately and precisely predicted sugar levels in snacks, allowing for rapid analysis. This simple technique allows for reliable prediction of quality parameters, and its automation enables food manufacturers to take early corrective actions that ultimately save time and money while establishing uniform quality. The U.S. snack food industry generates billions of dollars in revenue each year, and vibrational spectroscopic methods combined with pattern recognition analysis could permit optimization of the production rate, quality, and safety of many food products. This research showed that infrared spectroscopy is a powerful technique for near real-time (approximately 1 min) assessment of sugar content in various cereal products.
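
    A cross-validated PLSR calibration of the kind described can be sketched with scikit-learn; the spectra and sucrose values below are synthetic placeholders standing in for the MIR/NIR measurements and the HPLC reference data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    # Placeholders: X would be (n_samples, n_wavenumbers) absorbance spectra,
    # y the sucrose content from the reference HPLC method.
    X = rng.normal(size=(60, 500))
    y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=60)

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    sep = np.std(y - y_cv)                 # standard error of prediction
    r = np.corrcoef(y, y_cv)[0, 1]
    print(f"SEP = {sep:.3f}, r = {r:.3f}")
    ```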

  16. An Algorithm for Building an Electronic Database.

    PubMed

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details entered via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to be completed after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.

  17. Survey of Gravitationally-lensed Objects in HSC Imaging (SuGOHI). I. Automatic search for galaxy-scale strong lenses

    NASA Astrophysics Data System (ADS)

    Sonnenfeld, Alessandro; Chan, James H. H.; Shu, Yiping; More, Anupreeta; Oguri, Masamune; Suyu, Sherry H.; Wong, Kenneth C.; Lee, Chien-Hsiu; Coupon, Jean; Yonehara, Atsunori; Bolton, Adam S.; Jaelani, Anton T.; Tanaka, Masayuki; Miyazaki, Satoshi; Komiyama, Yutaka

    2018-01-01

    The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is an excellent survey for the search for strong lenses, thanks to its area, image quality, and depth. We use three different methods to look for lenses among 43000 luminous red galaxies from the Baryon Oscillation Spectroscopic Survey (BOSS) sample with photometry from the S16A internal data release of the HSC-SSP. The first method is a newly developed algorithm, named YATTALENS, which looks for arc-like features around massive galaxies and then estimates the likelihood of an object being a lens by performing a lens model fit. The second method, CHITAH, is a modeling-based algorithm originally developed to look for lensed quasars. The third method makes use of spectroscopic data to look for emission lines from objects at a different redshift from that of the main galaxy. We find 15 definite lenses, 36 highly probable lenses, and 282 possible lenses. Among the three methods, YATTALENS, which was developed specifically for this study, performs best in terms of both completeness and purity. Nevertheless, five highly probable lenses were missed by YATTALENS but found by the other two methods, indicating that the three methods are highly complementary. Based on these numbers, we expect to find ˜300 definite or probable lenses by the end of the HSC-SSP.

  18. Characterization of High Ge Content SiGe Heterostructures and Graded Alloy Layers Using Spectroscopic Ellipsometry

    NASA Technical Reports Server (NTRS)

    Heyd, A. R.; Alterovitz, S. A.; Croke, E. T.

    1995-01-01

    Si(x)Ge(1-x) heterostructures on Si substrates have been widely studied due to the maturity of Si technology. However, work on Si(x)Ge(1-x) heterostructures on Ge substrates has not received much attention. A Si(x)Ge(1-x) layer on a Si substrate is under compressive strain while Si(x)Ge(1-x) on Ge is under tensile strain; thus the critical points will behave differently. In order to accurately characterize high-Ge-content Si(x)Ge(1-x) layers, the energy shift algorithm used to calculate alloy compositions has been modified. These results have been used along with variable angle spectroscopic ellipsometry (VASE) measurements to characterize Si(x)Ge(1-x)/Ge superlattices grown on Ge substrates. The results agree closely with high-resolution x-ray diffraction measurements made on the same samples. The modified energy shift algorithm also allows the VASE analysis to be extended to characterize linearly graded layers. In this work VASE has been used to characterize graded Si(x)Ge(1-x) layers in terms of the total thickness and the start and end alloy compositions. Results are presented for a 1 micrometer Si(x)Ge(1-x) layer linearly graded in the range 0.5 ≤ x ≤ 1.0.

  19. 76 FR 23996 - North Pacific Fishery Management Council Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-29

    ... uncertainty/total catch accounting; review/approve Halibut Mortality on trawlers Exempted Fishing Permit (EFP... & Wildlife Service Report. 2. Catch Sharing Plan(CSP): Review CSP size limit algorithm. 3. BSAI Crab Draft Stock Assessment Fishery Evaluation report: Review and approve catch specifications for Norton Sound Red...

  20. UAV Control on the Basis of 3D Landmark Bearing-Only Observations.

    PubMed

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-11-27

    The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work lies in the use of a 3D RANSAC algorithm developed on the basis of landmark position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining an unbiased UAV position estimate with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
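
    The pseudo-measurement idea can be illustrated for a single bearing observation: the nonlinear bearing equation is rewritten as a measurement that is linear in the 2D position, so a standard Kalman update applies. A schematic NumPy sketch (note the plain pseudo-linear form is biased in general; the paper's modified filter is designed to remove that bias):

    ```python
    import numpy as np

    def bearing_pseudo_update(x, P, landmark, bearing, sigma):
        """Kalman update of a 2D position estimate from one bearing observation.

        theta = atan2(ly - y, lx - x) is rewritten as the pseudo-measurement
        sin(t)*x - cos(t)*y = sin(t)*lx - cos(t)*ly, which is linear in (x, y).
        """
        lx, ly = landmark
        H = np.array([[np.sin(bearing), -np.cos(bearing)]])
        z = np.sin(bearing) * lx - np.cos(bearing) * ly
        S = float(H @ P @ H.T) + sigma ** 2       # innovation variance
        K = (P @ H.T) / S                         # Kalman gain, shape (2, 1)
        x = x + (K * (z - float(H @ x))).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P
    ```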

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, David R.; Cherinka, Brian; Yan, Renbin

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted, spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  2. Cosmic voids and void lensing in the Dark Energy Survey science verification data

    DOE PAGES

    Sánchez, C.; Clampitt, J.; Kovacs, A.; ...

    2016-10-26

    Galaxies and their dark matter halos populate a complicated filamentary network around large, nearly empty regions known as cosmic voids. Cosmic voids are usually identified in spectroscopic galaxy surveys, where 3D information about the large-scale structure of the Universe is available. Although an increasing amount of photometric data is being produced, its potential for void studies is limited since photometric redshifts induce line-of-sight position errors of ~50 Mpc/h or more that can render many voids undetectable. In this paper we present a new void finder designed for photometric surveys, validate it using simulations, and apply it to the high-quality photo-z redMaGiC galaxy sample of the Dark Energy Survey Science Verification (DES-SV) data. The algorithm works by projecting galaxies into 2D slices and finding voids in the smoothed 2D galaxy density field of each slice. Fixing the line-of-sight size of the slices to be at least twice the photo-z scatter, the number of voids found in these projected slices of simulated spectroscopic and photometric galaxy catalogs agrees to within 20% for all transverse void sizes, and is indistinguishable for the largest voids of radius ~70 Mpc/h and larger. The positions, radii, and projected galaxy profiles of photometric voids also accurately match the spectroscopic void sample. Applying the algorithm to the DES-SV data in the redshift range 0.2 < z < 0.8, we identify 87 voids with comoving radii spanning the range 18-120 Mpc/h, and carry out a stacked weak lensing measurement. With a significance of 4.4σ, the lensing measurement confirms that the voids are truly underdense in the matter field and hence not a product of Poisson noise, tracer density effects or systematics in the data. It also demonstrates, for the first time in real data, the viability of void lensing studies in photometric surveys.
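
    The slicing-and-smoothing step can be sketched as follows: histogram the galaxies of one redshift slice, smooth the 2D density field, and keep deep local minima as void-center candidates. This is a schematic of the idea, assuming NumPy/SciPy and illustrative parameter values, not the DES pipeline itself.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, minimum_filter

    def find_void_candidates(xs, ys, bins=256, smooth_pix=4, density_cut=0.5):
        """Void candidates as deep local minima of a smoothed 2D density field.

        xs, ys: projected galaxy positions in one redshift slice. Returns the
        pixel indices of local minima below density_cut * mean density.
        """
        field, _, _ = np.histogram2d(xs, ys, bins=bins)
        smooth = gaussian_filter(field, smooth_pix)
        is_min = smooth == minimum_filter(smooth, size=9)
        underdense = smooth < density_cut * smooth.mean()
        return np.argwhere(is_min & underdense), smooth
    ```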

  3. SU-F-J-93: Automated Segmentation of High-Resolution 3D Whole-Brain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

    Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormality that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Whereas heterogeneity in tumor appearance and shape poses a greater challenge to segmentation in classical imaging, regions of abnormal activity were easily detected in the sMRI metabolite maps by combining the detail available in standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when sMRI+CE MRI is used compared to standard CE MRI alone. Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.

  4. Nonlinear X-Ray and Auger Spectroscopy at X-Ray Free-Electron Laser Sources

    NASA Astrophysics Data System (ADS)

    Rohringer, Nina

    2015-05-01

    X-ray free-electron lasers (XFELs) open the pathway to transferring non-linear spectroscopic techniques to the x-ray domain. A promising all-x-ray pump-probe technique is based on coherent stimulated electronic x-ray Raman scattering, which was recently demonstrated in atomic neon. By tuning the XFEL pulse to core-excited resonances, a few seed photons in the spectral tail of the XFEL pulse drive an avalanche of resonant inelastic x-ray scattering events, resulting in exponential amplification of the scattering signal by 6-7 orders of magnitude. Analysis of the line profile of the emitted radiation makes it possible to demonstrate the crossover from amplified spontaneous emission to coherent stimulated resonance scattering. In combination with statistical covariance mapping, a high-resolution spectrum of the resonant inelastic scattering process can be obtained, opening the path to coherent stimulated x-ray Raman spectroscopy. An extension of these ideas to molecules and a realistic feasibility study of stimulated electronic x-ray Raman scattering in CO will be presented. Challenges to realizing stimulated electronic x-ray Raman scattering at present-day XFEL sources will be discussed, corroborated by results of a recent experiment at the LCLS XFEL. Due to the small gain cross section in molecular targets, other nonlinear spectroscopic techniques such as nonlinear Auger spectroscopy could become a powerful alternative. Theoretical predictions of a novel pump-probe technique based on resonant nonlinear Auger spectroscopy will be discussed, and the method will be compared to stimulated x-ray Raman spectroscopy.

  5. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
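
    Working in the log-domain means the deformation is the group exponential of a stationary velocity field, typically computed by scaling and squaring. A schematic 2D NumPy/SciPy sketch of that one ingredient (an illustration of the standard construction, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def expmap(v, n_squarings=6):
        """Exponentiate a stationary 2D velocity field by scaling and squaring.

        v: (2, H, W) velocity field. Returns the displacement d of the
        diffeomorphism phi(x) = x + d(x) ~ exp(v)(x). Each squaring composes
        the map with itself: d <- d(x + d(x)) + d(x), via linear interpolation.
        """
        d = v / 2.0 ** n_squarings                  # phi_0 ~ id + v / 2^K
        grid = np.indices(v.shape[1:], dtype=float)
        for _ in range(n_squarings):
            warped = np.stack([
                map_coordinates(d[i], grid + d, order=1, mode="nearest")
                for i in range(2)
            ])
            d = warped + d
        return d
    ```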

  6. Calibration and Data Retrieval Algorithms for the NASA Langley/Ames Diode Laser Hygrometer for the NASA Trace-P Mission

    NASA Technical Reports Server (NTRS)

    Podolske, James R.; Sachse, Glen W.; Diskin, Glenn S.; Hipskino, R. Stephen (Technical Monitor)

    2002-01-01

    This paper describes the procedures and algorithms for the laboratory calibration and field data retrieval of the NASA Langley/Ames Diode Laser Hygrometer as implemented during the NASA Trace-P mission from February to April 2000. The calibration is based on a NIST-traceable dewpoint hygrometer using relatively high humidity and a short pathlength. Two water lines of widely different strengths are used to increase the dynamic range of the instrument over the course of a flight. The laboratory results are incorporated into a numerical model of the second-harmonic spectrum for each of the two spectral window regions using spectroscopic parameters from the HITRAN database and other sources, allowing water vapor retrieval at upper-tropospheric and lower-stratospheric temperatures and humidity levels. The data retrieval algorithm is simple, numerically stable, and accurate. A comparison with other water vapor instruments on board the NASA DC-8 and ER-2 aircraft is presented.

  7. Extensions to Polychain: Nonseparability Testing and Factoring Algorithm.

    DTIC Science & Technology

    1985-12-02

    Científico e Tecnológico - CNPq, Brazil. Reproduction in whole or in part is permitted for any purpose of the United States Government. A...supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq, Brazil, and the Office of Naval Research, under contract N00014-85-K

  8. Spectroscopic measurements of atmospheric constituents and pollutants by in situ and remote techniques from the ground and in flight

    NASA Astrophysics Data System (ADS)

    Camy-Peyret, Claude; Payan, Sébastien; Jeseck, Pascal; Té, Yao

    2001-09-01

    Infrared spectroscopy is a powerful tool for precise measurements of atmospheric trace species concentrations through the use of characteristic spectral signatures of the different molecular species and their associated vibration-rotation bands in the mid- or near-infrared. Different methods based on quantitative spectroscopy permit tropospheric or stratospheric measurements: in situ long-path absorption, and atmospheric absorption/emission measured by Fourier transform spectroscopy with high-spectral-resolution instruments on the ground or on airborne, balloon-borne or satellite platforms.

  9. Detection of surface impurity phases in high-Tc superconductors using thermally stimulated luminescence

    DOEpatents

    Cooke, D. Wayne; Jahan, Muhammad S.

    1989-01-01

    Detection of surface impurity phases in high-temperature superconducting materials. Thermally stimulated luminescence has been found to occur in insulating impurity phases which commonly exist in high-temperature superconducting materials. The present invention is sensitive to impurity phases occurring at a level of less than 1%, with a probe depth of about 1 μm, which is the region of interest for many superconductivity applications. Spectroscopic and spatial resolution of the emitted light from a sample permits identification and location of the impurity species. Absence of luminescence, and thus of insulating phases, can be correlated with low values of rf surface resistance.

  10. Laser spectroscopic visualization of hydrogen bond motions in liquid water

    NASA Astrophysics Data System (ADS)

    Bratos, S.; Leicknam, J.-Cl.; Pommeret, S.; Gallot, G.

    2004-12-01

    Ultrafast pump-probe experiments are described that permit a visualization of molecular motions in dilute HDO/D2O solutions. The experiments were realized in the mid-infrared spectral region with a time resolution of 150 fs. They were interpreted by a careful theoretical analysis based on the correlation function approach of statistical mechanics. Combining experiment and theory, stretching motions of the OH⋯O bonds as well as HDO rotations were 'filmed' in real time. It was found that molecular rotations are the principal agent of hydrogen bond breaking and making in water. Recent literature covering the subject, including molecular dynamics simulations, is reviewed in detail.

  11. Enhancing nuclear quadrupole resonance (NQR) signature detection leveraging interference suppression algorithms

    NASA Astrophysics Data System (ADS)

    DeBardelaben, James A.; Miller, Jeremy K.; Myrick, Wilbur L.; Miller, Joel B.; Gilbreath, G. Charmaine; Bajramaj, Blerta

    2012-06-01

    Nuclear quadrupole resonance (NQR) is a radio frequency (RF) magnetic spectroscopic technique that has been shown to detect and identify a wide range of explosive materials containing quadrupolar nuclei. The NQR response signal provides a unique signature of the material of interest. The signal is, however, very weak and can be masked by non-stationary RF interference (RFI) and thermal noise, limiting detection distance. In this paper, we investigate the bounds on the NQR detection range for ammonium nitrate. We leverage a low-cost RFI data acquisition system composed of inexpensive B-field sensing and commercial-off-the-shelf (COTS) software-defined radios (SDR). Using collected data as RFI reference signals, we apply adaptive filtering algorithms to mitigate RFI and enable NQR detection techniques to approach theoretical range bounds in tactical environments.
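
    Adaptive RFI cancellation with a reference sensor is commonly done with an LMS filter; a minimal sketch, assuming a primary coil channel and one B-field reference channel (the tap count and step size are illustrative, and this is not the authors' code):

    ```python
    import numpy as np

    def lms_cancel(primary, reference, n_taps=32, mu=1e-3):
        """LMS adaptive canceller: predict the RFI in the NQR coil signal
        from a reference B-field sensor and subtract the prediction."""
        w = np.zeros(n_taps)
        cleaned = np.zeros_like(primary, dtype=float)
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]   # most recent reference samples
            e = primary[n] - w @ x              # error = RFI-suppressed output
            w += 2.0 * mu * e * x               # stochastic-gradient weight update
            cleaned[n] = e
        return cleaned
    ```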

  12. Atmospheric constituent density profiles from full disk solar occultation experiments

    NASA Technical Reports Server (NTRS)

    Lumpe, J. D.; Chang, C. S.; Strickland, D. J.

    1991-01-01

    Mathematical methods are described which permit the derivation of number-density profiles of atmospheric constituents from solar occultation measurements. The algorithm is first applied to measurements corresponding to an arbitrary solar intensity distribution to calculate the normalized absorption profile. The application of a Fourier transform to the integral equation yields a precise expression for the corresponding number density, and the solution is employed with the data given in the form of Laguerre polynomials. The algorithm is used to calculate results for the case of a uniform distribution of solar intensity, and the results demonstrate the convergence properties of the method. The algorithm can effectively reproduce representative model density profiles with constant and altitude-dependent scale heights.

  13. A Taylor weak-statement algorithm for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.

  14. Meteorological correction of optical beam refraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukin, V.P.; Melamud, A.E.; Mironov, V.L.

    1986-02-01

    At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of accounting for and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm is compared to regression equations obtained by processing an experimental database. It is shown that, within admissible accuracy limits, the refraction correction algorithm permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.

  15. Algorithm for measuring the internal quantum efficiency of individual injection lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommers, H.S. Jr.

    1978-05-01

    A new algorithm permits determination of the internal quantum efficiency η_i of individual lasers. Above threshold, the current is partitioned into a "coherent" component driving the lasing modes and the "noncoherent" remainder. Below threshold the current is known to grow as exp(qV/n_0 kT); the algorithm proposes that extrapolation of this equation into the lasing region measures the noncoherent remainder, enabling deduction of the coherent component and of its current derivative η_i. Measurements on five (AlGa)As double-heterojunction lasers cut from one wafer demonstrate the power of the new method. Comparison with band calculations by Stern shows that n_0 originates in carrier degeneracy.

  16. A modified CoRoT detrend algorithm and the discovery of a new planetary companion

    NASA Astrophysics Data System (ADS)

    Boufleur, Rodrigo C.; Emilio, Marcelo; Janot-Pacheco, Eduardo; Andrade, Laerte; Ferraz-Mello, Sylvio; do Nascimento, José-Dias, Jr.; de La Reza, Ramiro

    2018-01-01

    We present MCDA, a modification of the COnvection ROtation and planetary Transits (CoRoT) detrend algorithm (CDA), suitable for detrending chromatic light curves. By means of robust statistics and better handling of short-term variability, the implementation decreases systematic light-curve variations and improves the detection of exoplanets when compared with the original algorithm. All CoRoT chromatic light curves (a total of 65,655) were analysed with our algorithm. Dozens of new transit candidates and all previously known CoRoT exoplanets were rediscovered in those light curves using a box-fitting algorithm. For three of the new cases, spectroscopic measurements of the candidates' host stars were retrieved from the ESO Science Archive Facility and used to calculate stellar parameters and, in the best cases, radial velocities. In addition to our improved detrend technique, we announce the discovery of a planet that orbits a 0.79_{-0.09}^{+0.08} R⊙ star with a period of 6.71837 ± 0.00001 d and has a radius of 0.57_{-0.05}^{+0.06} R_J and a mass of 0.15 ± 0.10 M_J. We also present the analysis of two cases in which the parameters found suggest the existence of possible planetary companions.
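
    A box-fitting transit search of the kind used here can be illustrated with astropy's BoxLeastSquares on a synthetic detrended light curve; the cadence, period, and depth below are placeholders, and this is not the authors' MCDA code.

    ```python
    import numpy as np
    from astropy.timeseries import BoxLeastSquares

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 60.0, 512.0 / 86400.0)      # assumed 512 s cadence, in days
    flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
    flux[(t % 6.71837) < 0.1] -= 5e-3              # injected box-shaped transit

    bls = BoxLeastSquares(t, flux)
    result = bls.autopower(0.1)                    # trial transit duration: 0.1 d
    print("best period:", result.period[np.argmax(result.power)])
    ```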

  17. The Large Area KX Quasar Survey: Photometric Redshift Selection and the Complete Quasar Catalogue

    NASA Astrophysics Data System (ADS)

    Maddox, Natasha; Hewett, P. C.; Peroux, C.

    2013-01-01

    We have completed a large area, ˜600 square degree, spectroscopic survey for luminous quasars flux-limited in the K-band. The survey utilises the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS) in regions of sky within the Sloan Digital Sky Survey (SDSS) footprint. We exploit the K-band excess (KX) of all quasars with respect to Galactic stars in combination with a custom-built photometric redshift/classification scheme to identify quasar candidates for spectroscopic follow-up observations. The survey is complete to K≤16.6, and includes >3200 known quasars from the SDSS, with more than 250 additional confirmed quasars from the KX-selection which eluded the SDSS quasar selection algorithm. The selection is >95% complete with respect to known SDSS quasars and >95% efficient, largely independent of redshift and magnitude. The KX-selected quasars will provide new constraints on the fraction of luminous quasars reddened by dust with E(B-V)≤0.5 mag. Several projects utilizing the KX quasars are ongoing, including a spectroscopic campaign searching for dusty quasar intervening absorption systems. The KX survey is a well-defined sample of quasars useful for investigating the properties of luminous quasars with intermediate levels of dust extinction either within their host galaxies or due to intervening absorption systems.

  18. Reproducibility study of whole-brain 1H spectroscopic imaging with automated quantification.

    PubMed

    Gu, Meng; Kim, Dong-Hyun; Mayer, Dirk; Sullivan, Edith V; Pfefferbaum, Adolf; Spielman, Daniel M

    2008-09-01

    A reproducibility study of proton MR spectroscopic imaging (1H-MRSI) of the human brain was conducted to evaluate the reliability of an automated 3D in vivo spectroscopic imaging acquisition and the associated quantification algorithm. A PRESS-based pulse sequence was implemented using dual-band spectral-spatial RF pulses designed to fully excite the singlet resonances of choline (Cho), creatine (Cre), and N-acetyl aspartate (NAA) while simultaneously suppressing water and lipids; 1% of the water signal was left to be used as a reference signal for robust data processing, and additional lipid suppression was obtained using adiabatic inversion recovery. Spiral k-space trajectories were used for fast spectral and spatial encoding, yielding high-quality spectra from 1 cc voxels throughout the brain with a 13-min acquisition time. Data were acquired with an 8-channel phased-array coil, and optimal signal-to-noise ratio (SNR) for the combined signals was achieved using a weighting based on the residual water signal. Automated quantification of the spectrum of each voxel was performed using LCModel. The complete study consisted of eight healthy adult subjects to assess intersubject variations and two subjects scanned six times each to assess intrasubject variations. The results demonstrate that reproducible whole-brain 1H-MRSI data can be robustly obtained with the proposed methods.

  19. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
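
    The scheme can be illustrated on a discretized scalar system: start from an arbitrary positive semi-definite value function and repeat the Bellman backup until convergence. A schematic NumPy sketch with an assumed plant and cost (an illustrative stand-in, not the paper's examples):

    ```python
    import numpy as np

    # Assumed system x+ = 0.9*sin(x) + u with utility U(x, u) = x^2 + u^2.
    xs = np.linspace(-2.0, 2.0, 201)
    us = np.linspace(-1.0, 1.0, 41)
    X, U = np.meshgrid(xs, us, indexing="ij")
    Xn = np.clip(0.9 * np.sin(X) + U, xs[0], xs[-1])   # next state, kept on grid
    cost = X ** 2 + U ** 2

    V = np.abs(xs)        # arbitrary positive semi-definite initialization
    for _ in range(500):
        Q = cost + np.interp(Xn, xs, V)    # Bellman backup, V interpolated
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-9:
            break                          # value function has converged
        V = V_new
    policy = us[Q.argmin(axis=1)]          # greedy control law from last backup
    ```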

  20. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportional fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate under lower-order bivariate probability constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
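
    For a single 2D table, the IPF iteration is a few lines: alternately rescale rows and columns until the imposed marginals are matched. A minimal NumPy sketch of that kernel (the article's sparse, higher-dimensional machinery is omitted):

    ```python
    import numpy as np

    def ipf(p0, row_target, col_target, n_iter=100, tol=1e-10):
        """Fit a 2D probability table to imposed row/column marginals."""
        p = p0.copy()
        for _ in range(n_iter):
            p *= (row_target / p.sum(axis=1))[:, None]   # match row marginals
            p *= (col_target / p.sum(axis=0))[None, :]   # match column marginals
            if np.allclose(p.sum(axis=1), row_target, atol=tol):
                break
        return p

    # Example: start from a uniform 3x3 table and impose bivariate marginals.
    p = ipf(np.full((3, 3), 1.0 / 9.0),
            np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.3, 0.1]))
    ```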

  1. Generation and assessment of turntable SAR data for the support of ATR development

    NASA Astrophysics Data System (ADS)

    Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby

    1998-10-01

    Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two- dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image domain resampling, and a computationally-intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.

  2. emcee: The MCMC Hammer

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Hogg, David W.; Lang, Dustin; Goodman, Jonathan

    2013-03-01

    We introduce a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods, and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters, compared to ~N^2 for a traditional algorithm in an N-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the GNU General Public License v2.
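
    Basic usage is compact; a minimal example, assuming the emcee 3 API and a toy Gaussian log-probability:

    ```python
    import numpy as np
    import emcee

    def log_prob(theta):
        """Toy target: an isotropic Gaussian in 3 dimensions."""
        return -0.5 * np.sum(theta ** 2)

    ndim, nwalkers = 3, 32
    p0 = np.random.randn(nwalkers, ndim)          # initial walker positions

    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    samples = sampler.get_chain(discard=500, flat=True)
    print(samples.mean(axis=0), samples.std(axis=0))
    ```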

  3. Algorithm to determine the percolation largest component in interconnected networks.

    PubMed

    Schneider, Christian M; Araújo, Nuno A M; Herrmann, Hans J

    2013-04-01

    Interconnected networks have been shown to be much more vulnerable to random and targeted failures than isolated ones, raising several interesting questions regarding the identification and mitigation of their risk. The paradigm to address these questions is the percolation model, where the resilience of the system is quantified by the dependence of the size of the largest cluster on the number of failures. Numerically, the major challenge is the identification of this cluster and the calculation of its size. Here, we propose an efficient algorithm to tackle this problem. We show that the algorithm scales as O(N log N), where N is the number of nodes in the network, a significant improvement compared to O(N^2) for a greedy algorithm, which permits studying much larger networks. Our new strategy can be applied to any network topology and distribution of interdependencies, as well as any sequence of failures.
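
    The standard way to reach near O(N log N) scaling here is to process the failure sequence in reverse with a union-find structure, so each restoration only merges a few components. A schematic Python sketch of that general idea (not the authors' implementation):

    ```python
    def largest_cluster_history(n, adjacency, failure_order):
        """Largest-cluster size curve, computed by restoring nodes in reverse
        failure order with union-find instead of recomputing components.

        adjacency: dict mapping each node to an iterable of its neighbors;
        failure_order: the order in which all n nodes fail.
        """
        parent = list(range(n))
        size = [0] * n                          # 0 marks a still-failed node

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path halving
                a = parent[a]
            return a

        history, largest = [], 0
        for node in reversed(failure_order):
            size[node] = 1                      # restore the node
            for nb in adjacency.get(node, ()):
                ra, rb = find(node), find(nb)
                if size[rb] and ra != rb:       # neighbor alive, merge clusters
                    if size[ra] < size[rb]:
                        ra, rb = rb, ra
                    parent[rb] = ra
                    size[ra] += size[rb]
            largest = max(largest, size[find(node)])
            history.append(largest)
        return history[::-1]                    # read as size vs. number of failures
    ```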

  4. A Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2001-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, and it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, with the FDTD algorithm chosen as a convenient reference for comparison.

  5. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A Test Suite for 3D Radiative Hydrodynamics Simulations of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Durisen, R. H.; Nordlund, A.; Lord, J.

    2006-12-01

    Radiative hydrodynamics simulations of protoplanetary disks with different treatments of radiative cooling demonstrate disparate evolutions (see Durisen et al. 2006, PPV chapter). Some of these differences include the effects of convection and metallicity on disk cooling and the susceptibility of the disk to fragmentation. Because a principal reason for these differences may be the treatment of radiative cooling, the accuracy of cooling algorithms must be evaluated. In this paper we describe a radiative transport test suite, and we challenge all researchers who use radiative hydrodynamics to study protoplanetary disk evolution to evaluate their algorithms with these tests. The test suite can be used to demonstrate an algorithm's accuracy in transporting the correct flux through an atmosphere and in reaching the correct temperature structure, to test the algorithm's dependence on resolution, and to determine whether the algorithm permits or inhibits convection when expected. In addition, we use this test suite to demonstrate the accuracy of a newly developed radiative cooling algorithm that combines vertical rays with flux-limited diffusion. This research was supported in part by a Graduate Student Researchers Program fellowship.

  7. The spectroscopic indistinguishability of red giant branch and red clump stars

    NASA Astrophysics Data System (ADS)

    Masseron, T.; Hawkins, K.

    2017-01-01

    Context. Stellar spectroscopy provides useful information on the physical properties of stars, such as effective temperature, metallicity and surface gravity. However, those photospheric characteristics are often hampered by systematic uncertainties. The joint spectroscopic-asteroseismic project (APOGEE+Kepler, aka APOKASC) of field red giants has revealed a puzzling offset between the surface gravities (log g) determined spectroscopically and those determined using asteroseismology, which is largely dependent on the stellar evolutionary status. Aims: Therefore, in this letter, we aim to shed light on the spectroscopic source of the offset. Methods: We used the APOKASC sample to analyse the dependence of the log g discrepancy on stellar mass and stellar evolutionary status. We discuss and study the impact of some neglected abundances on the spectral analysis of red giants, such as He and the carbon isotopic ratio. Results: We first show that, for stars at the bottom of the red giant branch, where the first dredge-up has occurred, the discrepancy between spectroscopic log g and asteroseismic log g depends on stellar mass. This seems to indicate that the log g discrepancy is related to CN cycling. Among the CN-cycled elements, we demonstrate that the carbon isotopic ratio (12C/13C) has the largest impact on the stellar spectrum. In parallel, we observe that this log g discrepancy shows a trend similar to the 12C/13C ratios expected from stellar evolution theory. Although we did not detect a direct spectroscopic signature of 13C, other corroborating evidence suggests that the discrepancy in log g is tightly correlated to the production of 13C in red giants. Moreover, by running a data-driven algorithm (the Cannon) on a synthetic grid trained on the APOGEE data, we evaluate more quantitatively the impact of various 12C/13C ratios. Conclusions: While we have demonstrated that 13C indeed impacts all parameters, the size of the impact is smaller than the observed offset in log g. If further tests confirm that 13C is not the main element responsible for the log g problem, the number of spectroscopic effects remaining to be investigated is now relatively limited (if any).

  8. Detection of alpha radiation in a beta radiation field

    DOEpatents

    Mohagheghi, Amir H.; Reese, Robert P.

    2001-01-01

    An apparatus and method for detecting alpha particles in the presence of high activities of beta particles utilizing an alpha spectrometer. The apparatus of the present invention applies a magnetic field around the sample in an alpha spectrometer to deflect the beta particles before they reach the detector, thus permitting detection of low concentrations of alpha particles. In the method of the invention, the strength of the magnetic field required to adequately deflect the beta particles and permit alpha particle detection is given by an algorithm that controls the field strength as a function of the sample's beta energy and the sample-to-detector distance.
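
    The patent does not disclose the algorithm itself; the sketch below only illustrates the underlying physics with textbook relations, taking the relativistic momentum of a beta particle of kinetic energy Ek and requiring its gyroradius to stay below half the sample-to-detector distance (the factor-of-two margin is an assumption).

        import math

        ME_C2 = 0.511e6        # electron rest energy, eV
        C = 299792458.0        # speed of light, m/s
        E = 1.602176634e-19    # elementary charge, C

        def required_field(beta_kinetic_ev, sample_detector_m):
            # relativistic momentum: (pc)^2 = Ek^2 + 2*Ek*me*c^2
            pc_ev = math.sqrt(beta_kinetic_ev**2 + 2.0 * beta_kinetic_ev * ME_C2)
            p = pc_ev * E / C                  # momentum, kg m/s
            r_max = sample_detector_m / 2.0    # keep gyroradius below half the gap
            return p / (E * r_max)             # field strength, tesla

        print(required_field(2.0e6, 0.05))     # 2 MeV betas, 5 cm gap: ~0.33 T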

  9. Planning the FUSE Mission Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
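
    SOVA's fuzzy-inference attainability model is not reproduced here; as a rough sketch of the A*-style assembly only, the Python below builds a fixed-length task sequence using the negated accumulated value as path cost and an optimistic estimate of the remaining value as the heuristic. The value() and attainable() callables are assumed placeholders.

        import heapq
        from itertools import count

        def a_star_schedule(tasks, value, attainable, horizon):
            tie = count()                      # breaks ties between equal costs
            best = {(): 0.0}
            frontier = [(0.0, next(tie), ())]
            while frontier:
                f, _, seq = heapq.heappop(frontier)
                if len(seq) == horizon:
                    return seq                 # highest-value admissible schedule
                for t in tasks:
                    if t in seq or not attainable(seq, t):
                        continue
                    nxt = seq + (t,)
                    g = best[seq] - value(t)   # negate values: heapq is a min-heap
                    if g < best.get(nxt, float("inf")):
                        best[nxt] = g
                        rest = sorted((value(r) for r in tasks if r not in nxt),
                                      reverse=True)
                        h = -sum(rest[: horizon - len(nxt)])  # optimistic remainder
                        heapq.heappush(frontier, (g + h, next(tie), nxt))
            return None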

  10. Spectroscopic diagnosis of laryngeal carcinoma using near-infrared Raman spectroscopy and random recursive partitioning ensemble techniques.

    PubMed

    Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei

    2009-06-01

    In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy combined with an ensemble recursive partitioning algorithm based on random forests for distinguishing cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for identifying laryngeal malignancy. The random forests technique also provided variable-importance measures that facilitate correlating significant Raman spectral features with malignant transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for rapid diagnosis and detection of malignant tumors in the larynx.
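
    A minimal sketch of this kind of analysis, assuming spectra resampled onto a common 800-1800 cm(-1) grid; the arrays below are random placeholders standing in for the 50 measured spectra.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 500))        # placeholder spectra (one per row)
        y = np.array([0] * 20 + [1] * 30)     # 20 normal, 30 malignant

        forest = RandomForestClassifier(n_estimators=500, random_state=0)
        y_pred = cross_val_predict(forest, X, y, cv=5)
        sensitivity = np.mean(y_pred[y == 1] == 1)
        specificity = np.mean(y_pred[y == 0] == 0)

        # variable importance: which Raman bands drive the classification
        forest.fit(X, y)
        top_bands = np.argsort(forest.feature_importances_)[::-1][:10]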

  11. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of the particles with the detector. Pile-up effects can lead to severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, to decrease exposure times in medical imaging applications, it is important to retain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits a bi-exponential model to the experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The results demonstrate the effectiveness of the proposed method as a pile-up processing scheme for spectroscopic and medical radiation detection applications.
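
    The successive-integration idea can be sketched for a generic two-exponential model y(t) = a e^(pt) + b e^(qt): since y'' = (p+q)y' - pq y, integrating twice gives y = (p+q)S - pq SS + Ct + D with S the integral of y and SS the integral of S, so one linear least-squares solve yields p and q without iteration. The paper's pulse model and noise handling are not reproduced here.

        import numpy as np

        def fit_biexponential(t, y):
            # cumulative trapezoid integrals: S = int y dt, SS = int S dt
            S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
            SS = np.concatenate(([0.0], np.cumsum(0.5 * (S[1:] + S[:-1]) * np.diff(t))))
            M = np.column_stack([SS, S, t, np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(M, y, rcond=None)
            pq, p_plus_q = -coef[0], coef[1]
            p, q = np.roots([1.0, -p_plus_q, pq])  # may turn complex for noisy data
            # amplitudes by a second linear least-squares fit
            E = np.column_stack([np.exp(p * t), np.exp(q * t)])
            (a, b), *_ = np.linalg.lstsq(E, y, rcond=None)
            return a, b, p, q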

  12. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision: approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period); the cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
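
    For reference, a sketch of the conventional three-parameter fit that IG fitting generalizes, using the standard apparent-T1 model S(TI) = A - B exp(-TI/T1*) and Look-Locker correction T1 = T1*(B/A - 1); the IG variant with its 2 + n_groups parameters is not reproduced, and the starting values are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def molli_model(ti, A, B, t1star):
            return A - B * np.exp(-ti / t1star)

        def fit_t1(ti, signal):
            # rough starting guess (units of ms assumed for ti)
            p0 = (signal.max(), 2.0 * signal.max(), 1000.0)
            (A, B, t1star), _ = curve_fit(molli_model, ti, signal, p0=p0, maxfev=5000)
            return t1star * (B / A - 1.0)      # Look-Locker correction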

  13. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  14. Techniques for the Analysis of Spectral and Orbital Congestion in Space Systems.

    DTIC Science & Technology

    1984-03-01

    Appendix 29 gives the appropriate equations for the two cases, and provides algorithms for polarization isolation, topocentric and geocentric ... The PDP form is maintained by MITRE Dept. D97, which provides services to run the program when staffing permits. NASA Lewis has used the results in a ...

  15. Generating Hierarchical Document Indices from Common Denominators in Large Document Collections.

    ERIC Educational Resources Information Center

    O'Kane, Kevin C.

    1996-01-01

    Describes an algorithm for computer generation of hierarchical indexes for document collections. The resulting index, when presented with a graphical interface, provides users with a view of a document collection that permits general browsing and informal search activities via an access method that requires no keyboard entry or prior knowledge of…

  16. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  17. UAV Control on the Basis of 3D Landmark Bearing-Only Observations

    PubMed Central

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-01-01

    The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the use of a 3D RANSAC algorithm built on prediction of the landmarks' positions with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach yields unbiased UAV position estimates with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations. PMID:26633394
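
    The modified filter of the paper is not reproduced here; the sketch below is only a generic linear Kalman predict/update step in which a position fix derived from landmark bearings enters as a pseudo-measurement z with covariance R.

        import numpy as np

        def kalman_step(x, P, F, Q, z, H, R):
            # predict state and covariance forward one step
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the bearing-derived pseudo-measurement
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P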

  18. Haemoglobinopathy diagnosis: algorithms, lessons and pitfalls.

    PubMed

    Bain, Barbara J

    2011-09-01

    Diagnosis of haemoglobinopathies, including thalassaemias, can result from either a clinical suspicion of a disorder of globin chain synthesis or from follow-up of an abnormality detected during screening. Screening may be carried out as part of a well defined screening programme or be an ad hoc or opportunistic test. Screening may be preoperative, neonatal, antenatal, preconceptual, premarriage or targeted at specific groups perceived to be at risk. Screening in the setting of haemoglobinopathies may be directed at optimising management of a disorder by early diagnosis, permitting informed reproductive choice or preventing a serious disorder by offering termination of pregnancy. Diagnostic methods and algorithms will differ according to the setting. As the primary test, high performance liquid chromatography is increasingly used and haemoglobin electrophoresis less so with isoelectric focussing being largely confined to screening programmes and referral centres, particularly in newborns. Capillary electrophoresis is being increasingly used. All these methods permit only a presumptive diagnosis with definitive diagnosis requiring either DNA analysis or protein analysis, for example by tandem mass spectrometry. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. SPIDERS: selection of spectroscopic targets using AGN candidates detected in all-sky X-ray surveys

    NASA Astrophysics Data System (ADS)

    Dwelly, T.; Salvato, M.; Merloni, A.; Brusa, M.; Buchner, J.; Anderson, S. F.; Boller, Th.; Brandt, W. N.; Budavári, T.; Clerc, N.; Coffey, D.; Del Moro, A.; Georgakakis, A.; Green, P. J.; Jin, C.; Menzel, M.-L.; Myers, A. D.; Nandra, K.; Nichol, R. C.; Ridl, J.; Schwope, A. D.; Simm, T.

    2017-07-01

    SPIDERS (SPectroscopic IDentification of eROSITA Sources) is a Sloan Digital Sky Survey IV (SDSS-IV) survey running in parallel to the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) cosmology project. SPIDERS will obtain optical spectroscopy for large numbers of X-ray-selected active galactic nuclei (AGN) and galaxy cluster members detected in wide-area eROSITA, XMM-Newton and ROSAT surveys. We describe the methods used to choose spectroscopic targets for two sub-programmes of SPIDERS: X-ray-selected AGN candidates detected in the ROSAT All Sky and the XMM-Newton Slew surveys. We have exploited a Bayesian cross-matching algorithm, guided by priors based on mid-IR colour-magnitude information from the Wide-field Infrared Survey Explorer survey, to select the most probable optical counterpart to each X-ray detection. We empirically demonstrate the high fidelity of our counterpart selection method using a reference sample of bright well-localized X-ray sources collated from XMM-Newton, Chandra and Swift-XRT serendipitous catalogues, and also by examining blank-sky locations. We describe the down-selection steps which resulted in the final set of SPIDERS-AGN targets put forward for spectroscopy within the eBOSS/TDSS/SPIDERS survey, and present catalogues of these targets. We also present catalogues of ˜12 000 ROSAT and ˜1500 XMM-Newton Slew survey sources that have existing optical spectroscopy from SDSS-DR12, including the results of our visual inspections. On completion of the SPIDERS programme, we expect to have collected homogeneous spectroscopic redshift information over a footprint of ˜7500 deg2 for >85 per cent of the ROSAT and XMM-Newton Slew survey sources having optical counterparts in the magnitude range 17 < r < 22.5, producing a large and highly complete sample of bright X-ray-selected AGN suitable for statistical studies of AGN evolution and clustering.

  20. Modeling and Observations of Massive Binaries with the B[e] Phenomenon

    NASA Astrophysics Data System (ADS)

    Lobel, A.; Martayan, C.; Mehner, A.; Groh, J. H.

    2017-02-01

    We report a long-term high-resolution spectroscopic monitoring program of LBVs and candidate LBVs with Mercator-HERMES. Based on 7 years of data, we recently showed that the supergiant MWC 314 is a (Galactic) semi-detached eccentric binary with stationary permitted and forbidden emission lines in the optical and near-IR region. MWC 314 is a luminous and massive probable LBV star showing strongly orbitally-modulated wind variability. We observe discrete absorption components in P Cyg He I lines signaling large-scale wind structures. In 2014, XMM-Newton observed X-rays indicating a strong wind-wind collision in the close binary system (a ≃ 1 AU). A VLT-NACO imaging survey recently revealed that MWC 314 is a hierarchical triple system. We present a 3-D non-LTE radiative transfer model of the extended asymmetric wind structure around the primary B0 supergiant for modeling the orbital variability of P Cyg absorption (v∞ ˜ 1200 km s-1) in He I lines. An analysis of the HERMES monitoring spectra of the Galactic LBV star MWC 930, however, does not show clear indications of a spectroscopic binary. The detailed long-term spectroscopic variability of this massive B[e] star is very similar to the spectroscopic variability of the prototypical blue hypergiant S Dor in the LMC. We observe prominent P Cyg line shapes in MWC 930 that temporarily transform into split absorption-line cores during variability phases of its S Dor cycle over the past decade, with a brightening in V of ˜1.2 mag. The line-splitting phenomenon is very similar to the split metal line cores observed in the pulsating Yellow Hypergiants ρ Cas (F-K Ia+) and HR 8752 (A-K Ia+) with [Ca II] and [N II] emission lines. We propose that the line-core splitting in MWC 930 is due to optically thick central line emission produced in the inner ionized wind region becoming mechanically shock-excited with the increase of R* and decrease of Teff of the LBV.

  1. Utilization of Solar Dynamics Observatory space weather digital image data for comparative analysis with application to Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Shekoyan, V.; Dehipawala, S.; Liu, Ernest; Tulsee, Vivek; Armendariz, R.; Tremberger, G.; Holden, T.; Marchese, P.; Cheung, T.

    2012-10-01

    Digital solar image data are available to users with access to standard, mass-market software. Many scientific projects utilize the Flexible Image Transport System (FITS) format, which requires specialized software typically used in astrophysical research. Data in the FITS format include photometric and spatial calibration information, which may not be useful to researchers working with self-calibrated, comparative approaches. This project examines the advantages of using mass-market software with readily downloadable image data from the Solar Dynamics Observatory for comparative analysis over the use of specialized software capable of reading data in the FITS format. Comparative analyses of brightness statistics that describe the solar disk in the study of magnetic energy, using algorithms included in mass-market software, have been shown to give results similar to analyses using FITS data. The entanglement of magnetic energy associated with solar eruptions, as well as the development of such eruptions, has been characterized successfully using mass-market software. The proposed algorithm would help to establish a publicly accessible computing network that could assist in exploratory studies of all FITS data. Advances in computer, cell phone and tablet technology could readily incorporate such an approach for the enhancement of high school and first-year college space weather education on a global scale. Application to ground-based data such as that contained in the Baryon Oscillation Spectroscopic Survey is discussed.

  2. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are accumulating at an unprecedented rate. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic, mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. It is shown that the recognition ratio of the nearest neighbor (NN) method is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in mass spectral data processing. In conclusion, the results in this work are helpful for the study of galaxy and quasar spectra classification.
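
    As an illustration of how little machinery the NN benchmark needs, the sketch below assumes flux vectors resampled onto a common wavelength grid; the arrays are random placeholders.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 1000))           # placeholder flux vectors
        y = rng.integers(0, 2, size=200)           # 0 = galaxy, 1 = quasar

        knn = KNeighborsClassifier(n_neighbors=1)  # no training phase to speak of
        print(cross_val_score(knn, X, y, cv=5).mean())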

  3. Detection of cervical intraepithelial neoplasias and cancers in cervical tissue by in vivo light scattering

    PubMed Central

    Mourant, Judith R.; Bocklage, Thérese J.; Powers, Tamara M.; Greene, Heather M.; Dorin, Maxine H.; Waxman, Alan G.; Zsemlye, Meggan M.; Smith, Harriet O.

    2009-01-01

    Objective To examine the utility of in vivo elastic light scattering measurements to identify cervical intraepithelial neoplasias (CIN) 2/3 and cancers in women undergoing colposcopy and to determine the effects of patient characteristics such as menstrual status on the elastic light scattering spectroscopic measurements. Materials and Methods A fiber optic probe was used to measure light transport in the cervical epithelium of patients undergoing colposcopy. Spectroscopic results from 151 patients were compared with histopathology of the measured and biopsied sites. A method of classifying the measured sites into two clinically relevant categories was developed and tested using five-fold cross-validation. Results Statistically significant effects by age at diagnosis, menopausal status, timing of the menstrual cycle, and oral contraceptive use were identified, and adjustments based upon these measurements were incorporated in the classification algorithm. A sensitivity of 77±5% and a specificity of 62±2% were obtained for separating CIN 2/3 and cancer from other pathologies and normal tissue. Conclusions The effects of both menstrual status and age should be taken into account in the algorithm for classifying tissue sites based on elastic light scattering spectroscopy. When this is done, elastic light scattering spectroscopy shows good potential for real-time diagnosis of cervical tissue at colposcopy. Guiding biopsy location is one potential near-term clinical application area, while facilitating "see and treat" protocols is a longer-term goal. Improvements in accuracy are essential. PMID:20694193

  4. A globally well-posed finite element algorithm for aerodynamics applications

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.; Baker, A. J.

    1991-01-01

    A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.

  5. Gap filling of 3-D microvascular networks by tensor voting.

    PubMed

    Risser, L; Plouraboue, F; Descombes, X

    2008-05-01

    We present a new algorithm which merges discontinuities in 3-D images of tubular structures presenting undesirable gaps. The proposed method is mainly intended for large 3-D images of microvascular networks. In order to recover the real network topology, we need to fill the gaps between the closest discontinuous vessels. The algorithm presented in this paper aims at achieving this goal. It is based on skeletonization of the segmented network followed by a tensor voting method, and it permits merging the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained using synchrotron tomography imaging at the European Synchrotron Radiation Facility. These images exhibit samples of intracortical networks. Representative results are illustrated.

  6. Carbon monoxide mixing ratio inference from gas filter radiometer data

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.

    1983-01-01

    A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.
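
    The essence of the retrieval, with signal names and series order as assumptions, is an ordinary least-squares fit of the mixing ratio to a truncated power series in the instrument's primary output signals, trained against line-by-line calculations.

        import numpy as np

        def design_matrix(v, dv, order=2):
            # all monomials v^i * dv^j with 1 <= i + j <= order, plus a constant
            cols = [np.ones_like(v)]
            for total in range(1, order + 1):
                for j in range(total + 1):
                    cols.append(v**(total - j) * dv**j)
            return np.column_stack(cols)

        def train(v, dv, co_true, order=2):
            # regression coefficients from line-by-line training data
            coef, *_ = np.linalg.lstsq(design_matrix(v, dv, order), co_true, rcond=None)
            return coef

        def infer(v, dv, coef, order=2):
            # real-time inference is just one matrix-vector product
            return design_matrix(v, dv, order) @ coef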

  7. Technique for Chestband Contour Shape-Mapping in Lateral Impact

    PubMed Central

    Hallman, Jason J; Yoganandan, Narayan; Pintar, Frank A

    2011-01-01

    The chestband transducer permits noninvasive measurement of transverse-plane biomechanical response during blunt thorax impact. Although experiments may reveal a complex two-dimensional (2D) deformation response to boundary conditions, biomechanical studies have heretofore employed only uniaxial measurements to quantify chestband contours. The present study described and evaluated an algorithm by which source subject-specific contour data may be systematically mapped to a target generalized anthropometry for computational studies of biomechanical response or anthropomorphic test dummy development. Algorithm performance was evaluated using chestband contour datasets from two rigid lateral impact boundary conditions: flat wall and anterior-oblique wall. Comparing source and target anthropometry contours, peak deflections and deformation-time traces deviated by less than 4%. These results suggest that the algorithm is appropriate for mapping 2D deformation response to lateral impact boundary conditions. PMID:21676399

  8. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
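
    The authors' projection solver is not reproduced here; as a sketch of the problem class only, the snippet below runs projected gradient descent on min ||Dw - d||^2 subject to w >= 0, where D maps pencil-beam weights to voxel doses (a standard IMPT abstraction, assumed here).

        import numpy as np

        def projected_gradient(D, d, n_iter=500):
            w = np.zeros(D.shape[1])
            step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe step from the spectral norm
            for _ in range(n_iter):
                w -= step * (D.T @ (D @ w - d))      # gradient step on the quadratic
                np.maximum(w, 0.0, out=w)            # project onto the feasible set w >= 0
            return w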

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Dean J.; Harding, Lee T.

    Isotope identification algorithms that are contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms operating under Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally-occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files that are created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with development of radiation detection equipment.

  10. Chromatographic and spectroscopic identification and recognition of ammoniacal cochineal dyes and pigments.

    PubMed

    Chieli, A; Sanyova, J; Doherty, B; Brunetti, B G; Miliani, C

    2016-06-05

    In this work a combined chromatographic and spectroscopic approach is used to provide a diagnostic assessment of semi-synthetic ammoniacal cochineal through syntheses of its dyes and lakes according to art-historical recipes. Commercially introduced in the late XIX century as a dye and pigment, it was used to obtain a brilliant purplish/violet nuance that provided a more stable option than carminic acid, although its use in artefacts and artworks of heritage importance has been scarcely documented. Through HPLC-DAD, it has been possible to identify 4-aminocarminic acid as the main component of ammoniacal cochineal, highlighting a chemical formula analogous to acid-stable carmine, a recently patented food dye. FTIR clearly distinguishes the amine group in the ammoniacal cochineal dye preparation, and TLC-SERS allows adequate separation and spectral differentiation of its main components. Colloidal SERS has permitted spectral markers useful for distinguishing ammoniacal cochineal from carminic acid to be highlighted and discussed. Finally, the methods tested in this study for the identification of ammoniacal cochineal have been validated by analyzing a sample of dyed wool. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Spectroscopic and Kinetic Properties of the Molybdenum-containing, NAD+-dependent Formate Dehydrogenase from Ralstonia eutropha*

    PubMed Central

    Niks, Dimitri; Duvvuru, Jayant; Escalona, Miguel; Hille, Russ

    2016-01-01

    We have examined the rapid reaction kinetics and spectroscopic properties of the molybdenum-containing, NAD+-dependent FdsABG formate dehydrogenase from Ralstonia eutropha. We confirm previous steady-state studies of the enzyme and extend its characterization to a rapid kinetic study of the reductive half-reaction (the reaction of formate with oxidized enzyme). We have also characterized the electron paramagnetic resonance signal of the molybdenum center in its MoV state and demonstrated the direct transfer of the substrate Cα hydrogen to the molybdenum center in the course of the reaction. Varying temperature, microwave power, and level of enzyme reduction, we are able to clearly identify the electron paramagnetic resonance signals for four of the iron/sulfur clusters of the enzyme and find suggestive evidence for two others; we observe a magnetic interaction between the molybdenum center and one of the iron/sulfur centers, permitting assignment of this signal to a specific iron/sulfur cluster in the enzyme. In light of recent advances in our understanding of the structure of the molybdenum center, we propose a reaction mechanism involving direct hydride transfer from formate to a molybdenum-sulfur group of the molybdenum center. PMID:26553877

  12. Effect of solvent polarity on the spectroscopic properties of an alkynyl gold(i) gelator. The particular case of water.

    PubMed

    Gavara, Raquel; Lima, João Carlos; Rodríguez, Laura

    2016-05-11

    The spectroscopic properties of aggregates obtained from the hydrogelator [Au(4-pyridylethynyl)(PTA)] were studied in solvents of different polarities. Inspection of the absorption and emission spectra of dilute solutions showed that the singlet ground state of the monomeric species is sensitive to polarity and is stabilized in more polar solvents, whereas the triplet excited state is rather insensitive to changes in polarity. The study of relatively concentrated solutions revealed the presence of new emission and excitation bands at 77 K that were attributed to the presence of different kinds of aggregates. Particularly interesting behaviour was revealed in water, where aggregation is observed to be more efficient. To this end, absorption, emission quantum yields and luminescence lifetimes of aqueous solutions at different concentrations were investigated in more detail. These data permitted one to correlate the increase of the non-radiative and radiative rate constants of the low-lying triplet emissive state with concentration, and therefore with the lower-limit concentration for aggregation, due to the shortening of the average Au···Au distances in the aggregates and the consequent enhancement of spin-orbit coupling in the system.

  13. USB 3.0 readout and time-walk correction method for Timepix3 detector

    NASA Astrophysics Data System (ADS)

    Turecek, D.; Jakubek, J.; Soukup, P.

    2016-12-01

    The hybrid particle counting pixel detectors of the Medipix family are well known. In this contribution we present the new USB 3.0 based interface AdvaDAQ for the Timepix3 detector. The AdvaDAQ interface is designed with maximal emphasis on flexibility. It is the successor of the FitPIX interface developed at IEAP CTU in Prague. Its modular architecture supports all Medipix/Timepix chips and all their different readout modes: Medipix2, Timepix (serial and parallel), Medipix3 and Timepix3. The high bandwidth of USB 3.0 permits readout of 1700 full frames per second with Timepix, or 8-channel data acquisition from Timepix3 at a frequency of 320 MHz. The control and data acquisition are integrated in the multiplatform PiXet software (MS Windows, Mac OS, Linux). In the second part of the publication, a new method for correction of the time-walk effect in Timepix3 is described. Moreover, fully spectroscopic X-ray imaging with a Timepix3 detector operated in the ToT (Time-over-Threshold) mode is presented. It is shown that the AdvaDAQ's readout speed is sufficient to perform spectroscopic measurements at full intensity in radiographic setups equipped with nano- or micro-focus X-ray tubes.

  14. First-principles anharmonic quantum calculations for peptide spectroscopy: VSCF calculations and comparison with experiments.

    PubMed

    Roy, Tapta Kanchan; Sharma, Rahul; Gerber, R Benny

    2016-01-21

    First-principles quantum calculations for anharmonic vibrational spectroscopy of three protected dipeptides are carried out and compared with experimental data. Using hybrid HF/MP2 potentials, the Vibrational Self-Consistent Field with Second-Order Perturbation Correction (VSCF-PT2) algorithm is used to compute the spectra without any ad hoc scaling or fitting. All of the vibrational modes (135 for the largest system) are treated quantum mechanically and anharmonically, using full pair-wise coupling potentials to represent the interaction between different modes. In the hybrid potential scheme, the MP2 method is used for the harmonic part of the potential and a modified HF method for the anharmonic part. The overall agreement between computed spectra and experiment is very good and reveals different signatures for different conformers. This study shows that first-principles spectroscopic calculations of good accuracy are possible for dipeptides, and hence opens possibilities for determining dipeptide conformer structures by comparing spectroscopic calculations with experiment.

  15. Investigating a physical basis for spectroscopic estimates of leaf nitrogen concentration

    USGS Publications Warehouse

    Kokaly, R.F.

    2001-01-01

    The reflectance spectra of dried and ground plant foliage are examined for changes directly due to increasing nitrogen concentration. A broadening of the 2.1-μm absorption feature is observed as nitrogen concentration increases. The broadening is shown to arise from two absorptions at 2.054 μm and 2.172 μm. The wavelength positions of these absorptions coincide with the absorption characteristics of the nitrogen-containing amide bonds in proteins. The observed presence of these absorption features in the reflectance spectra of dried foliage is suggested to form a physical basis for the high correlations established by stepwise multiple linear regression techniques between the reflectance of dry plant samples and their nitrogen concentration. The consistent change in the 2.1-μm absorption feature as nitrogen increases and the offset position of protein absorptions compared to those of other plant components together indicate that a generally applicable algorithm may be developed for spectroscopic estimates of nitrogen concentration from the reflectance spectra of dried plant foliage samples. © 2001 Published by Elsevier Science Ireland Ltd.

  16. Reduced electron exposure for energy-dispersive spectroscopy using dynamic sampling

    DOE PAGES

    Zhang, Yan; Godaliyadda, G. M. Dilshan; Ferrier, Nicola; ...

    2017-10-23

    Analytical electron microscopy and spectroscopy of biological specimens, polymers, and other beam-sensitive materials has been a challenging area due to irradiation damage. There is a pressing need to develop novel imaging and spectroscopic imaging methods that will minimize such sample damage as well as reduce the data acquisition time. The latter is useful for high-throughput analysis of materials structure and chemistry. In this work, we present a novel machine learning based method for dynamic sparse sampling of EDS data using a scanning electron microscope. Our method, based on the supervised learning approach for dynamic sampling and neural-network-based classification of EDS data, allows a dramatic reduction in the total sampling of up to 90%, while maintaining the fidelity of the reconstructed elemental maps and spectroscopic data. We believe this approach will enable imaging and elemental mapping of materials that would otherwise be inaccessible to these analysis techniques.

  17. ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning

    NASA Astrophysics Data System (ADS)

    Sadeh, I.; Abdalla, F. B.; Lahav, O.

    2016-10-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.

  18. Spectroscopic Detection of Caries Lesions

    PubMed Central

    Ruohonen, Mika; Palo, Katri; Alander, Jarmo

    2013-01-01

    Background. A caries lesion causes changes in the optical properties of the affected tissue. Currently a caries lesion can be detected only at a relatively late stage of development. Caries diagnosis also suffers from high interobserver variance. Methods. This is a pilot study to test the suitability of optical diffuse reflectance spectroscopy for caries diagnosis. Reflectance visible/near-infrared spectroscopy (VIS/NIRS) was used to measure caries lesions and healthy enamel on extracted human teeth. The results were analysed with a computational algorithm in order to find a rule-based classification method to detect caries lesions. Results. The classification indicated that the measured points of enamel could be assigned to one of three classes: healthy enamel, a caries lesion, and stained healthy enamel. The features that enabled this were consistent with theory. Conclusions. It seems that spectroscopic measurements can help to reduce false positives in an in vitro setting. However, further research is required to evaluate the strength of the evidence for the method's performance. PMID:27006907

  19. New Insights on the White Dwarf Luminosity and Mass Functions from the LSS-GAC Survey

    NASA Astrophysics Data System (ADS)

    Rebassa-Mansergas, Alberto; Liu, Xiaowei; Cojocaru, Ruxandra; Torres, Santiago; García–Berro, Enrique; Yuan, Haibo; Huang, Yang; Xiang, Maosheng

    2015-06-01

    The white dwarf (WD) population observed in magnitude-limited surveys can be used to derive the luminosity function (LF) and mass function (MF), once the corresponding volume corrections are employed. However, the WD samples from which the observational LFs and MFs are built are the result of complicated target selection algorithms. Thus, it is difficult to quantify the effects of the observational biases on the observed functions. The LAMOST (Large sky Area Multi-Object fiber Spectroscopic Telescope) spectroscopic survey of the Galactic anti-center (LSS-GAC) has well-defined selection criteria. This is a noticeable advantage over previous surveys. Here we derive the WD LF and MF of the LSS-GAC, and use a Monte Carlo code to simulate the WD population in the Galactic anti-center. We apply the well-defined LSS-GAC selection criteria to the simulated populations, taking into account all observational biases, and perform the first meaningful comparison between the simulated WD LFs and MFs and the observed ones.

  20. Three-dimensional tracking for efficient fire fighting in complex situations

    NASA Astrophysics Data System (ADS)

    Akhloufi, Moulay; Rossi, Lucile

    2009-05-01

    Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools that permit prediction of fire-front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into different homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Extracted points from stereo images are then used to compute the 3D shape of the fire front, from which the fire volume is built. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme, which is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
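
    The color-space rule is the easiest part to sketch; the thresholds below are illustrative assumptions, and the clustering and stereo stages are not reproduced.

        import numpy as np

        def fire_mask(rgb):
            # combine RGB ordering and YUV (BT.601) cues for warm, bright pixels
            r = rgb[..., 0].astype(float)
            g = rgb[..., 1].astype(float)
            b = rgb[..., 2].astype(float)
            y = 0.299 * r + 0.587 * g + 0.114 * b
            u = -0.147 * r - 0.289 * g + 0.436 * b
            v = 0.615 * r - 0.515 * g - 0.100 * b
            return (r > g) & (g > b) & (y > y.mean()) & (v > u)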

  1. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
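
    A stripped-down sketch of the statistical part of this idea (ignoring the reversibility constraint the paper enforces): sample transition matrices row-wise from Dirichlet posteriors over observed transition counts, and propagate each sample to the equilibrium expectation of a per-state observable a.

        import numpy as np

        def equilibrium_expectation_samples(counts, a, n_samples=1000, seed=0):
            # counts: square array of observed transition counts between states
            rng = np.random.default_rng(seed)
            out = []
            for _ in range(n_samples):
                T = np.vstack([rng.dirichlet(row + 1.0) for row in counts])
                evals, evecs = np.linalg.eig(T.T)
                pi = np.real(evecs[:, np.argmax(np.real(evals))])
                pi /= pi.sum()                  # stationary distribution of T
                out.append(pi @ a)
            return np.array(out)                # spread gives the error bar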

  2. Nondestructive multispectral reflectoscopy between 800 and 1900 nm: An instrument for the investigation of the stratigraphy in paintings.

    PubMed

    Karagiannis, G; Salpistis, Chr; Sergiadis, G; Chryssoulakis, Y

    2007-06-01

    In the present work, a powerful tool for the investigation of paintings is presented. It permits tuneable multispectral real-time imaging between 200 and 5000 nm and the simultaneous multispectral acquisition of spectroscopic data from the same region. We propose the term infrared reflectoscopy (Chryssoulakis and Chassery, The Application of Physicochemical Methods of Analysis and Image Processing Techniques to Painted Works of Art, Erasmus Project ICP-88-006-6, Athens, June, 1989) for tuneable infrared imaging of paintings, a technique that is especially effective when the spectroscopic data acquisition is performed between 800 and 1900 nm. Elements such as underdrawings, old damage that is not visible to the naked eye, later interventions or overpaintings, hidden signatures, nonvisible inscriptions, and authenticity features can thus be detected, with the overlying paint layers becoming successively "transparent" due to the deep infrared penetration. The spectroscopic data are collected from each point of the studied area in 5 nm steps through grey-level measurement, after adequate infrared reflectance (%R) and curve calibration. The detection limits of the infrared detector as well as the power distribution of the radiation coming out through the micrometer slit assembly of the monochromator in use are also taken into account. Inorganic pigments can thus be identified and their physicochemical properties directly compared to the corresponding infrared images at each wavelength within the optimum region. In order to check its effectiveness, this method was applied to an experimental portable icon of known stratigraphy.

  3. Integrated RF-shim coil allowing two degrees of freedom shim current.

    PubMed

    Jiazheng Zhou; Ying-Hua Chu; Yi-Cheng Hsu; Pu-Yeh Wu; Stockmann, Jason P; Fa-Hsuan Lin

    2016-08-01

    High-quality magnetic resonance imaging and spectroscopic measurements require a highly homogeneous magnetic field. Different from global shimming, localized off-resonance can be corrected by using multi-coil shimming. Previously, integrated RF and shimming coils have been used to implement multi-coil shimming. Such coils share the same conductor for RF signal reception and shim field generation. Here we propose a new design of the integrated RF-shim coil at 3-tesla, where two independent shim current paths are allowed in each coil. This coil permits a higher degree of freedom in shim current distribution design. We use both phantom experiments and simulations to demonstrate the feasibility of this new design.

  4. Infrared Microtransmission And Microreflectance Of Biological Systems

    NASA Astrophysics Data System (ADS)

    Hill, Steve L.; Krishnan, K.; Powell, Jay R.

    1989-12-01

    The infrared microsampling technique has been successfully applied to a variety of biological systems. A microtomed tissue section may be prepared to permit both visual and infrared discrimination. Infrared structural information may be obtained for a single cell, and computer-enhanced images of tissue specimens may be calculated from spectral map data sets. An analysis of a tissue section anomaly may suggest either protein compositional differences or a localized concentration of foreign matter. Opaque biological materials such as teeth, gallstones, and kidney stones may be analyzed by microreflectance spectroscopy. Absorption anomalies due to specular dispersion are corrected with the Kramers-Kronig transformation. Corrected microreflectance spectra may contribute to compositional analysis and correlate disease-related spectral differences to visual specimen anomalies.

  5. NQR: From imaging to explosives and drugs detection

    NASA Astrophysics Data System (ADS)

    Osán, Tristán M.; Cerioni, Lucas M. C.; Forguez, José; Ollé, Juan M.; Pusiol, Daniel J.

    2007-02-01

    The main aim of this work is to present an overview of the nuclear quadrupole resonance (NQR) spectroscopy capabilities for solid state imaging and detection of illegal substances, such as explosives and drugs. We briefly discuss the evolution of different NQR imaging techniques, in particular those involving spatial encoding which permit conservation of spectroscopic information. It has been shown that plastic explosives and other forbidden substances cannot be easily detected by means of conventional inspection techniques, such as those based on conventional X-ray technology. For this kind of applications, the experimental results show that the information inferred from NQR spectroscopy provides excellent means to perform volumetric and surface detection of dangerous explosive and drug compounds.

  6. Integrated approach to ischemic heart disease. The one-stop shop.

    PubMed

    Kramer, C M

    1998-05-01

    Magnetic resonance imaging is unique in its variety of applications for imaging the cardiovascular system. A thorough assessment of myocardial structure, function, and perfusion; assessment of coronary artery anatomy and flow; and spectroscopic evaluation of cardiac energetics can all be readily performed by magnetic resonance imaging. One key to the advancement of cardiac magnetic resonance imaging as a clinical tool is the integration of these capabilities into a single comprehensive evaluation, the so-called one-stop shop. Improvements in magnetic resonance hardware, software, and imaging speed now permit this integrated examination. Cardiac magnetic resonance is a powerful technique with the potential to replace or complement other commonly used techniques in the diagnostic armamentarium of physicians caring for patients with ischemic heart disease.

  7. New Ground-based Spectral Observations of Mercury and Comparison with the Moon

    NASA Technical Reports Server (NTRS)

    Blewett, D. T.; Warell, J.

    2003-01-01

    Spectroscopic observations (400-670 nm) of Mercury were made at La Palma with the Nordic Optical Telescope (NOT) in June and July of 2002. Extensive observations of solar analog standard stars and validation spectra of 7 Iris and a variety of locations on the Moon were also collected. The 2002 Mercury data were also combined with previous observations (520-970 nm) from the Swedish Solar Vacuum Telescope (SVST). A spectrum (400-970 nm) calibrated to standard bidirectional geometry (alpha=i=30deg, e=0deg) was constructed based on the spectral slopes from 2002. The combined spectrum permits analysis with the Lucey lunar abundance relations for FeO and TiO2.

  8. The Sloan Digital Sky Survey Quasar Catalog: Fourteenth data release

    NASA Astrophysics Data System (ADS)

    Pâris, Isabelle; Petitjean, Patrick; Aubourg, Éric; Myers, Adam D.; Streblyanska, Alina; Lyke, Brad W.; Anderson, Scott F.; Armengaud, Éric; Bautista, Julian; Blanton, Michael R.; Blomqvist, Michael; Brinkmann, Jonathan; Brownstein, Joel R.; Brandt, William Nielsen; Burtin, Étienne; Dawson, Kyle; de la Torre, Sylvain; Georgakakis, Antonis; Gil-Marín, Héctor; Green, Paul J.; Hall, Patrick B.; Kneib, Jean-Paul; LaMassa, Stephanie M.; Le Goff, Jean-Marc; MacLeod, Chelsea; Mariappan, Vivek; McGreer, Ian D.; Merloni, Andrea; Noterdaeme, Pasquier; Palanque-Delabrouille, Nathalie; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Schneider, Donald P.; Seo, Hee-Jong; Tojeiro, Rita; Weaver, Benjamin A.; Weijmans, Anne-Marie; Yèche, Christophe; Zarrouk, Pauline; Zhao, Gong-Bo

    2018-05-01

    We present the Data Release 14 Quasar catalog (DR14Q) from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV). This catalog includes all SDSS-IV/eBOSS objects that were spectroscopically targeted as quasar candidates, that are confirmed as quasars via a new automated procedure combined with a partial visual inspection of spectra, that have luminosities M_i[z = 2] < -20.5 (in a ΛCDM cosmology with H_0 = 70 km s^-1 Mpc^-1, Ω_M = 0.3, and Ω_Λ = 0.7), and that either display at least one emission line with a full width at half maximum larger than 500 km s^-1 or, if not, have interesting/complex absorption features. The catalog also includes previously spectroscopically confirmed quasars from SDSS-I, II, and III. The catalog contains 526 356 quasars (144 046 are new discoveries since the beginning of SDSS-IV) detected over 9376 deg^2 (2044 deg^2 having new spectroscopic data available), with robust identification and redshifts measured by a combination of principal component eigenspectra. The catalog is estimated to have about 0.5% contamination. Redshifts are also provided for the Mg II emission line. The catalog identifies 21 877 broad absorption line quasars and lists their characteristics. For each object, the catalog presents five-band (u, g, r, i, z) CCD-based photometry with typical accuracy of 0.03 mag. The catalog also contains X-ray, ultraviolet, near-infrared, and radio emission properties of the quasars, when available, from other large-area surveys. The calibrated digital spectra, covering the wavelength region 3610-10 140 Å at a spectral resolution in the range 1300 < R < 2500, can be retrieved from the SDSS Science Archive Server. http://www.sdss.org/dr14/algorithms/qso_catalog

  9. Blooming Trees: Substructures and Surrounding Groups of Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Yu, Heng; Diaferio, Antonaldo; Serra, Ana Laura; Baldi, Marco

    2018-06-01

    We develop the Blooming Tree Algorithm, a new technique that uses spectroscopic redshift data alone to identify the substructures and the surrounding groups of galaxy clusters, along with their member galaxies. Based on the estimated binding energy of galaxy pairs, the algorithm builds a binary tree that hierarchically arranges all of the galaxies in the field of view. The algorithm searches for buds, corresponding to gravitational potential minima on the binary tree branches; for each bud, the algorithm combines the number of galaxies, their velocity dispersion, and their average pairwise distance into a parameter that discriminates between the buds that do not correspond to any substructure or group, and thus eventually die, and the buds that correspond to substructures and groups, and thus bloom into the identified structures. We test our new algorithm with a sample of 300 mock redshift surveys of clusters in different dynamical states; the clusters are extracted from a large cosmological N-body simulation of a ΛCDM model. We limit our analysis to substructures and surrounding groups identified in the simulation with mass larger than 10^13 h^-1 M_⊙. With mock redshift surveys with 200 galaxies within 6 h^-1 Mpc from the cluster center, the technique recovers 80% of the real substructures and 60% of the surrounding groups; in 57% of the identified structures, at least 60% of the member galaxies of the substructures and groups belong to the same real structure. These results improve by roughly a factor of two the performance of the best substructure identification algorithm currently available, the σ plateau algorithm, and suggest that our Blooming Tree Algorithm can be an invaluable tool for detecting substructures of galaxy clusters and investigating their complex dynamics.
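
    A toy version of the tree-building stage is sketched below: galaxies are linked through a pairwise binding-energy-like affinity and arranged hierarchically with an off-the-shelf linkage routine, after which candidate structures are screened by size and velocity dispersion. The constants, the mock catalog, and the screening rule are placeholders; the published discriminant combines the same three quantities but with a specific calibration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Mock catalog: two compact "structures" plus a loose field population.
rng = np.random.default_rng(1)
pos = np.vstack([rng.normal((0.0, 0.0), 0.3, (60, 2)),
                 rng.normal((3.0, 0.0), 0.3, (40, 2)),
                 rng.uniform(-2.0, 5.0, (30, 2))])
v = np.concatenate([rng.normal(0.0, 300.0, 60),
                    rng.normal(800.0, 250.0, 40),
                    rng.normal(0.0, 1200.0, 30)])

# Pairwise binding-energy proxy: -1/r_p + c * dv^2 (more negative = more bound).
rp = np.sqrt(((pos[:, None] - pos[None, :]) ** 2).sum(-1)) + 1e-9
dv2 = (v[:, None] - v[None, :]) ** 2
energy = -1.0 / rp + 1e-6 * dv2
np.fill_diagonal(energy, 0.0)

# Shift to a non-negative dissimilarity and grow the binary tree.
dis = energy - energy.min()
np.fill_diagonal(dis, 0.0)
tree = linkage(squareform(dis, checks=False), method="single")

# Toy "bud" screening: cut the tree and keep groups that are both populous
# and kinematically cold.
labels = fcluster(tree, t=6, criterion="maxclust")
for lbl in np.unique(labels):
    m = np.flatnonzero(labels == lbl)
    if len(m) >= 10 and np.std(v[m]) < 500.0:
        print("candidate structure: %d galaxies, sigma_v = %.0f km/s"
              % (len(m), np.std(v[m])))
```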

  10. Determination of the Maximum Temperature in a Non-Uniform Hot Zone by Line-of-Site Absorption Spectroscopy with a Single Diode Laser.

    PubMed

    Liger, Vladimir V; Mironenko, Vladimir R; Kuritsyn, Yurii A; Bolshov, Mikhail A

    2018-05-17

    A new algorithm is developed for estimating the maximum temperature in a non-uniform hot zone with a sensor based on diode laser absorption spectrometry. The algorithm fits the absorption spectrum of a test molecule in the non-uniform zone by a linear combination of two single-temperature spectra simulated from spectroscopic databases. The proposed algorithm gives a better estimate of the maximum temperature of a non-uniform zone and is useful when only the maximum temperature, rather than a precise temperature profile, is of primary interest. The efficiency and specificity of the algorithm are demonstrated in numerical experiments and proven experimentally using an optical cell with two sections, in which temperatures and water vapor concentrations could be regulated independently. The best fit was found using a correlation technique. A distributed feedback (DFB) diode laser in the spectral range around 1.343 µm was used in the experiments. Because of the significant differences between the temperature dependences of the experimental and theoretical absorption spectra in the temperature range 300-1200 K, a database was constructed from experimentally recorded single-temperature spectra. Using the developed algorithm, the maximum temperature in the two-section cell was estimated with an accuracy better than 30 K.
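
    The two-spectrum fitting idea can be sketched directly: the measured spectrum of the non-uniform path is fitted by a linear combination a·S(T1) + b·S(T2) over a grid of temperature pairs, the best pair is selected by correlation, and max(T1, T2) estimates the hot-zone maximum. The Gaussian "lines" below stand in for database or measured templates; everything here is illustrative, not the paper's spectra.

```python
import numpy as np

nu = np.linspace(0.0, 1.0, 400)   # normalized frequency axis

def template(T):
    # Toy single-temperature spectrum: three lines whose relative strengths
    # vary nonlinearly with temperature (stands in for database spectra).
    return (np.exp(-((nu - 0.3) / 0.02) ** 2)
            + (T / 1000.0) * np.exp(-((nu - 0.6) / 0.02) ** 2)
            + (T / 1000.0) ** 2 * np.exp(-((nu - 0.8) / 0.02) ** 2))

measured = 0.6 * template(400.0) + 0.4 * template(1100.0)

temps = np.arange(300.0, 1250.0, 50.0)
best = (-np.inf, None)
for t1 in temps:
    for t2 in temps[temps >= t1]:
        A = np.column_stack([template(t1), template(t2)])
        coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
        corr = np.corrcoef(A @ coef, measured)[0, 1]  # correlation figure of merit
        if corr > best[0]:
            best = (corr, (t1, t2))

print("estimated maximum temperature: %.0f K" % max(best[1]))
```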

  11. Application of artificial neural networks and genetic algorithms to modeling molecular electronic spectra in solution

    NASA Astrophysics Data System (ADS)

    Lilichenko, Mark; Kelley, Anne Myers

    2001-04-01

    A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
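
    A minimal sketch of this hybrid strategy appears below: a toy genetic algorithm proposes parameter sets, and the final elite candidates are polished with Levenberg-Marquardt. A two-parameter Gaussian band stands in for the multi-parameter vibronic model, and the GA operators are deliberately simple; this is a sketch of the idea, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 200)
true = np.array([1.8, 0.7])                       # band center, width
y = np.exp(-((x - true[0]) / true[1]) ** 2) + 0.01 * rng.normal(size=x.size)

def resid(p):
    return np.exp(-((x - p[0]) / p[1]) ** 2) - y

# Toy genetic algorithm: selection of the elite, averaging crossover, Gaussian
# mutation, with elitism to keep the best candidates.
pop = rng.uniform([-4.0, 0.1], [4.0, 2.0], size=(40, 2))
for gen in range(15):
    cost = np.array([0.5 * np.sum(resid(p) ** 2) for p in pop])
    elite = pop[np.argsort(cost)[:10]]
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    pop = parents.mean(axis=1) + 0.1 * rng.normal(size=(40, 2))
    pop[:10] = elite

# Levenberg-Marquardt polish of the leading candidates.
best = min((least_squares(resid, p, method="lm") for p in pop[:5]),
           key=lambda r: r.cost)
print("fitted parameters:", best.x)
```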

  12. Digital processing of satellite imagery application to jungle areas of Peru

    NASA Technical Reports Server (NTRS)

    Pomalaza, J. C. (Principal Investigator); Pomalaza, C. A.; Espinoza, J.

    1976-01-01

    The author has identified the following significant results. The use of clustering methods permits the development of relatively fast classification algorithms that could be implemented in an inexpensive computer system with a limited amount of memory. Analysis of CCTs using these techniques can provide a great deal of detail, permitting use of the maximum resolution of LANDSAT imagery. Cases were identified in which classification techniques based on a Gaussian approximation of the distribution functions could be used to advantage. For jungle areas, channels 5 and 7 can provide enough information to delineate drainage patterns, swamp and wet areas, and make a reasonably broad classification of forest types.

  13. SEVEN NEW BINARIES DISCOVERED IN THE KEPLER LIGHT CURVES THROUGH THE BEER METHOD CONFIRMED BY RADIAL-VELOCITY OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faigler, S.; Mazeh, T.; Tal-Or, L.

    We present seven newly discovered non-eclipsing short-period binary systems with low-mass companions, identified by the recently introduced BEER algorithm, applied to the publicly available 138-day photometric light curves obtained by the Kepler mission. The detection is based on the beaming effect (sometimes called Doppler boosting), which increases (decreases) the brightness of any light source approaching (receding from) the observer, enabling a prediction of the stellar Doppler radial-velocity (RV) modulation from its precise photometry. The BEER algorithm identifies the BEaming periodic modulation, in combination with the well-known Ellipsoidal and Reflection/heating periodic effects, induced by short-period companions. The seven detections were confirmed by spectroscopic RV follow-up observations, indicating minimum secondary masses in the range 0.07-0.4 M_Sun. The binaries discovered establish for the first time the feasibility of the BEER algorithm as a new detection method for short-period non-eclipsing binaries, with the potential to detect in the near future non-transiting brown-dwarf secondaries, or even massive planets.
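
    Because the three photometric effects appear at fixed harmonics of the orbital phase, the core of a BEER-style search reduces, at each trial period, to a linear least-squares fit. The sketch below demonstrates this on a synthetic light curve with illustrative amplitudes; the actual pipeline also scans periods and vets candidates, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 138.0, 6000)                 # days (Kepler-like baseline)
P = 3.2                                           # trial orbital period, days
phase = 2.0 * np.pi * t / P

# Synthetic relative flux: beaming, ellipsoidal, reflection + white noise.
flux = (1.0
        + 50e-6 * np.sin(phase)                   # beaming (Doppler boosting)
        - 80e-6 * np.cos(2.0 * phase)             # ellipsoidal
        - 30e-6 * np.cos(phase)                   # reflection/heating
        + 20e-6 * rng.normal(size=t.size))

# Linear least squares for the harmonic amplitudes at the trial period.
A = np.column_stack([np.ones_like(t), np.sin(phase),
                     np.cos(phase), np.cos(2.0 * phase)])
coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
print("beaming, reflection, ellipsoidal amplitudes (ppm): %.1f, %.1f, %.1f"
      % (1e6 * coef[1], 1e6 * coef[2], 1e6 * coef[3]))
```

    The recovered beaming amplitude is what permits predicting the RV modulation that the spectroscopic follow-up then confirms.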

  14. Ensembles of radial basis function networks for spectroscopic detection of cervical precancer

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.

    1998-01-01

    The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.
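
    A stripped-down version of such an ensemble is sketched below: each member RBF network draws random Gaussian centers from the data and learns a ridge-regression readout, and the ensemble averages the member outputs. The synthetic "spectra" and labels are placeholders for the fluorescence data, and the architecture is far simpler than the published networks.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))                    # spectra (samples x channels)
y = (X[:, :5].sum(axis=1) > 0).astype(float)      # toy binary tissue label

def rbf_features(X, centers, gamma=0.1):
    # Gaussian radial basis features around the given centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf(X, y, n_centers=30, lam=1e-3, seed=0):
    # One RBF network: random centers + ridge-regression readout.
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), n_centers, replace=False)]
    Phi = rbf_features(X, centers)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_centers), Phi.T @ y)
    return centers, w

# Ensemble: average the outputs of several independently seeded members.
ensemble = [train_rbf(X, y, seed=s) for s in range(7)]
scores = np.mean([rbf_features(X, c) @ w for c, w in ensemble], axis=0)
print("training accuracy of the ensemble: %.2f" % np.mean((scores > 0.5) == y))
```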

  15. Optical spectroscopic characteristics of lactate and mitochondrion as new biomarkers in cancer diagnosis: understanding Warburg effect

    NASA Astrophysics Data System (ADS)

    Liu, C.-H.; Ni, X. H.; Pu, Yang; Yang, Y. L.; Zhou, F.; Zuzolo, R.; Wang, W. B.; Masilamani, V.; Rizwan, A.; Alfano, R. R.

    2012-01-01

    Cancer cells display high rates of glycolysis even under normoxia and mostly under hypoxia. Warburg proposed this effect of altered metabolism in cells more than 80 years ago. It is considered a hallmark of cancer. Optical spectroscopy can be used to explore this effect. Pathophysiological studies indicate that mitochondria of cancer cells are enlarged and increased in number. Warburg observed that cancer cells tend to convert most glucose to lactate regardless of the presence of oxygen. Previous observations show increased lactate in breast cancer lines. The focus of this study is to investigate the relative content changes of lactate and mitochondria in human cancerous and normal breast tissue samples using optical spectroscopic techniques. The optical spectra were obtained from 30 cancerous and 25 normal breast tissue samples and five model components (tryptophan, fat, collagen, lactate and mitochondrion) using fluorescence, Stokes shift and Raman spectroscopy. The basic biochemical component analysis (BBCA) model and a set of algorithms were used to analyze the spectra. Our analyses of fluorescence spectra showed a 14 percent increase in lactate content and a 2.5 times increase in mitochondria number in cancerous breast tissue as compared with normal tissue. Our findings indicate that optical spectroscopic techniques may be used to understand the Warburg effect. Lactate and mitochondrion content changes in tumors examined using optical spectroscopy may be used as a prognostic molecular marker in clinical applications.

  16. Prior-knowledge Fitting of Accelerated Five-dimensional Echo Planar J-resolved Spectroscopic Imaging: Effect of Nonlinear Reconstruction on Quantitation.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Thomas, M Albert

    2017-07-24

    1H magnetic resonance spectroscopic imaging (SI) is a powerful tool capable of investigating metabolism in vivo from multiple regions. However, SI techniques are time consuming and are therefore difficult to implement clinically. By applying non-uniform sampling (NUS) and compressed sensing (CS) reconstruction, it is possible to accelerate these scans while retaining key spectral information. One recently developed method that utilizes this type of acceleration is the five-dimensional echo planar J-resolved spectroscopic imaging (5D EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) spectra from three spatial dimensions. The prior-knowledge fitting (ProFit) algorithm is typically used to quantify 2D spectra in vivo; however, the effects of NUS and CS reconstruction on the quantitation results are unknown. This study utilized a simulated brain phantom to investigate the errors introduced through the acceleration methods. Errors (normalized root mean square error >15%) were found in metabolite concentrations after twelve-fold acceleration for several low-concentration (<2 mM) metabolites. The Cramér-Rao lower bound% (CRLB%) values, which are typically used for quality control, were not reflective of the increased quantitation error arising from acceleration. Finally, occipital white (OWM) and gray (OGM) matter in the human brain were quantified in vivo using the 5D EP-JRESI sequence with eight-fold acceleration.

  17. Photoinduced relaxation dynamics of nitrogen-capped silicon nanoclusters: a TD-DFT study

    NASA Astrophysics Data System (ADS)

    Liu, Xiang-Yang; Xie, Xiao-Ying; Fang, Wei-Hai; Cui, Ganglong

    2018-04-01

    Herein we have developed and implemented a TD-DFT-based surface-hopping dynamics simulation method with a recently proposed numerical algorithm capable of efficiently computing nonadiabatic couplings, a semiclassical spectrum simulation method, and an excited-state character analysis method based on the one-electron transition density matrix. With these methods, we have studied the spectroscopic properties, excited-state characters, and photoinduced relaxation dynamics of three silicon nanoclusters capped with different chromophores (Cl@SiQD, Car@SiQD, Azo@SiQD). Spectroscopically, the main absorption peak is visibly red-shifted from Cl@SiQD via Car@SiQD to Azo@SiQD. In contrast to Cl@SiQD and Car@SiQD, two absorption peaks are observed in Azo@SiQD. Mechanistically, the excited-state relaxation to the lowest excited singlet state S1 is ultrafast in Cl@SiQD, taking less than 190 fs and involving no excited-state trapping. In comparison, there is clear excited-state trapping in Car@SiQD and Azo@SiQD: in the former, the S2 state is trapped for more than 300 fs; in the latter, the S3 trapping lasts more than 615 fs. These results demonstrate that the interfacial interaction has a significant influence on the spectroscopic properties and excited-state relaxation dynamics. The knowledge gained in this work could be helpful for the design of silicon nanoclusters with better photoluminescence performance.

  18. Determining Gender by Raman Spectroscopy of a Bloodstain.

    PubMed

    Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K

    2017-02-07

    The development of novel methods for forensic science is a constantly growing area of modern analytical chemistry. Raman spectroscopy is one of a few analytical techniques capable of nondestructive and nearly instantaneous analysis of a wide variety of forensic evidence, including body fluid stains, at the scene of a crime. In this proof-of-concept study, Raman microspectroscopy was utilized for gender identification based on dry bloodstains. Raman spectra were acquired in mapping mode from multiple spots on a bloodstain to account for intrinsic sample heterogeneity. The obtained Raman spectroscopic data showed highly similar spectroscopic features for female and male blood samples. Nevertheless, support vector machine (SVM) and artificial neural network (ANN) statistical methods applied to the spectroscopic data allowed for differentiating between male and female bloodstains with high confidence. More specifically, the statistical approach based on a genetic algorithm (GA) coupled with an ANN classification showed approximately 98% gender differentiation accuracy for individual bloodstains. These results demonstrate the great potential of the developed method for forensic applications, although more work is needed for method validation. When this method is fully developed, a portable Raman instrument could be used for the in-field identification of traces of body fluids and to obtain phenotypic information about the donor, including gender and race, as well as for the analysis of a variety of other types of forensic evidence.

  19. Improving serum calcium test ordering according to a decision algorithm.

    PubMed

    Faria, Daniel K; Taniguchi, Leandro U; Fonseca, Luiz A M; Ferreira-Junior, Mario; Aguiar, Francisco J B; Lichtenstein, Arnaldo; Sumita, Nairo M; Duarte, Alberto J S; Sales, Maria M

    2018-05-18

    To detect differences in the pattern of serum calcium test ordering before and after the implementation of a decision algorithm. We studied patients admitted to an internal medicine ward of a university hospital in April 2013 and April 2016. Patients were classified as critical or non-critical on the day each test was performed. Adequacy of ordering was defined according to adherence to a decision algorithm implemented in 2014. Total and ionised calcium tests per patient-day of hospitalisation decreased significantly after the algorithm implementation, and duplication of tests (total and ionised calcium measured in the same blood sample) was reduced by 49%. Overall adequacy of ionised calcium determinations increased by 23% (P=0.0001), owing to the increase in the adequacy of ionised calcium ordering in non-critical conditions. A decision algorithm can be a useful educational tool to improve the adequacy of serum calcium test ordering.

  20. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  1. The Data Reduction Pipeline for The SDSS-IV Manga IFU Galaxy Survey

    DOE PAGES

    Law, David R.; Cherinka, Brian; Yan, Renbin; ...

    2016-09-12

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  2. THE DATA REDUCTION PIPELINE FOR THE SDSS-IV MaNGA IFU GALAXY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, David R.; Cherinka, Brian; Yan, Renbin

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  3. Optimal estimation retrievals of the atmospheric structure and composition of HD 189733b from secondary eclipse spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, J.-M.; Fletcher, L. N.; Irwin, P. G. J.

    2012-02-01

    Recent spectroscopic observations of transiting hot Jupiters have permitted the derivation of the thermal structure and molecular abundances of H2O, CO2, CO and CH4 in these extreme atmospheres. Here, for the first time, we apply the technique of optimal estimation to determine the thermal structure and composition of an exoplanet by solving the inverse problem. The development of a suite of radiative transfer and retrieval tools for exoplanet atmospheres is described, building upon a retrieval algorithm which is extensively used in the study of our own Solar system. First, we discuss the plausibility of detection of different molecules in the dayside atmosphere of HD 189733b and the best-fitting spectrum retrieved from all publicly available sets of secondary eclipse observations between 1.45 and 24 μm. Additionally, we use contribution functions to assess the vertical sensitivity of the emission spectrum to temperatures and molecular composition. Over the altitudes probed by the contribution functions, the retrieved thermal structure shows an isothermal upper atmosphere overlying a deeper adiabatic layer (temperature decreasing with altitude), which is consistent with previously reported dynamical and observational results. The formal uncertainties on retrieved parameters are estimated conservatively using an analysis of the cross-correlation functions and the degeneracy between different atmospheric properties. The formal solution of the inverse problem suggests that the uncertainties on retrieved parameters are larger than suggested in previous studies, and that the presence of CO and CH4 is only marginally supported by the available data. Nevertheless, by including as broad a wavelength range as possible in the retrieval, we demonstrate that available spectra of HD 189733b can constrain a family of potential solutions for the atmospheric structure.

  4. The Data Reduction Pipeline for the SDSS-IV MaNGA IFU Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Law, David R.; Cherinka, Brian; Yan, Renbin; Andrews, Brett H.; Bershady, Matthew A.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Bolton, Adam S.; Brownstein, Joel R.; Bundy, Kevin; Chen, Yanmei; Drory, Niv; D'Souza, Richard; Fu, Hai; Jones, Amy; Kauffmann, Guinevere; MacDonald, Nicholas; Masters, Karen L.; Newman, Jeffrey A.; Parejko, John K.; Sánchez-Gallego, José R.; Sánchez, Sebastian F.; Schlegel, David J.; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Zhang, Kai

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec^2 per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec^-2 in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s^-1 rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s^-1.

  5. Pseudo-time algorithms for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1986-01-01

    A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization depends only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm, which permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
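
    The flavor of the approach can be sketched on a 1-D convection-diffusion model problem: march to steady state with a multistage Runge-Kutta scheme whose step comes from the convective CFL condition. The time renormalization that, in the actual method, removes the viscous step limit is not reproduced here; the coefficients below are chosen mild enough that the plain march remains stable.

```python
import numpy as np

n, a, nu = 101, 1.0, 0.005                 # grid points, convection, viscosity
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.zeros(n)
u[0] = 1.0                                 # Dirichlet boundaries: u(0)=1, u(1)=0

def rhs(u):
    # Semi-discrete residual of u_t + a u_x = nu u_xx (upwind convection).
    r = np.zeros_like(u)
    r[1:-1] = (-a * (u[1:-1] - u[:-2]) / dx
               + nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2)
    return r

dt = 0.5 * dx / a                          # step set by the inviscid CFL condition
for it in range(20000):
    u0 = u.copy()
    for alpha in (0.25, 1.0 / 3.0, 0.5, 1.0):   # 4-stage Runge-Kutta march
        u = u0 + alpha * dt * rhs(u)
    res = np.max(np.abs(rhs(u)))
    if res < 1e-8:
        break
print("residual %.1e after %d pseudo-time steps" % (res, it + 1))
```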

  6. Noniterative estimation of a nonlinear parameter

    NASA Technical Reports Server (NTRS)

    Bergstroem, A.

    1973-01-01

    An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem AX ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.

  7. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, existing algorithms are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (the CAD algorithms build on a general-purpose platform for portable parallel programming called CARM, and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described in detail, including the algorithm, its implementation, and its performance on a range of parallel machines. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for the other applications developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  8. Computer simulation of a pilot in V/STOL aircraft control loops

    NASA Technical Reports Server (NTRS)

    Vogt, William G.; Mickle, Marlin H.; Zipf, Mark E.; Kucuk, Senol

    1989-01-01

    The objective was to develop a computerized adaptive pilot model for the computer model of the research aircraft, the Harrier II AV-8B V/STOL, with special emphasis on propulsion control. Two versions of the adaptive pilot are given. The first, called the Adaptive Control Model (ACM) of a pilot, includes a parameter estimation algorithm for the parameters of the aircraft and an adaptation scheme based on the root locus of the poles of the pilot-controlled aircraft. The second, called the Optimal Control Model (OCM) of the pilot, includes an adaptation algorithm and an optimal control algorithm. These computer simulations were developed as part of the ongoing research program in pilot model simulation supported by NASA Lewis from April 1, 1985 to August 30, 1986 under NASA Grant NAG 3-606 and from September 1, 1986 through November 30, 1988 under NASA Grant NAG 3-729. Once installed, these pilot models permitted the computer simulation to close all of the control loops normally closed by a pilot actually manipulating the control variables. The current version has permitted a baseline comparison of various qualitative and quantitative performance indices for propulsion control, the control loops, and the workload on the pilot. Actual data for an aircraft flown by a human pilot, furnished by NASA, were compared to the outputs of the computerized pilot, and the comparison was found to be favorable.

  9. Automatic localization of landmark sets in head CT images with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2016-03-01

    Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.

  10. Diagnosing added value of convection-permitting regional models using precipitation event identification and tracking

    NASA Astrophysics Data System (ADS)

    Chang, W.; Wang, J.; Marohnic, J.; Kotamarthi, V. R.; Moyer, E. J.

    2017-12-01

    We use a novel rainstorm identification and tracking algorithm (Chang et al 2016) to evaluate how much resolved convection improves the fidelity with which high-resolution regional simulations capture precipitation characteristics. The identification and tracking algorithm allocates all precipitation to individual rainstorms, including low-intensity events with complicated features, and allows us to decompose changes or biases in total mean precipitation into their causes: event size, intensity, number, and duration. It permits a lower tracking threshold, so it captures nearly all rainfall, and it improves tracking, so that events that are clearly meteorologically related are followed across lifespans of up to days. We evaluate a series of dynamically downscaled simulations of the summertime United States at 12 and 4 km under different model configurations, and find that resolved convection offers the largest gains in reducing biases in precipitation characteristics, especially in event size. Simulations with parametrized convection produce event sizes 80-220% too large in extent; with resolved convection the bias is reduced to 30%. The identification and tracking algorithm also allows us to demonstrate that the diurnal cycle in rainfall stems not from temporal variation in the production of new events but from diurnal fluctuations in rainfall from existing events. We show further that model errors in the diurnal cycle are best represented as additive offsets that differ by time of day, and again that convection-permitting simulations are most efficient in reducing these additive biases.

  11. Density‐weighted concentric circle trajectories for high resolution brain magnetic resonance spectroscopic imaging at 7T

    PubMed Central

    Hingerl, Lukas; Moser, Philipp; Považan, Michal; Hangel, Gilbert; Heckova, Eva; Gruber, Stephan; Trattnig, Siegfried; Strasser, Bernhard

    2017-01-01

    Purpose Full-slice magnetic resonance spectroscopic imaging at ≥7 T is especially vulnerable to lipid contaminations arising from regions close to the skull. This contamination can be mitigated by improving the point spread function via higher spatial resolution sampling and k-space filtering, but this prolongs scan times and reduces the signal-to-noise ratio (SNR) efficiency. Currently applied parallel imaging methods accelerate magnetic resonance spectroscopic imaging scans at 7T, but increase lipid artifacts and lower SNR-efficiency further. In this study, we propose an SNR-efficient spatial-spectral sampling scheme using concentric circle echo planar trajectories (CONCEPT), which was adapted to intrinsically acquire a Hamming-weighted k-space, thus termed density-weighted-CONCEPT. This minimizes voxel bleeding, while preserving an optimal SNR. Theory and Methods Trajectories were theoretically derived and verified in phantoms as well as in the human brain via measurements of five volunteers (single-slice, field-of-view 220 × 220 mm^2, matrix 64 × 64, scan time 6 min) with free induction decay magnetic resonance spectroscopic imaging. Density-weighted-CONCEPT was compared to (a) the originally proposed CONCEPT with equidistant circles (here termed e-CONCEPT), (b) elliptical phase-encoding, and (c) 5-fold Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration accelerated elliptical phase-encoding. Results By intrinsically sampling a Hamming-weighted k-space, density-weighted-CONCEPT removed Gibbs-ringing artifacts and had in vivo +9.5%, +24.4%, and +39.7% higher SNR than e-CONCEPT, elliptical phase-encoding, and the Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration accelerated elliptical phase-encoding (all P < 0.05), respectively, which lead to improved metabolic maps. Conclusion Density-weighted-CONCEPT provides clinically attractive full-slice high-resolution magnetic resonance spectroscopic imaging with optimal SNR at 7T. Magn Reson Med 79:2874-2885, 2018. PMID:29106742

  12. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishonio, D.; Heyman, J. S.

    1985-01-01

    A numerical algorithm is described that enables the correction of energy shadowing during the ultrasonic testing of bulk materials. In the conventional method, an ultrasonic transducer transmits sound waves into a material that is immersed in water so that discontinuities such as defects can be revealed when the waves are reflected and then detected and displayed graphically. Since a defect that lies behind another defect is shadowed in that it receives less energy, the conventional method has a major drawback. The algorithm normalizes the energy of the incoming wave by measuring the energy of the waves reflected off the water/air interface. The algorithm is fast and simple enough to be adopted for real time applications in industry. Images of material defects with the shadowing corrections permit more quantitative interpretation of the material state.
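
    The normalization idea reduces to a few lines: gate each A-scan around the defect echo and around the reference (water/air) reflection, then divide the defect-echo energy by the reference energy so that shadowed scans are rescaled. The gates, echo shapes, and planted shadow below are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_scans, n_samples = 64, 1024
ascans = 0.01 * rng.normal(size=(n_scans, n_samples))   # noise floor
ascans[:, 500:520] += 0.5        # defect echo
ascans[:, 900:940] += 1.0        # water/air back-reflection (reference)
ascans[20:30, 900:940] -= 0.4    # shadowed scans: weaker reference return

defect_gate = slice(480, 560)
reference_gate = slice(880, 960)

# Energy in each gate, per scan line.
defect_energy = (ascans[:, defect_gate] ** 2).sum(axis=1)
reference_energy = (ascans[:, reference_gate] ** 2).sum(axis=1)

# Shadowing correction: normalize by the reference interface energy.
corrected = defect_energy / reference_energy
print("uncorrected vs corrected (shadowed scan 25): %.3f vs %.3f"
      % (defect_energy[25], corrected[25]))
```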

  13. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  14. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
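
    A minimal diva-style computation is sketched below, with a random division point standing in for nondeterminism. Because the combining operator is associative, every choice of division points returns the same value, which is exactly the property the record connects to correctness; the summation example is mine, not the paper's.

```python
import random

def diva_sum(v, lo, hi):
    # Divide-and-conquer reduction over v[lo:hi]: the division point is chosen
    # nondeterministically before any computation, and vector components are
    # accessed only when a slice has unit length.
    if hi - lo == 1:
        return v[lo]
    mid = random.randint(lo + 1, hi - 1)   # the nondeterministic divide
    return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

data = list(range(1, 101))
print(diva_sum(data, 0, len(data)))        # always 5050, whatever the divisions
```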

  15. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether a signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.

  16. FFT applications to plane-polar near-field antenna measurements

    NASA Technical Reports Server (NTRS)

    Gatti, Mark S.; Rahmat-Samii, Yahya

    1988-01-01

    The four-point bivariate Lagrange interpolation algorithm was applied to near-field antenna data measured in a plane-polar facility. The results were sufficiently accurate to permit the use of the FFT (fast Fourier transform) algorithm to calculate the far-field patterns of the antenna. Good agreement was obtained between the far-field patterns as calculated by the Jacobi-Bessel and the FFT algorithms. The significant advantage of using the FFT is in the calculation of the principal plane cuts, which may be made very quickly. Also, the application of the FFT algorithm directly to the near-field data was used to perform surface holographic diagnosis of a reflector antenna. The effects due to the focusing of the emergent beam from the reflector, as well as the effects of the information in the wide-angle regions, are shown. The use of the plane-polar near-field antenna test range has therefore been expanded to include these useful FFT applications.
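
    The resampling step can be sketched as follows: near-field samples on a plane-polar grid are interpolated onto a Cartesian grid (plain bilinear weights here, standing in for the four-point bivariate Lagrange formula) so that an ordinary 2-D FFT yields the far-field pattern. The aperture field is a toy model, and no probe correction or phase handling is attempted.

```python
import numpy as np

# Plane-polar near-field samples (toy Gaussian aperture field).
n_r, n_phi = 64, 128
r = np.linspace(0.0, 1.0, n_r)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
field = np.outer(np.exp(-4.0 * r**2), np.ones(n_phi))

# Cartesian target grid expressed in polar coordinates.
nx = 128
xy = np.linspace(-1.0, 1.0, nx)
X, Y = np.meshgrid(xy, xy)
R = np.hypot(X, Y)
PHI = np.mod(np.arctan2(Y, X), 2.0 * np.pi)

# Four surrounding polar samples -> bilinear weights.
ir = np.clip((R / r[-1]) * (n_r - 1), 0, n_r - 2)
ip = (PHI / (2.0 * np.pi)) * n_phi
i0, j0 = ir.astype(int), ip.astype(int) % n_phi
fr, fp = ir - i0, ip - j0
j1 = (j0 + 1) % n_phi

cart = ((1 - fr) * (1 - fp) * field[i0, j0] + fr * (1 - fp) * field[i0 + 1, j0]
        + (1 - fr) * fp * field[i0, j1] + fr * fp * field[i0 + 1, j1])
cart[R > r[-1]] = 0.0                      # outside the measured disc

far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(cart)))
print("peak far-field magnitude: %.1f" % np.abs(far_field).max())
```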

  17. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.

  18. An Aircraft Separation Algorithm with Feedback and Perturbation

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    A separation algorithm is a set of rules that tell aircraft how to maneuver in order to maintain a minimum distance between them. This paper investigates demonstrating by simulation that separation algorithms satisfy the FAA requirement on the occurrence of incidents. Any demonstration that a separation algorithm, or any other aspect of flight, satisfies the FAA requirement is challenging because of the stringent nature of the requirement and the complexity of airspace operations. The paper begins with a probability and statistical analysis of both the FAA requirement and of demonstrating compliance by a Monte Carlo approach. It considers the geometry of maintaining separation when one plane must change its flight path. It then develops a simple feedback control law that guides the planes on their paths. The presence of feedback control permits the introduction of perturbations, and the stochastic nature of the chosen perturbation is examined. The simulation program is described. This paper is an early effort in the realistic demonstration of a stringent requirement; much remains to be done.
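
    A toy version of such a Monte Carlo demonstration is sketched below: a proportional feedback law steers the maneuvering aircraft toward a lateral offset while random perturbations act on it, and the fraction of encounters whose minimum separation falls below the requirement is counted. All numbers are illustrative, not FAA parameters, and a realistic demonstration would need far more trials and validated dynamics.

```python
import numpy as np

rng = np.random.default_rng(6)
n_runs, n_steps, dt = 5000, 300, 1.0        # trials, steps, seconds
v_close = 0.15                              # along-track closing speed, nmi/s
required, offset, k = 5.0, 6.0, 0.05        # min sep (nmi), commanded offset, gain

violations = 0
for _ in range(n_runs):
    x = -0.5 * v_close * n_steps * dt       # start so the crossing is mid-run
    y, min_sep = 0.0, np.inf
    for _ in range(n_steps):
        # proportional feedback toward the offset + random perturbation
        y += (k * (offset - y) + 0.3 * rng.normal()) * dt
        x += v_close * dt
        min_sep = min(min_sep, np.hypot(x, y))
    violations += min_sep < required

print("estimated violation probability: %.1e" % (violations / n_runs))
```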

  19. A Two-Dimensional Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.

  20. DES Science Portal: Computing Photometric Redshifts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gschwend, Julia

    An important challenge facing photometric surveys for cosmological purposes, such as the Dark Energy Survey (DES), is the need to produce reliable photometric redshifts (photo-z). The choice of adequate algorithms and configurations and the maintenance of an up-to-date spectroscopic database to build training sets, for example, are challenging tasks when dealing with large amounts of data that are regularly updated and constantly growing. In this paper, we present the first of a series of tools developed by DES, provided as part of the DES Science Portal, an integrated web-based data portal developed to facilitate the scientific analysis of the data while ensuring the reproducibility of the analysis. We present the DES Science Portal photometric redshift tools, from the creation of a spectroscopic sample and the training of the neural network photo-z codes to the final estimation of photo-zs for a large photometric catalog. We illustrate this operation by calculating well-calibrated photo-zs for a galaxy sample extracted from the DES first year (Y1A1) data. The series of processes mentioned above is run entirely within the Portal environment, which automatically produces validation metrics and maintains the provenance between the different steps. This system allows us to fine-tune the many steps involved in the process of calculating photo-zs, making sure that we do not lose the information on the configurations and inputs of the previous processes. By matching the DES Y1A1 photometry to a spectroscopic sample, we define different training sets that we use to feed the photo-z algorithms already installed at the Portal. Finally, we validate the results under several conditions, including the case of a sample limited to i<22.5 with color properties close to the full DES Y1A1 photometric data. In this way we compare the performance of multiple methods and training configurations. The infrastructure presented here is an efficient way to test several methods of calculating photo-zs and to use them to create different catalogs for portal science workflows.
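
    The central training step can be sketched with any off-the-shelf regressor: fit a neural network on photometry matched to spectroscopic redshifts, then evaluate the standard σ_z/(1+z) scatter on held-out objects. The synthetic magnitudes and redshift-color relation below are stand-ins for the matched DES sample, and the network is far smaller than production photo-z codes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 4000
z = rng.uniform(0.1, 1.2, n)                        # "spectroscopic" redshifts
# Toy griz magnitudes with a simple redshift-color relation plus noise.
mags = 20.0 + np.outer(z, [1.0, 0.6, 0.3, 0.1]) + 0.05 * rng.normal(size=(n, 4))

X_train, X_test, z_train, z_test = train_test_split(mags, z, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_train, z_train)

z_phot = model.predict(X_test)
sigma = np.std((z_phot - z_test) / (1.0 + z_test))  # standard photo-z metric
print("sigma_z / (1+z) = %.3f" % sigma)
```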

  1. Raman Spectroscopic Analysis of Fingernail Clippings Can Help Differentiate Between Postmenopausal Women Who Have and Have Not Suffered a Fracture

    PubMed Central

    Beattie, James R.; Cummins, Niamh M.; Caraher, Clare; O’Driscoll, Olive M.; Bansal, Aruna T.; Eastell, Richard; Ralston, Stuart H.; Stone, Michael D.; Pearson, Gill; Towler, Mark R.

    2016-01-01

    Raman spectroscopy was applied to nail clippings from 633 postmenopausal British and Irish women, from six clinical sites, of whom 42% had experienced a fragility fracture. The objective was to build a prediction algorithm for fracture using data from four sites (known as the calibration set) and test its performance using data from the other two sites (known as the validation set). Results from the validation set showed that a novel algorithm, combining spectroscopy data with clinical data, provided area under the curve (AUC) of 74% compared to an AUC of 60% from a reduced QFracture score (a clinically accepted risk calculator) and 61% from the dual-energy X-ray absorptiometry T-score, which is in current use for the diagnosis of osteoporosis. Raman spectroscopy should be investigated further as a noninvasive tool for the early detection of enhanced risk of fragility fracture. PMID:27429561

  2. An extension to the Chahine method of inverting the radiative transfer equation. [application to ozone distribution in atmosphere

    NASA Technical Reports Server (NTRS)

    Twomey, S.; Herman, B.; Rabinoff, R.

    1977-01-01

    An extension of the Chahine relaxation method (1970) for inverting the radiative transfer equation is presented. This method is superior to the original in that it takes into account in a realistic manner the shape of the kernel function, and its extension to nonlinear systems is much more straightforward. A comparison of the new method with a matrix method due to Twomey (1965), in a problem involving inference of the vertical distribution of ozone from spectroscopic measurements in the near ultraviolet, indicates that the new method remains stable with errors in the input data of up to 4%, whereas the matrix method breaks down at these levels. The problem of non-uniqueness of the solution, which is a property of the system of equations rather than of any particular algorithm for solving them, remains, although it takes on slightly different forms for the two algorithms.

  3. Spatial Data Structures for Robotic Vehicle Route Planning

    DTIC Science & Technology

    1988-12-01

    goal will be realized in an intelligent Spatial Data Structure Development System (SDSDS) intended for use by Terrain Analysis applications...from the user the details of representation and to permit the infrastructure itself to decide which representations will be most efficient or effective ...to intelligently predict performance of algorithmic sequences and thereby optimize the application (within the accuracy of the prediction models). The

  4. An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.

  5. Effectiveness and cost-effectiveness of a cardiovascular risk prediction algorithm for people with severe mental illness (PRIMROSE).

    PubMed

    Zomer, Ella; Osborn, David; Nazareth, Irwin; Blackburn, Ruth; Burton, Alexandra; Hardoon, Sarah; Holt, Richard Ian Gregory; King, Michael; Marston, Louise; Morris, Stephen; Omar, Rumana; Petersen, Irene; Walters, Kate; Hunter, Rachael Maree

    2017-09-05

    To determine the cost-effectiveness of two bespoke severe mental illness (SMI)-specific risk algorithms compared with standard risk algorithms for primary cardiovascular disease (CVD) prevention in those with SMI. Primary care setting in the UK. The analysis was from the National Health Service perspective. 1000 individuals with SMI from The Health Improvement Network Database, aged 30-74 years and without existing CVD, populated the model. Four cardiovascular risk algorithms were assessed: (1) general population lipid, (2) general population body mass index (BMI), (3) SMI-specific lipid and (4) SMI-specific BMI, compared against no algorithm. At baseline, each cardiovascular risk algorithm was applied, and those considered high risk (>10%) were assumed to be prescribed statin therapy while others received usual care. Quality-adjusted life years (QALYs) and costs were accrued for each algorithm, including no algorithm, and cost-effectiveness was calculated using the net monetary benefit (NMB) approach. Deterministic and probabilistic sensitivity analyses were performed to test the assumptions made and the uncertainty around parameter estimates. The SMI-specific BMI algorithm had the highest NMB, resulting in 15 additional QALYs and a cost saving of approximately £53 000 per 1000 patients with SMI over 10 years, followed by the general population lipid algorithm (13 additional QALYs and a cost saving of £46 000). The general population lipid and SMI-specific BMI algorithms performed equally well. The ease and acceptability of use of an SMI-specific BMI algorithm (blood tests not required) makes it an attractive algorithm to implement in clinical settings.
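
    The net monetary benefit calculation itself is a one-liner, NMB = λ × QALYs gained − incremental cost. The sketch below applies it to the abstract's headline figures, under an assumed willingness-to-pay threshold of £20 000 per QALY (a commonly used UK value; the paper's own threshold may differ).

```python
# Net monetary benefit sketch: NMB = lambda * QALYs_gained - incremental_cost.
# The £20,000/QALY threshold is an assumption (a common UK value), not taken
# from the paper; QALY and cost figures echo the abstract's headline numbers.
wtp = 20_000.0                      # willingness to pay per QALY (GBP)

algorithms = {
    "SMI-specific BMI":         {"qalys": 15.0, "cost": -53_000.0},
    "general population lipid": {"qalys": 13.0, "cost": -46_000.0},
}
for name, res in algorithms.items():
    nmb = wtp * res["qalys"] - res["cost"]   # a cost saving enters as negative cost
    print(f"{name}: NMB per 1000 patients = £{nmb:,.0f}")
```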

  6. Handling Different Spatial Resolutions in Image Fusion by Multivariate Curve Resolution-Alternating Least Squares for Incomplete Image Multisets.

    PubMed

    Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna

    2018-06-05

    Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet joining images acquired with different spectroscopic platforms is complex because of differences in sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest-resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution while ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired at the highest spatial resolution. As a result, the constituents of the analyzed sample are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and by their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of a pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
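
    Stripped of the incomplete-multiset bookkeeping that is the paper's contribution, the underlying MCR-ALS loop alternates two constrained least-squares solves of D ≈ C Sᵀ. A bare-bones sketch on simulated data, with nonnegativity enforced by clipping, is shown below.

```python
# Bare-bones MCR-ALS sketch: alternately solve D ~= C @ S.T for C and S,
# clipping to enforce nonnegativity. The paper's incomplete-multiset handling
# (fusing images of different spatial resolution) is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_channels, n_components = 500, 120, 3

C_true = rng.random((n_pixels, n_components))        # concentration maps
S_true = rng.random((n_channels, n_components))      # pure spectra
D = C_true @ S_true.T + 0.01 * rng.normal(size=(n_pixels, n_channels))

C = rng.random((n_pixels, n_components))             # random initial guess
for _ in range(100):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

lack_of_fit = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative residual = {lack_of_fit:.3f}")
```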

  7. Improved method to visualize lipid distribution within arterial vessel walls by 1.7 μm spectroscopic spectral-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hirano, Mitsuharu; Tonosaki, Shozo; Ueno, Takahiro; Tanaka, Masato; Hasegawa, Takemi

    2014-02-01

    We report an improved method to visualize lipid distribution in the axial and lateral directions within arterial vessel walls by spectroscopic spectral-domain Optical Coherence Tomography (OCT) at 1.7 μm wavelength, for identification of lipid-rich plaque that is suspected to cause coronary events. In our previous method, an extended InGaAs-based line camera detects an OCT interferometric spectrum from 1607 to 1766 nm, which is then divided into twenty subbands, and an A-scan OCT profile is calculated for each subband, resulting in a tomographic spectrum. This tomographic spectrum is decomposed into a lipid spectrum having an attenuation peak at 1730 nm and a non-lipid spectrum independent of wavelength, and the weight of each spectrum, that is, the lipid and non-lipid scores, is calculated. In this paper, we present an improved algorithm in which we combine the lipid score and the non-lipid score to derive a corrected lipid score. We have found that the corrected lipid score is more robust than the raw lipid score against false positives caused by abrupt changes in reflectivity at the vessel surface. In addition, we have optimized the spatial smoothing filter and reduced false positives and false negatives due to detection noise and speckle. We have verified this improved algorithm using measured data from a normal porcine coronary artery and lard as a model of lipid-rich plaque, and confirmed that both the sensitivity and the specificity for lard detection are 92%.

  8. FITspec: A New Algorithm for the Automated Fit of Synthetic Stellar Spectra for OB Stars

    NASA Astrophysics Data System (ADS)

    Fierro-Santillán, Celia R.; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago A.; Arrieta, Anabel; Arias, Lorena; Sigalotti, Leonardo Di G.

    2018-06-01

    In this paper we describe the FITspec code, a data mining tool for the automatic fitting of synthetic stellar spectra. The program uses a database of 27,000 CMFGEN models of stellar atmospheres arranged in a six-dimensional (6D) space, where each dimension corresponds to one model parameter. From these models a library of 2,835,000 synthetic spectra was generated, covering the ultraviolet, optical, and infrared regions of the electromagnetic spectrum. Using FITspec we adjust the effective temperature and the surface gravity. From the 6D array we also get the luminosity, the metallicity, and three parameters for the stellar wind: the terminal velocity (v∞), the β exponent of the velocity law, and the clumping filling factor (Fcl). Finally, the projected rotational velocity (v sin i) can be obtained from the library of stellar spectra. Validation of the algorithm was performed by analyzing the spectra of a sample of eight O-type stars taken from the IACOB spectroscopic survey of Northern Galactic OB stars. The spectral lines used for the adjustment of the analyzed stars are reproduced with good accuracy. In particular, the effective temperatures calculated with FITspec are in good agreement with those derived from spectral type and other calibrations for the same stars. The stellar luminosities and projected rotational velocities are also in good agreement with previous quantitative spectroscopic analyses in the literature. An important advantage of FITspec over traditional codes is that the time required for spectral analyses is reduced from months to a few hours.
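
    The core selection step of such a grid fit, choosing the library spectrum that minimizes chi-squared against the observation, can be sketched in a few lines (FITspec itself fits parameters hierarchically against a far larger CMFGEN-based library; all data below are placeholders).

```python
# Sketch of the grid-fitting idea: pick the library spectrum minimizing chi^2
# against the observation. FITspec itself works hierarchically on a much
# larger CMFGEN-based library; this shows only the core selection step.
import numpy as np

rng = np.random.default_rng(4)
n_models, n_wave = 1000, 2000
library = rng.random((n_models, n_wave))          # placeholder synthetic spectra
params = rng.uniform([20e3, 3.0], [50e3, 4.5], (n_models, 2))  # Teff, log g grid

truth = 700
obs = library[truth] + 0.02 * rng.normal(size=n_wave)
sigma = 0.02                                      # assumed per-pixel uncertainty

chi2 = (((library - obs) / sigma) ** 2).sum(axis=1)
best = chi2.argmin()
print("best-fit Teff, log g:", params[best], "(true index", truth, ")")
```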

  9. An approach to the analysis of SDSS spectroscopic outliers based on self-organizing maps. Designing the outlier analysis software package for the next Gaia survey

    NASA Astrophysics Data System (ADS)

    Fustes, D.; Manteiga, M.; Dafonte, C.; Arcay, B.; Ulla, A.; Smith, K.; Borrachero, R.; Sordo, R.

    2013-11-01

    Aims: A new method applied to the segmentation and further analysis of the outliers resulting from the classification of astronomical objects in large databases is discussed. The method is being used in the framework of the Gaia satellite Data Processing and Analysis Consortium (DPAC) activities to prepare automated software tools that will derive basic astrophysical information to be included in the final Gaia archive. Methods: Our algorithm has been tested by means of simulated Gaia spectrophotometry, which is based on SDSS observations and theoretical spectral libraries covering a wide sample of astronomical objects. Self-organizing map networks are used to organize the information in clusters of objects, as homogeneously as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Results: We demonstrate the usefulness of the method by analyzing the spectra that were rejected by the SDSS spectroscopic classification pipeline and thus classified as "UNKNOWN". First, our method can help distinguish between astrophysical objects and instrumental artifacts. Additionally, the application of our algorithm to SDSS objects of unknown nature has allowed us to identify classes of objects with similar astrophysical natures. The method also allows for the potential discovery of hundreds of new objects, such as white dwarfs and quasars. Therefore, the proposed method is shown to be very promising for data exploration and knowledge discovery in very large astronomical databases, such as the archive from the upcoming Gaia mission.
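
    A minimal self-organizing map training loop, of the kind underlying this approach, is sketched below on random placeholder "spectra"; the actual Gaia/SDSS pipeline adds preprocessing, larger maps, and cluster analysis on top.

```python
# Minimal self-organizing map sketch: project spectra onto a 2-D grid of
# prototype vectors (toy data; real Gaia/SDSS pipelines add much more).
import numpy as np

rng = np.random.default_rng(5)
n_obj, n_bands = 2000, 30
spectra = rng.random((n_obj, n_bands))            # placeholder SEDs

grid = 10                                         # 10 x 10 map
W = rng.random((grid, grid, n_bands))             # prototype vectors
gy, gx = np.mgrid[0:grid, 0:grid]

n_steps = 5000
for t in range(n_steps):
    x = spectra[rng.integers(n_obj)]
    d = ((W - x) ** 2).sum(axis=2)                  # distance to every unit
    by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
    lr = 0.5 * (1 - t / n_steps)                    # decaying learning rate
    radius = 3.0 * (1 - t / n_steps) + 0.5          # shrinking neighborhood
    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
    W += lr * h[:, :, None] * (x - W)               # pull neighborhood toward x

# Each object is then assigned to its best-matching unit for clustering
# and outlier segmentation.
```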

  10. Population inversions in ablation plasmas generated by intense electron beams. Final report, 1 November 1985-31 October 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilgenbach, R.M.; Kammash, T.; Brake, M.L.

    1988-11-01

    Experiments during the past three years have concerned the generation and spectroscopic study of electron beam-driven carbon plasmas in order to explore the production of optical and ultraviolet radiation from nonequilibrium populations. The output of MELBA (Michigan Electron Long Beam Accelerator) has been connected to an electron-beam diode consisting of an aluminum (or brass) cathode stalk and a carbon anode. Magnetic-field coils were designed, procured, and utilized to focus the electron beam. A side viewing port permitted spectroscopic diagnostics to view across the surface of the anode. Spectroscopic diagnosis was performed using a 1-m spectrograph capable of operation from the vacuum-ultraviolet through the visible. This spectrograph is coupled to a 1024-channel optical multichannel analyzer. Spectra taken during the initial 400-ns period of the e-beam pulse showed a low effective-charge plasma with primarily molecular components (C2, CH) as well as atomic hydrogen and singly ionized carbon (CII). When the generator pulse was crowbarred after the first 400 ns, the spectra revealed a continuation of the low-charge-state plasma. At times greater than 400 ns in non-crowbarred shots, the spectra revealed a highly ionized plasma with a very large intensity line at 2530 Angstroms due to CIV (5g-4f), and lower-intensity lines due to CIII and CII. This CIV line emission increased with time, peaking sharply between 750 and 900 ns, and decayed rapidly in less than 100 ns. Emission from these high ionization states may be due to electron beam-plasma instabilities, as this emission was accompanied by high levels of radio frequency and microwave emission.

  11. Laser diagnostics for combustion temperature and species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckbreth, A.C.

    1987-01-01

    Laser approaches to combustion diagnostics are of considerable interest due to their remote, nonintrusive and in-situ character, unlimited temperature capability, and potential for simultaneous temporal and spatial resolution. This book aims to make these powerful and important new tools in combustion research understandable. The focus of this text is on spectroscopically based, spatially precise laser techniques for temperature and chemical composition measurements in reacting and nonreacting flows. After introductory chapters providing a fundamental theoretical and experimental background, attention is directed to diagnostics based upon spontaneous Raman and Rayleigh scattering, coherent anti-Stokes Raman spectroscopy (CARS), and laser-induced fluorescence (LIF). The book concludes with a treatment of techniques which permit spatially resolved measurements over an entire two-dimensional field simultaneously.

  12. X-Ray Emission from Ultraviolet Luminous Galaxies and Lyman Break Galaxies

    NASA Technical Reports Server (NTRS)

    Hornschemeier, Ann; Ptak, A. F.; Salim, S.; Heckman, T. P.; Overzier, R.; Mallery, R.; Rich, M.; Strickland, D.; Grimes, J.

    2009-01-01

    We present results from an XMM mini-survey of GALEX-selected Ultraviolet-Luminous Galaxies (UVLGs) that appear to include an interesting subset that are analogs to the distant (3

  13. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).

  14. Time-resolved XAFS spectroscopic studies of B-H and N-H oxidative addition to transition metal catalysts relevant to hydrogen storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitterwolf, Thomas E.

    2014-12-09

    Successful catalytic dehydrogenation of aminoborane, H3NBH3, prompted questions as to the potential role of N-H oxidative addition in the mechanisms of these processes. N-H oxidative addition reactions are rare, and in all cases appear to involve initial dative bonding to the metal by the amine lone pairs followed by transfer of a proton to the basic metal. Aminoborane and its trimethylborane derivative block this mechanism and, in principle, should permit authentic N-H oxidative addition to occur. Extensive experimental work failed to confirm this hypothesis. In all cases, either B-H complexation or oxidative addition of solvent C-H bonds dominates the chemistry.

  15. Progress on laser technology for proposed space-based sodium lidar

    NASA Astrophysics Data System (ADS)

    Krainak, Michael A.; Yu, Anthony W.; Li, Steven X.; Bai, Yingxin; Numata, Kenji; Chen, Jeffrey R.; Fahey, Molly E.; Micalizzi, Frankie; Konoplev, Oleg A.; Janches, Diego; Gardner, Chester S.; Allan, Graham R.

    2018-02-01

    We propose a nadir-pointing space-based Na Doppler resonance fluorescence LIDAR on board the International Space Station (ISS). The science instrument goal is temperature and vertical wind measurements of the Earth's Mesosphere and Lower Thermosphere (MLT), the 75-115 km region, using atomic sodium as a tracer. Our instrument concept uses a high-energy laser transmitter at 589 nm and highly sensitive photon-counting detectors that permit range-resolved atmospheric-sodium-temperature profiles. The atmospheric temperature is deduced from the linewidth of the resonant fluorescence from the atomic sodium vapor D2 line as measured by our tunable laser. We are pursuing high-power laser architectures that permit limited daytime sodium lidar observations with the help of a narrow bandpass etalon filter. We discuss technology, prototypes, risks, and trades for two 589 nm laser architectures: (1) a Raman laser and (2) sum frequency generation. Laser-induced saturation of atomic sodium in the MLT region affects both sodium density and temperature measurements. We discuss the saturation impact on the laser parameters, laser architecture, and instrument trades. Off-nadir pointing from the ISS causes Doppler shifts that affect the sodium spectroscopy. We discuss laser wavelength locking, tuning, and the spectroscopic line-sampling strategy.

  16. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and works despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.
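
    As an illustration of the spectral-comparison-ratio idea (a simplification inspired by, but not identical to, NSCRAD), the sketch below compares counts in a few energy windows against the proportions expected from a benign background spectrum and reports the deviation in Poisson sigma units; the window boundaries are assumptions.

```python
# Simplified spectral-comparison-ratio anomaly score (inspired by, but not,
# the actual NSCRAD algorithm): compare counts in energy windows against the
# proportions expected from a background spectrum, in Poisson sigma units.
import numpy as np

def window_counts(spectrum, edges):
    """Sum counts in contiguous energy windows given bin-edge indices."""
    return np.array([spectrum[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(6)
background = rng.poisson(50, 1024).astype(float)       # benign reference spectrum
edges = [0, 128, 256, 512, 1024]                       # assumed window boundaries

def anomaly_sigma(measured):
    m = window_counts(measured, edges)
    b = window_counts(background, edges)
    expected = b / b.sum() * m.sum()                   # scale background to total counts
    return np.abs(m - expected) / np.sqrt(expected)    # deviation in sigma

clean = rng.poisson(50, 1024)
source = clean.copy()
source[300:340] += rng.poisson(30, 40)                 # injected peak ("threat")
print("clean :", anomaly_sigma(clean).round(1))
print("source:", anomaly_sigma(source).round(1))
```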

  17. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.

  18. Mixed-initiative control of intelligent systems

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1987-01-01

    Mixed-initiative user interfaces provide a means by which a human operator and an intelligent system may collectively share the task of deciding what to do next. Such interfaces are important to the effective utilization of real-time expert systems as assistants in the execution of critical tasks. Presented here is the Incremental Inference algorithm, a symbolic reasoning mechanism based on propositional logic and suited to the construction of mixed-initiative interfaces. The algorithm is similar in some respects to the Truth Maintenance System, but replaces the notion of 'justifications' with a notion of recency, allowing newer values to override older values yet permitting various interested parties to refresh these values as they become older and thus more vulnerable to change. A simple example of the use of the Incremental Inference algorithm is given, along with an overview of the integration of this mechanism within the SPECTRUM expert system for geological interpretation of imaging spectrometer data.
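
    The recency idea can be illustrated with a toy store in which the most recently stamped assertion wins, and any interested party may refresh a value it still believes. This is a concept sketch only, not the original implementation.

```python
# Toy illustration of recency-based overriding (a concept sketch, not the
# original Incremental Inference code): the most recently stamped assertion
# wins, and any party may refresh a value it still believes.
from itertools import count

class RecencyStore:
    def __init__(self):
        self._facts = {}          # name -> {source: (value, stamp)}
        self._clock = count()     # monotonically increasing stamp

    def assert_value(self, name, source, value):
        self._facts.setdefault(name, {})[source] = (value, next(self._clock))

    def refresh(self, name, source):
        """Re-stamp a still-believed value so it can outlive newer rivals."""
        value, _ = self._facts[name][source]
        self._facts[name][source] = (value, next(self._clock))

    def get(self, name):
        # Whoever supplied the most recent stamp wins.
        return max(self._facts[name].values(), key=lambda v: v[1])[0]

store = RecencyStore()
store.assert_value("valve_state", "operator", "open")
store.assert_value("valve_state", "sensor", "closed")
print(store.get("valve_state"))           # closed: the newer value overrides
store.refresh("valve_state", "operator")  # operator re-confirms "open"
print(store.get("valve_state"))           # open: the refreshed value is newest
```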

  19. Estimation of distances to stars with stellar parameters from LAMOST

    DOE PAGES

    Carlin, Jeffrey L.; Liu, Chao; Newberg, Heidi Jo; ...

    2015-06-05

    Here, we present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. We tailor this technique specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ~20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ~40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
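
    The essence of the estimator can be sketched as a likelihood-weighted average over an isochrone grid: weight each grid point by its match to the measured (Teff, log g, [Fe/H]), read off a posterior absolute magnitude, and invert the distance modulus. The grid values and uncertainties below are placeholders, and the luminosity-function and selection corrections described above are omitted.

```python
# Sketch of the Bayesian absolute-magnitude estimate: weight isochrone grid
# points by the likelihood of the measured (Teff, logg, [Fe/H]) and read off
# a posterior for M, hence the distance modulus. Grid values are placeholders.
import numpy as np

rng = np.random.default_rng(7)
# Toy isochrone grid: columns are Teff [K], logg, [Fe/H], absolute magnitude M.
grid = np.column_stack([
    rng.uniform(4000, 7000, 20000),
    rng.uniform(0.5, 5.0, 20000),
    rng.uniform(-2.5, 0.5, 20000),
    rng.uniform(-3.0, 8.0, 20000),
])

obs = np.array([5200.0, 4.4, -0.3])      # measured spectroscopic parameters
err = np.array([150.0, 0.2, 0.15])       # assumed LAMOST-like uncertainties

chi2 = (((grid[:, :3] - obs) / err) ** 2).sum(axis=1)
w = np.exp(-0.5 * chi2)                  # likelihood weight per grid point

M = (w * grid[:, 3]).sum() / w.sum()     # posterior-mean absolute magnitude
m_app = 14.2                             # apparent magnitude (placeholder)
dist_pc = 10 ** ((m_app - M + 5) / 5)    # inverted distance modulus
print(f"M = {M:.2f}, d = {dist_pc:.0f} pc")
```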

  1. Investigation of the electronic structure of Be2+He and Be+He, and static dipole polarisabilities of the helium atom

    NASA Astrophysics Data System (ADS)

    Dhiflaoui, J.; Bejaoui, M.; Farjallah, M.; Berriche, H.

    2018-05-01

    The potential energy and spectroscopic constants of the ground and many excited states of the Be+He van der Waals system have been investigated using a one-electron pseudo-potential approach, in which the effects of the Be2+ core and the electron-He interactions are replaced by effective potentials. Furthermore, the core-core interactions are incorporated. This permits the reduction of the number of active electrons of the Be+He van der Waals system to only one. The potential energies of the ground and excited states are therefore computed at the SCF level, with the spin-orbit interaction taken into account. The core-core interaction for the Be2+He ground state is included using accurate CCSD(T) calculations. The spectroscopic properties of the Be+He electronic states are then extracted and compared with previous theoretical and experimental studies; this comparison shows very good agreement for the ground and first excited states. Moreover, the transition dipole moment has been determined for a large and dense grid of internuclear distances, including the spin-orbit effect. In addition, a vibrational spacing analysis for the Be2+He and Be+He ground states is performed to extract the He atomic polarisability.

  2. Variable magnification variable dispersion glancing incidence imaging x-ray spectroscopic telescope

    NASA Technical Reports Server (NTRS)

    Hoover, Richard B. (Inventor)

    1991-01-01

    A variable magnification variable dispersion glancing incidence x-ray spectroscopic telescope capable of multiple high spatial resolution imaging at precise spectral lines of solar and stellar x-ray and extreme ultraviolet radiation sources includes a primary optical system which focuses the incoming radiation to a primary focus. Two or more rotatable carriers each providing a different magnification are positioned behind the primary focus at an inclination to the optical axis, each carrier carrying a series of ellipsoidal diffraction grating mirrors each having a concave surface on which the gratings are ruled and coated with a multilayer coating to reflect by diffraction a different desired wavelength. The diffraction grating mirrors of both carriers are segments of ellipsoids having a common first focus coincident with the primary focus. A contoured detector such as an x-ray sensitive photographic film is positioned at the second respective focus of each diffraction grating so that each grating may reflect the image at the first focus to the detector at the second focus. The carriers are selectively rotated to position a selected mirror for receiving radiation from the primary optical system, and at least the first carrier may be withdrawn from the path of the radiation to permit a selected grating on the second carrier to receive radiation.

  3. Variable magnification variable dispersion glancing incidence imaging x ray spectroscopic telescope

    NASA Technical Reports Server (NTRS)

    Hoover, Richard (Inventor)

    1990-01-01

    A variable magnification variable dispersion glancing incidence x ray spectroscopic telescope capable of multiple high spatial resolution imaging at precise spectral lines of solar and stellar x ray and extreme ultraviolet radiation sources includes a primary optical system which focuses the incoming radiation to a primary focus. Two or more rotatable carriers each providing a different magnification are positioned behind the primary focus at an inclination to the optical axis, each carrier carrying a series of ellipsoidal diffraction grating mirrors each having a concave surface on which the gratings are ruled and coated with a multilayer coating to reflect by diffraction a different desired wavelength. The diffraction grating mirrors of both carriers are segments of ellipsoids having a common first focus coincident with the primary focus. A contoured detector such as an x ray sensitive photographic film is positioned at the second respective focus of each diffraction grating so that each grating may reflect the image at the first focus to the detector at the second focus. The carriers are selectively rotated to position a selected mirror for receiving radiation from the primary optical system, and at least the first carrier may be withdrawn from the path of the radiation to permit a selected grating on the second carrier to receive radiation.

  4. Saturation: An efficient iteration strategy for symbolic state-space generation

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Luettgen, Gerald; Siminiceanu, Radu; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents a novel algorithm for generating state spaces of asynchronous systems using Multi-valued Decision Diagrams. In contrast to related work, the next-state function of a system is not encoded as a single Boolean function, but as cross-products of integer functions. This permits the application of various iteration strategies to build a system's state space. In particular, this paper introduces a new, elegant strategy, called saturation, and implements it in the tool SMART. On top of usually performing several orders of magnitude faster than existing BDD-based state-space generators, the algorithm's required peak memory is often close to the final memory needed for storing the overall state space.

  5. Path planning algorithms for assembly sequence planning. [in robot kinematics

    NASA Technical Reports Server (NTRS)

    Krishnan, S. S.; Sanderson, Arthur C.

    1991-01-01

    Planning for manipulation in complex environments often requires reasoning about the geometric and mechanical constraints which are posed by the task. In planning assembly operations, the automatic generation of operations sequences depends on the geometric feasibility of paths which permit parts to be joined into subassemblies. Feasible locations and collision-free paths must be present for part motions, robot and grasping motions, and fixtures. This paper describes an approach to reasoning about the feasibility of straight-line paths among three-dimensional polyhedral parts using an algebra of polyhedral cones. A second method recasts the feasibility conditions as constraints in a nonlinear optimization framework. Both algorithms have been implemented and results are presented.

  6. Underwater video enhancement using multi-camera super-resolution

    NASA Astrophysics Data System (ADS)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.

  7. Optimal space communications techniques. [discussion of video signals and delta modulation

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

    The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals having fast rise times, it was found that the DM algorithm (which permits an exponential rise for estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm which significantly reduces the ringing while not affecting the rise time is presented, along with formulas for the rise time and the settling time. Channel errors and their effect on the DM-encoded bit stream were investigated.
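
    A generic adaptive delta modulator, which grows its step while successive output bits agree and shrinks it when they alternate, exhibits both the fast tracking and the overshoot the paper sets out to suppress. The sketch below is the generic scheme, not the Song ADM's exact step rule or the proposed suppression algorithm.

```python
# Generic adaptive delta modulator sketch (not the Song ADM's exact step rule):
# the step size grows while successive output bits agree and shrinks when they
# alternate, which lets a DM track fast rises but makes it ring after a step.
import numpy as np

def adm_encode(signal, step0=0.05, k=1.5):
    est, step, last_bit = 0.0, step0, 1
    bits, estimates = [], []
    for x in signal:
        bit = 1 if x >= est else -1
        step = step * k if bit == last_bit else step / k  # adapt the step size
        est += bit * step
        last_bit = bit
        bits.append(bit)
        estimates.append(est)
    return np.array(bits), np.array(estimates)

t = np.arange(200)
pulse = np.where((t > 50) & (t < 150), 1.0, 0.0)     # arbitrary-height pulse
bits, est = adm_encode(pulse)
print("overshoot:", est.max() - 1.0)                 # ringing after the rising edge
```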

  8. Three-Dimensional Radiative Hydrodynamics for Disk Stability Simulations: A Proposed Testing Standard and New Results

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse

    2007-08-01

    Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool rapidly via convection, and it is stable against fragmentation. We find an effective α ~ 10⁻².

  9. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.

  10. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm⁻¹ bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
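
    The null-collision mechanism at the core of this approach can be sketched independently of any spectroscopic database: free paths are sampled against a constant majorant extinction coefficient, and each tentative collision is accepted as real with probability k(x)/k̂, otherwise treated as a null collision. The toy transmissivity estimate below uses an assumed absorption profile, not a real line database.

```python
# Null-collision (Woodcock) path sampling sketch: sample free paths against a
# constant majorant k_hat >= k(x); at each tentative collision, accept it as
# real with probability k(x)/k_hat, else treat it as a null collision and
# continue. The paper's spectral line sampling plugs into this same decision.
import numpy as np

rng = np.random.default_rng(8)

def k(x):
    """Spatially varying absorption coefficient (toy profile)."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * x) ** 2

k_hat = 0.9          # majorant: must bound k(x) from above everywhere

def transmissivity(L=5.0, n=20_000):
    """Monte Carlo estimate of transmissivity over a column of length L."""
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -np.log(rng.random()) / k_hat     # tentative free path
            if x > L:
                transmitted += 1                   # photon escaped the column
                break
            if rng.random() < k(x) / k_hat:
                break                              # real absorption event
            # else: null collision, keep propagating from x
    return transmitted / n

print("transmissivity ~", transmissivity())
```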

  11. An innovative application of time-domain spectroscopy on localized surface plasmon resonance sensing

    NASA Astrophysics Data System (ADS)

    Li, Meng-Chi; Chang, Ying-Feng; Wang, Huai-Yi; Lin, Yu-Xen; Kuo, Chien-Cheng; Annie Ho, Ja-An; Lee, Cheng-Chung; Su, Li-Chen

    2017-03-01

    White-light scanning interferometry (WLSI) is often used to study the surface profiles and properties of thin films because the strength of the technique lies in its ability to provide fast and high resolution measurements. An innovative attempt is made in this paper to apply WLSI as a time-domain spectroscopic system for localized surface plasmon resonance (LSPR) sensing. A WLSI-based spectrometer is constructed with a breadboard of WLSI in combination with a spectral centroid algorithm for noise reduction and performance improvement. Experimentally, the WLSI-based spectrometer exhibits a limit of detection (LOD) of 1.2 × 10⁻³ refractive index units (RIU), which is better than that obtained with a conventional UV-Vis spectrometer, by resolving the LSPR peak shift. Finally, the bio-applicability of the proposed spectrometer was investigated using the rs242557 tau gene, an Alzheimer’s and Parkinson’s disease biomarker. The LOD was calculated as 15 pM. These results demonstrate that the proposed WLSI-based spectrometer could become a sensitive time-domain spectroscopic biosensing platform.
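
    The spectral centroid step is simple enough to sketch directly: the peak position is tracked as the intensity-weighted mean wavelength over points above a baseline threshold, which shifts smoothly with sub-pixel resolution and averages down spectrometer noise. The window and baseline handling below are assumptions, not the paper's exact choices.

```python
# Spectral-centroid sketch: track an LSPR peak by the intensity-weighted mean
# wavelength over points above a baseline threshold. Threshold and peak model
# are assumptions for illustration, not the paper's exact processing.
import numpy as np

wl = np.linspace(500, 700, 400)                       # wavelength axis, nm

def lspr_peak(center, width=40.0, noise=0.01, rng=None):
    rng = rng or np.random.default_rng(9)
    return np.exp(-0.5 * ((wl - center) / width) ** 2) + noise * rng.normal(size=wl.size)

def centroid(spectrum, threshold=0.3):
    m = spectrum > threshold * spectrum.max()         # ignore baseline points
    return (wl[m] * spectrum[m]).sum() / spectrum[m].sum()

shift = centroid(lspr_peak(602.0)) - centroid(lspr_peak(600.0))
print(f"measured centroid shift = {shift:.2f} nm (true 2.00 nm)")
```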

  12. Spectroscopic ellipsometric characterization of Si/Si(1-x)Ge(x) strained-layer superlattices

    NASA Technical Reports Server (NTRS)

    Yao, H.; Woollam, J. A.; Wang, P. J.; Tejwani, M. J.; Alterovitz, S. A.

    1993-01-01

    Spectroscopic ellipsometry (SE) was employed to characterize Si/Si(1-x)Ge(x) strained-layer superlattices. An algorithm was developed, using the available optical constants measured at a number of fixed x values of Ge composition, to compute the dielectric function spectrum of Si(1-x)Ge(x) at an arbitrary x value in the spectral range 1.7 to 5.6 eV. The ellipsometrically determined superlattice thicknesses and alloy compositional fractions were in excellent agreement with results from high-resolution x-ray diffraction studies. The silicon surfaces of the superlattices were subjected to a 9:1 HF cleaning prior to the SE measurements. The HF solution removed silicon oxides on the semiconductor surface and terminated the Si surface with hydrogen-silicon bonds; these were monitored by SE measurements over a period of several weeks after the HF cleaning. An equivalent dielectric layer model was established to describe the hydrogen-terminated Si surface layer. The passivated Si surface remained unchanged for more than 2 h, and very little surface oxidation took place even over 3 to 4 days.

  13. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients.

  14. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so hardware implementation poses intriguing questions involving vision science.

  15. The Molecular Basis of Rectal Cancer

    PubMed Central

    Shiller, Michelle; Boostrom, Sarah

    2015-01-01

    The majority of rectal carcinomas are sporadic in nature, and relevant testing for driver mutations to guide therapy is important. A thorough family history is necessary and helpful in elucidating a potential hereditary predilection for a patient's carcinoma. Adequate diagnosis of a heritable tendency toward colorectal carcinoma alters the management of the patient's disease and permits the implementation of various surveillance algorithms as preventive measures. PMID:25733974

  16. On the account of gravitational perturbations in computer simulation technology of meteoroid complex formation and evolution

    NASA Astrophysics Data System (ADS)

    Kulikova, N. V.; Chepurova, V. M.

    2009-10-01

    So far we have investigated the nonperturbative dynamics of meteoroid complexes. The numerical integration of the differential equations of motion in the N-body problem by the Everhart algorithm (N=2-6), together with the introduction of intermediate hyperbolic orbits built on the basis of the generalized problem of two fixed centers, permits some gravitational perturbations to be taken into account.

  17. Hyperspectral laser-induced autofluorescence imaging of dental caries

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-01-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentine and pulp. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Laser-induced autofluorescence was shown to be a useful method for early detection of demineralization. The existing studies involved either single-point spectroscopic measurements or imaging at a single spectral band. In the case of spectroscopic measurements, very little or no spatial information is acquired, and the measured autofluorescence signal strongly depends on the position and orientation of the probe. On the other hand, single-band spectral imaging can be substantially affected by local spectral artefacts. Such effects can significantly interfere with automated methods for detection of early caries lesions. In contrast, hyperspectral imaging effectively combines the spatial information of imaging methods with the spectral information of spectroscopic methods, providing an excellent basis for the development of robust and reliable algorithms for automated classification and analysis of hard dental tissues. In this paper, we employ 405 nm laser excitation of natural caries lesions. The fluorescence signal is acquired by a state-of-the-art hyperspectral imaging system consisting of a high-resolution acousto-optic tunable filter (AOTF) and a highly sensitive scientific CMOS camera in the spectral range from 550 nm to 800 nm. The results are compared to the contrast obtained by the near-infrared hyperspectral imaging technique employed in the existing studies on early detection of dental caries.

  18. Spectroscopic optical coherence tomography for ex vivo brain tumor analysis

    NASA Astrophysics Data System (ADS)

    Lenz, Marcel; Krug, Robin; Dillmann, Christopher; Gerling, Alexandra; Gerhardt, Nils C.; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.

    2017-02-01

    In neurosurgery, precise tumor resection is essential for the subsequent recovery of the patient, since harm to nearby healthy tissue has a huge impact on quality of life after the surgery. However, so far no satisfactory methodology has been established to help the surgeon distinguish between healthy and tumor tissue during surgery. Optical Coherence Tomography (OCT) potentially enables non-contact in vivo image acquisition at penetration depths of 1-2 mm with a resolution of approximately 1-15 μm. To analyze the potential of OCT for distinguishing between brain tumors and healthy tissue, we used a commercially available Thorlabs Callisto system to measure healthy tissue and meningioma samples ex vivo. All samples were measured with the OCT system and three-dimensional datasets were generated. Afterwards the samples were sent to pathology for staining with hematoxylin and eosin and then investigated with a bright-field microscope to verify the tissue type; this is the current gold standard for ex vivo analysis. The images taken by the OCT system exhibit variations in structure for different tissue types, but these variations may not be objectively evaluated from raw OCT images. Since an automated distinction between tumor and healthy tissue would be highly desirable to guide the surgeon, we applied Spectroscopic Optical Coherence Tomography to further enhance the differences between the tissue types. Pattern recognition and machine learning algorithms were applied to classify the derived spectroscopic information. Finally, the classification results are analyzed in comparison to the histological analysis of the samples.

  19. BeerOz, a set of Matlab routines for the quantitative interpretation of spectrophotometric measurements of metal speciation in solution

    NASA Astrophysics Data System (ADS)

    Brugger, Joël

    2007-02-01

    The modelling of the speciation and mobility of metals under surface and hydrothermal conditions relies on the availability of accurate thermodynamic properties for all relevant minerals, aqueous species, gases, and surface species. Spectroscopic techniques obeying the Beer-Lambert law can be used to obtain thermodynamic properties for reactions among aqueous species (e.g., ligand substitution; protonation). BeerOz is a set of Matlab routines designed to perform both qualitative and quantitative analysis of spectroscopic data following the Beer-Lambert law. BeerOz is modular and can be customised for particular experimental strategies or for simultaneous refinement of several datasets obtained using different techniques. Distribution-of-species calculations are performed using an implementation of the EQBRM code, which allows for customised activity coefficient calculations. BeerOz also contains routines to study the n-dimensional solution space, in order to provide realistic estimates of errors and to test for the existence of multiple local minima and correlations between the different refined variables. The paper reviews the physical principles underlying the qualitative and quantitative analysis of spectroscopic data collected on aqueous speciation, in particular for studying successive ligand replacement reactions, and presents the non-linear least-squares algorithm implemented in BeerOz. The discussion is illustrated using UV-Vis spectra collected on acidic Fe(III) solutions containing varying LiCl concentrations and showing the change from the hexaaquo Fe(H2O)6^3+ complex to the tetrahedral FeCl4^- complex.
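
    The Beer-Lambert core of such an analysis is linear: with known molar absorptivity spectra stacked as columns of a matrix E, the measured absorbance A = E c l yields the concentration vector by least squares. The sketch below shows only this step; BeerOz's coupling to equilibrium calculations and nonlinear refinement of formation constants is omitted, and all spectra are placeholders.

```python
# Beer-Lambert least-squares sketch: with absorbance A = E @ c * l for known
# molar absorptivity spectra E (columns = species), the concentration vector
# follows from a linear fit. BeerOz couples this to equilibrium calculations
# and nonlinear refinement of formation constants, which is omitted here.
import numpy as np

rng = np.random.default_rng(10)
wl = np.linspace(350, 800, 300)                     # wavelength grid, nm

# Toy molar absorptivity spectra for two complexes (placeholders, not real
# Fe(III) chloride spectra).
E = np.column_stack([
    np.exp(-0.5 * ((wl - 450) / 60) ** 2),
    np.exp(-0.5 * ((wl - 620) / 50) ** 2),
])
c_true = np.array([0.8e-3, 0.3e-3])                 # concentrations, mol/L
path = 1.0                                          # cell path length, cm

A = E @ c_true * path + 1e-5 * rng.normal(size=wl.size)
c_fit, *_ = np.linalg.lstsq(E * path, A, rcond=None)
print("fitted concentrations:", c_fit)
```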

  20. Photometric and Spectroscopic Footprint Corrections in the Sloan Digital Sky Survey’s 6th Data Release

    NASA Astrophysics Data System (ADS)

    Specian, Mike A.; Szalay, Alex S.

    2016-04-01

    We identify and correct numerous errors within the photometric and spectroscopic footprints (SFs) of the Sloan Digital Sky Survey’s (SDSS) 6th data release (DR6). Within the SDSS’s boundaries hundreds of millions of objects have been detected. Yet we present evidence that the boundaries themselves contain a significant number of mistakes that are being revealed for the first time within this paper. Left unaddressed, these can introduce systematic biases into galaxy clustering statistics. Using the DR6 Main Galaxy Sample (MGS) targets as tracers, we reveal inconsistencies between the photometric and SF definitions provided in the Catalog Archive Server (CAS) and the measurements of targets therein. First, we find that 19.7 deg² of the DR6 photometric footprint are devoid of MGS targets. In volumes of radii 7 h⁻¹ Mpc, this can cause errors in the expected number of galaxies to exceed 60%. Second, we identify seven areas that were erroneously included in or excluded from the SF. Moreover, the tiling algorithm that positioned spectroscopic fibers during and between DRs caused many areas along the edge of the SF to be significantly undersampled relative to the footprint’s interior. Through our corrections, we increase the completeness by 2.2% while trimming 3.6% of the area from the existing SF. The sum total of these efforts has generated the most accurate description of the SDSS DR6 footprints ever created.

  1. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

    NASA Astrophysics Data System (ADS)

    Nilsen, Morgan; Sikström, Fredrik; Christiansson, Anna-Karin; Ancona, Antonio

    2017-11-01

    The automated laser beam butt welding process is sensitive to the positioning of the laser beam with respect to the joint, because a small offset may result in a detrimental lack of sidewall fusion. This problem is even more pronounced in the case of narrow-gap butt welding, where most commercial automatic joint tracing systems fail to detect the exact position and size of the gap. In this work, a dual vision and spectroscopic sensing approach is proposed to trace narrow-gap butt joints during laser welding. The system consists of a camera with suitable illumination and matched optical filters and a fast miniature spectrometer. An image-processing algorithm for the camera recordings has been developed in order to estimate the laser spot position relative to the joint position. The spectral emissions from the laser-induced plasma plume have been acquired by the spectrometer, and based on measurements of the intensities of selected lines of the spectrum, the electron temperature signal has been calculated and correlated to variations in process conditions. The individual performances of these two systems have been experimentally investigated and evaluated offline using data from several welding experiments, where artificial abrupt as well as gradual deviations of the laser beam from the joint were produced. Results indicate that combining the information provided by the vision and spectroscopic systems is beneficial for the development of a hybrid sensing system for joint tracing.

  2. The Optical Gravitational Lensing Experiment. Eclipsing Binary Stars in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2003-03-01

    We present the catalog of 2580 eclipsing binary stars detected in a 4.6 square degree area of the central parts of the Large Magellanic Cloud. The photometric data were collected during the second phase of the OGLE microlensing search from 1997 to 2000. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. Basic statistics of the eclipsing stars are presented, along with a list of 36 candidate detached eclipsing binaries suitable for spectroscopic study and for precise LMC distance determination. The full catalog is accessible from the OGLE Internet archive.

  3. A hybrid incremental projection method for thermal-hydraulics applications

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong

    2016-07-01

    A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  4. A hybrid incremental projection method for thermal-hydraulics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.

    In this paper, a new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya–Babuška–Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie–Chow interpolation or by using a Petrov–Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to popular scalable linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  5. A hybrid incremental projection method for thermal-hydraulics applications

    DOE PAGES

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; ...

    2016-07-01

    In this paper, a new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya–Babuška–Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie–Chow interpolation or by using a Petrov–Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to popular scalable linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  6. Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI

    PubMed Central

    Bhattacharya, Ipshita; Jacob, Mathews

    2017-01-01

    Purpose: To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts from a dual-density spiral acquisition. Methods: The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, which are support-limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction is formulated as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results: Comparisons of the scheme against a dual-resolution reconstruction algorithm on numerical phantom and in vivo datasets demonstrate its ability to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover the metabolite maps from lipid-unsuppressed datasets with an echo time (TE) of 55 ms. Conclusion: The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. This algorithm would be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
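
    The numerical workhorse inside nuclear-norm minimization schemes of this kind is singular value soft-thresholding, the proximal operator of the nuclear norm; iterative reweighted variants re-apply such a step with weights adapted to the current singular values. A minimal Python sketch, with the threshold tau as an illustrative parameter:

      import numpy as np

      def svt(X, tau):
          """Singular value soft-thresholding: one proximal step of the
          nuclear norm (illustrative kernel, not the paper's full solver)."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt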

  7. High accuracy transit photometry of the planet OGLE-TR-113b with a new deconvolution-based method

    NASA Astrophysics Data System (ADS)

    Gillon, M.; Pont, F.; Moutou, C.; Bouchy, F.; Courbin, F.; Sohy, S.; Magain, P.

    2006-11-01

    A high accuracy photometry algorithm is needed to take full advantage of the potential of the transit method for the characterization of exoplanets, especially in deep crowded fields. It has to reduce to the lowest possible level the negative influence of systematic effects on the photometric accuracy. It should also be able to cope with a high level of crowding and with large-scale variations of the spatial resolution from one image to another. A recent deconvolution-based photometry algorithm fulfills all these requirements, and it also increases the resolution of astronomical images, which is an important advantage for the detection of blends and the discrimination of false positives in transit photometry. We made some changes to this algorithm to optimize it for transit photometry and used it to reduce NTT/SUSI2 observations of two transits of OGLE-TR-113b. This reduction has led to two very high precision transit light curves with a low level of systematic residuals, which, used together with former photometric and spectroscopic measurements, yield new stellar and planetary parameters in excellent agreement with previous ones, but significantly more precise.

  8. Applications of machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet stars

    NASA Astrophysics Data System (ADS)

    Morello, Giuseppe; Morris, P. W.; Van Dyk, S. D.; Marston, A. P.; Mauerhan, J. C.

    2018-01-01

    We have investigated and applied machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet (WR) candidates. Objects taken from the Spitzer Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) catalogue of infrared objects in the Galactic plane can be classified into different stellar populations based on the colours inferred from their broad-band photometric magnitudes [J, H and Ks from the 2 Micron All Sky Survey (2MASS), and the four Spitzer/IRAC bands]. The algorithms tested in this pilot study are variants of the k-nearest neighbours approach, which is ideal for exploratory studies of classification problems where interrelations between variables and classes are complicated. The aims of this study are (1) to provide an automated tool to select reliable WR candidates and potentially other classes of objects, (2) to measure the efficiency of infrared colour selection at performing these tasks and (3) to lay the groundwork for statistically inferring the total number of WR stars in our Galaxy. We report the performance results obtained over a set of known objects and selected candidates for which we have carried out follow-up spectroscopic observations, and confirm the discovery of four new WR stars.
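
    As a rough illustration of the classification stage, the sketch below trains a k-nearest-neighbours classifier on colour features with scikit-learn. The synthetic arrays stand in for real 2MASS/IRAC colours and WR labels, which are not reproduced here; all parameters are illustrative.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      # stand-ins for (n_objects, n_colours) photometric colours, e.g.
      # J-H, H-Ks, Ks-[3.6], ... and for 1 = WR star, 0 = other population
      colours = rng.normal(size=(1000, 6))
      labels = (colours[:, 0] + colours[:, 3] > 1.5).astype(int)

      knn = KNeighborsClassifier(n_neighbors=10, weights="distance")
      print(cross_val_score(knn, colours, labels, cv=5, scoring="f1").mean())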

  9. Quantum plug n’ play: modular computation in the quantum regime

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Modi, Kavan; Vedral, Vlatko; Gu, Mile

    2018-01-01

    Classical computation is modular. It exploits plug n’ play architectures which allow us to use pre-fabricated circuits without knowing their construction. This bestows advantages such as allowing parts of the computational process to be outsourced, and permitting individual circuit components to be exchanged and upgraded. Here, we introduce a formal framework to describe modularity in the quantum regime. We demonstrate a ‘no-go’ theorem, stipulating that it is not always possible to make use of quantum circuits without knowing their construction. This has significant consequences for quantum algorithms, forcing the circuit implementation of certain quantum algorithms to be rebuilt almost entirely from scratch after incremental changes in the problem—such as changing the number being factored in Shor’s algorithm. We develop a workaround capable of restoring modularity, and apply it to design a modular version of Shor’s algorithm that exhibits increased versatility and reduced complexity. In doing so we pave the way to a realistic framework whereby ‘quantum chips’ and remote servers can be invoked (or assembled) to implement various parts of a more complex quantum computation.

  10. Particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.

    2015-06-01

    Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Although various approaches have been proposed in previous works, none of them seemed able to fully conserve charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by solving a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In cases where the number of particles would otherwise increase exponentially in the simulation box, dynamical merging permits a considerable speedup and significant memory savings without which the simulations would be impossible to perform.
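
    A simplified, non-relativistic analogue of the merge step (the paper treats relativistic and massless particles) shows how the conservation constraints fix the two new macro-particles: the mean momentum carries the total momentum, and the leftover kinetic energy fixes the momentum spread. Python sketch, with the spread direction chosen arbitrarily for illustration:

      import numpy as np

      def merge_to_two(weights, momenta, mass):
          """Merge N non-relativistic macro-particles into two of equal weight,
          conserving total weight (charge), momentum, and kinetic energy.
          Illustrative analogue only, not the published relativistic scheme."""
          w = weights.sum()
          P = (weights[:, None] * momenta).sum(axis=0)             # total momentum
          E = (weights * (momenta**2).sum(axis=1)).sum() / (2 * mass)  # total KE
          p_mean = P / w
          # energy left after the mean drift determines the spread |delta|
          delta2 = (2 * mass / w) * (E - P @ P / (2 * mass * w))
          direction = np.array([1.0, 0.0, 0.0])  # a real code would pick this
          # from the local momentum distribution
          delta = np.sqrt(max(delta2, 0.0)) * direction
          return w / 2, p_mean + delta, p_mean - delta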

  11. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA are about 10 pixels in diameter (about 2 km in MOC-WA images) [2], or down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is tested intensively on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained depends clearly on the dimension of the craters to be detected: the lower this limit is, the higher the false detection rates are. A detailed evaluation is performed, with results broken down by crater dimension and image or surface type, showing that automated detection in large crater datasets in HiRISE imagery with 25 cm/pixel resolution can be performed successfully (high correct and low false positive detection rates) down to a crater dimension of about 8-10 m, or 32-40 pixels. [1] Martins L, Pina P, Marques JS, Silveira M, 2009, Crater detection by a boosting approach. IEEE Geoscience and Remote Sensing Letters 6: 127-131. [2] Salamuniccar G, Loncaric S, Pina P, Bandeira L, Saraiva J, 2011, MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets. Planetary and Space Science 59: 111-131. [3] Bandeira L, Ding W, Stepinski T, 2012, Detection of sub-kilometer craters in high resolution planetary images using shape and texture features. Advances in Space Research 49: 64-74.
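
    A toy version of the classification stage described above, with candidate selection and realistic Haar feature extraction stubbed out by placeholders, might look as follows in Python with scikit-learn; everything here is illustrative, not the authors' pipeline:

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      def haar_features(window):
          """Two toy 2-rectangle Haar-like features on a square patch."""
          h, w = window.shape
          left_right = window[:, : w // 2].sum() - window[:, w // 2 :].sum()
          top_bottom = window[: h // 2, :].sum() - window[h // 2 :, :].sum()
          return [left_right, top_bottom]

      rng = np.random.default_rng(1)
      windows = rng.random((500, 24, 24))   # stand-ins for candidate regions
      y = rng.integers(0, 2, 500)           # 1 = crater, 0 = non-crater (placeholder)
      X = np.array([haar_features(wdw) for wdw in windows])
      clf = AdaBoostClassifier(n_estimators=100).fit(X, y)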

  12. Evolution and advanced technology. [of Flight Telerobotic Servicer

    NASA Technical Reports Server (NTRS)

    Ollendorf, Stanford; Pennington, Jack E.; Hansen, Bert, III

    1990-01-01

    The NASREM architecture with its standard interfaces permits development and evolution of the Flight Telerobotic Servicer to greater autonomy. Technologies in control strategies for an arm with seven DOF, including a safety system containing skin sensors for obstacle avoidance, are being developed. Planning and robotic execution software includes symbolic task planning, world model data bases, and path planning algorithms. Research over the last five years has led to the development of laser scanning and ranging systems, which use coherent semiconductor laser diodes for short range sensing. The possibility of using a robot to autonomously assemble space structures is being investigated. A control framework compatible with NASREM is being developed that allows direct global control of the manipulator. Researchers are developing systems that permit an operator to quickly reconfigure the telerobot to do new tasks safely.

  13. Study of mathematical modeling of communication systems transponders and receivers

    NASA Technical Reports Server (NTRS)

    Walsh, J. R.

    1972-01-01

    The modeling of communication receivers is described at both the circuit detail level and at the block level. The largest effort was devoted to developing new models at the block modeling level. The available effort did not permit full development of all of the block modeling concepts envisioned, but idealized blocks were developed for signal sources, a variety of filters, limiters, amplifiers, mixers, and demodulators. These blocks were organized into an operational computer simulation of communications receiver circuits identified as the frequency and time circuit analysis technique (FATCAT). The simulation operates in both the time and frequency domains, and permits output plots or listings of either frequency spectra or time waveforms from any model block. Transfer between domains is handled with a fast Fourier transform algorithm.
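
    The domain-transfer idea (evaluate a block in whichever domain is convenient, hopping between domains with an FFT) can be sketched in a few lines of Python; the sample rate, tones, and ideal low-pass "filter block" below are all illustrative, not taken from FATCAT:

      import numpy as np

      fs = 1e6                                   # sample rate, Hz (illustrative)
      t = np.arange(4096) / fs
      x = np.sin(2 * np.pi * 50e3 * t) + 0.3 * np.sin(2 * np.pi * 200e3 * t)

      X = np.fft.rfft(x)                         # time -> frequency domain
      f = np.fft.rfftfreq(x.size, d=1 / fs)
      X[f > 100e3] = 0.0                         # ideal low-pass "filter block"
      y = np.fft.irfft(X, n=x.size)              # frequency -> time domain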

  14. Installation of automatic control at experimental breeder reactor II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, H.A.; Booty, W.F.; Chick, D.R.

    1985-08-01

    The Experimental Breeder Reactor II (EBR-II) has been modified to permit automatic control capability. Necessary mechanical and electrical changes were made on a regular control rod position; the motor, gears, and controller were replaced. A digital computer system was installed that has the programming capability for varied power profiles. The modifications permit transient testing at EBR-II. Experiments were run that increased power linearly by as much as 4 MW/s (16% per second of the initial power of 25 MW(thermal)), held power constant, and decreased power at a rate no slower than the increase rate. Thus the performance of the automatic control algorithm, the mechanical and electrical control equipment, and the qualification of the driver fuel for future power change experiments were all demonstrated.

  15. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity, and hence inferring the correct temperature of a hot zone, in combustion diagnostics by absorption spectroscopy with diode lasers. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating absorption spectra from a spectroscopic database and adding white noise and a parabolic BL; the spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Then both spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from each. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm^-1. It thus allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
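
    A minimal sketch of the subtraction step, assuming a Legendre expansion on a wavenumber grid mapped to [-1, 1]: fit only the first three components and subtract them, applying the same operation to both the measured and the modelled spectrum before the final least-squares match. Degree and grid handling are illustrative.

      import numpy as np
      from numpy.polynomial import legendre

      def remove_low_order(y, x, n_drop=3):
          """Fit and subtract the first n_drop Legendre components (the part
          that absorbs an unknown, slowly varying baseline).
          x: wavenumber grid scaled to [-1, 1]; illustrative sketch only."""
          c = legendre.legfit(x, y, n_drop - 1)
          return y - legendre.legval(x, c)

      # apply to both spectra, then fit the modified model to the modified
      # measurement to estimate the integral line intensity as usual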

  16. High-accuracy and high-sensitivity spectroscopic measurement of dinitrogen pentoxide (N2O5) in an atmospheric simulation chamber using a quantum cascade laser.

    PubMed

    Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong

    2017-12-04

    A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N2O5) at the ppbv level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N2O5, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H2O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10) compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N2O5 sensing platform was evaluated by real-time tracking of the N2O5 concentration in its most important nocturnal tropospheric chemical reaction, NO3 + NO2 ↔ N2O5, in an atmospheric simulation chamber. Based on an optical absorption path-length of Leff = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time, falling to 3 ppbv in 400 s. The equilibrium rate constant Keq involved in the above chemical reaction was determined from direct concentration measurements using the developed EC-QCL sensing platform, and was in good agreement with the theoretical value deduced from a referenced empirical formula under well-controlled experimental conditions. The present work demonstrates the potential and the unique advantage of a modern external cavity quantum cascade laser for applications in direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetics and climate-change-related tropospheric chemistry.

  17. Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis

    PubMed Central

    Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.

    2017-01-01

    Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
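
    A toy, serial stand-in for the island-model optimization loop (the study ran this in parallel over biological learning metaparameters; the objective and all numbers below are placeholders): several sub-populations evolve independently and exchange their best member every few generations.

      import numpy as np

      rng = np.random.default_rng(42)
      fitness = lambda x: -np.sum((x - 0.5) ** 2, axis=-1)   # toy objective

      n_islands, pop, dim = 4, 20, 8
      islands = rng.random((n_islands, pop, dim))

      for gen in range(200):
          for i in range(n_islands):
              f = fitness(islands[i])
              parents = islands[i][np.argsort(f)[-pop // 2 :]]   # keep best half
              children = parents + 0.05 * rng.normal(size=parents.shape)
              islands[i] = np.vstack([parents, children])
          if gen % 25 == 0:                                   # migration step
              best = [isl[np.argmax(fitness(isl))].copy() for isl in islands]
              for i in range(n_islands):
                  islands[i][0] = best[(i + 1) % n_islands]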

  18. Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses them is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms, and other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms and hybrid simulation to reduce simulation time is introduced.

  19. BEER ANALYSIS OF KEPLER AND CoRoT LIGHT CURVES. I. DISCOVERY OF KEPLER-76b: A HOT JUPITER WITH EVIDENCE FOR SUPERROTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faigler, S.; Tal-Or, L.; Mazeh, T.

    We present the first case in which the BEER algorithm identified a hot Jupiter in the Kepler light curve and its reality was confirmed by orbital solutions based on follow-up spectroscopy. The companion Kepler-76b was identified by the BEER algorithm, which detected the BEaming (sometimes called Doppler boosting) effect together with the Ellipsoidal and Reflection/emission modulations (BEER) at an orbital period of 1.54 days, suggesting a planetary companion orbiting the 13.3 mag F star. Further investigation revealed that this star appeared in the Kepler eclipsing binary catalog with estimated primary and secondary eclipse depths of 5 × 10^-3 and 1 × 10^-4, respectively. Spectroscopic radial velocity follow-up observations with the Tillinghast Reflector Echelle Spectrograph and SOPHIE confirmed Kepler-76b as a transiting 2.0 ± 0.26 M_Jup hot Jupiter. The mass of a transiting planet can be estimated from either the beaming or the ellipsoidal amplitude. The ellipsoidal-based mass estimate of Kepler-76b is consistent with the spectroscopically measured mass, while the beaming-based estimate is significantly inflated. We explain this apparent discrepancy as evidence for the superrotation phenomenon, which involves eastward displacement of the hottest atmospheric spot of a tidally locked planet by an equatorial superrotating jet stream. This phenomenon was previously observed only for HD 189733b, in the infrared. We show that a phase shift of 10.3° ± 2.0° of the planet reflection/emission modulation, due to superrotation, explains the apparently inflated beaming modulation, resolving the ellipsoidal/beaming amplitude discrepancy. Kepler-76b is one of very few confirmed planets in the Kepler light curves that show BEER modulations, and the first to show evidence of superrotation in the Kepler band. Its discovery illustrates for the first time the ability of the BEER algorithm to detect short-period planets and brown dwarfs.

  20. Exorcising the Ghost in the Machine: Synthetic Spectral Data Cubes for Assessing Big Data Algorithms

    NASA Astrophysics Data System (ADS)

    Araya, M.; Solar, M.; Mardones, D.; Hochfärber, T.

    2015-09-01

    The size and quantity of the data being generated by large astronomical projects like ALMA require a paradigm change in astronomical data analysis. Complex data, such as highly sensitive spectroscopic data in the form of large data cubes, are not only difficult to manage, transfer and visualize, but also make traditional data analysis techniques unfeasible. Consequently, attention has turned to machine learning and artificial intelligence techniques to develop approximate and adaptive methods for astronomical data analysis within a reasonable computational time. Unfortunately, these techniques are usually suboptimal, stochastic and strongly dependent on their parameters, which could easily turn them into “a ghost in the machine” for astronomers and practitioners. Therefore, a proper assessment of these methods is not only desirable but mandatory before trusting them in large-scale usage. The problem is that positively verifiable results are scarce in astronomy; moreover, science using bleeding-edge instrumentation naturally lacks reference values. We propose ASYDO (Astronomical SYnthetic Data Observations), a virtual service that generates synthetic spectroscopic data in the form of data cubes. The objective of the tool is not to produce accurate astrophysical simulations, but to generate a large number of labelled synthetic data to assess advanced computing algorithms for astronomy and to develop novel Big Data algorithms. The synthetic data are generated using a set of spectral lines, template functions for spatial and spectral distributions, and simple models that produce reasonable synthetic observations. Emission lines are obtained automatically using IVOA's SLAP protocol (or from a relational database), and their spectral profiles correspond to distributions in the exponential family. The spatial distributions correspond to simple functions (e.g., a 2D Gaussian) or to scalable template objects. The intensity, broadening and radial velocity of each line are given by very simple and naive physical models, yet ASYDO's generic implementation supports new user-made models, potentially allowing more realistic simulations. The resulting data cube is saved as a FITS file, together with all the tables and images used to generate the cube. We expect to implement ASYDO as a virtual observatory service in the near future.

  1. High-pressure cell for terahertz time-domain spectroscopy.

    PubMed

    Zhang, Wei; Nickel, Daniel; Mittleman, Daniel

    2017-02-06

    We introduce a sample cell that can be used for pressure-dependent terahertz time-domain spectroscopy. Compared with traditional far-IR spectroscopy with a diamond anvil cell, the larger aperture permits measurements at much lower frequencies, down to 3.3 cm^-1 (0.1 THz), giving access to new spectroscopic results. The pressure tuning range reaches up to 34.4 MPa, while the temperature range is from 100 to 473 K. With this large range of tuning parameters, we are able to map out phase diagrams of materials based on their THz spectra, as well as to track changes of the THz spectrum within a single phase as a function of temperature and pressure. Pressure-dependent THz-TDS results for nitrogen and R-camphor are shown as examples.

  2. NO plume mapping by laser-radar techniques.

    PubMed

    Edner, H; Sunesson, A; Svanberg, S

    1988-09-01

    Mapping of NO plumes by laser-radar techniques has been demonstrated with a mobile differential absorption lidar (DIAL) system. The system was equipped with a narrow-linewidth Nd:YAG-pumped dye laser that, with frequency doubling and mixing, generated pulse energies of 3-5 mJ at 226 nm with a linewidth of 1 pm. This permitted range-resolved measurements of NO out to a range of about 500 m. The detection limit was estimated to be 3 µg/m^3 with an integration interval of 350 m. Spectroscopic studies of the gamma(0, 0) bandhead near 226.8 nm were performed at 1 pm resolution, and the differential absorption cross section was determined to be (6.6 ± 0.6) × 10^-22 m^2 at a wavelength difference of 12 pm.
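
    For reference, the standard DIAL retrieval behind such measurements derives the number density from the range derivative of the log-ratio of the off- and on-resonance returns; a hedged Python sketch of that relation:

      import numpy as np

      def dial_density(P_on, P_off, delta_r, delta_sigma):
          """Range-resolved number density (m^-3) from lidar returns at two
          wavelengths. P_on, P_off: backscatter power per range bin;
          delta_r: bin length (m); delta_sigma: differential absorption
          cross-section (m^2). Textbook relation, not the authors' code."""
          ratio = np.log(P_off / P_on)
          return np.gradient(ratio, delta_r) / (2 * delta_sigma)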

  3. Microscopic aspects of the effect of friction reducers at the lubrication limit. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mansot, J. L.

    1984-01-01

    An attempt is made to analytically model the physicochemical properties of lubricants and their capacity to reduce friction. A freeze-fracture technique was employed to study the dispersion of additives throughout a lubricant. Adsorption was observed at the liquid-solid interface, the region where the solid and lubricant meet, and the molecular dispersion of the additive enhanced the effectiveness of the lubricant. The electrically conductive characteristics of the lubricant at the friction interface indicated the presence of tunneling effects. The Bethe model was used to examine the relationship between the coefficient of friction and the variation of interface thickness. The electron transport permitted an inelastic electron tunneling spectroscopic investigation of the molecular transformations undergone by the additive during friction episodes.

  4. Multiple-Point Mass Flux Measurement System Using Rayleigh Scattering

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Elam, Kristie A.; Clem, Michelle M.

    2009-01-01

    A multiple-point Rayleigh scattering diagnostic is being developed to provide mass flux measurements in gas flows. Spectroscopic Rayleigh scattering is an established flow diagnostic that can provide simultaneous density, temperature, and velocity measurements. Rayleigh-scattered light from a focused 18 W continuous-wave laser beam is directly imaged through a solid Fabry-Perot etalon onto a CCD detector, which permits spectral analysis of the light. The spatial resolution of the measurements is governed by the locations of interference fringes, which can be changed by altering the etalon characteristics. A prototype system has been used to acquire data in a Mach 0.56 flow to demonstrate the feasibility of using this system to provide mass flux measurements. Estimates of measurement uncertainty and recommendations for system improvements are presented.

  5. LEGO-NMR spectroscopy: a method to visualize individual subunits in large heteromeric complexes.

    PubMed

    Mund, Markus; Overbeck, Jan H; Ullmann, Janina; Sprangers, Remco

    2013-10-18

    Seeing the big picture: asymmetric macromolecular complexes that are NMR-active in only a subset of their subunits can be prepared, thus decreasing NMR spectral complexity. For the heteroheptameric LSm1-7 and LSm2-8 rings, NMR spectra of the individual subunits of the complete complex are obtained, showing a conserved RNA binding site. This LEGO-NMR technique makes large asymmetric complexes accessible to detailed NMR spectroscopic studies. © 2013 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

  6. Diffusion algorithms and data reduction routine for onsite real-time launch predictions for the transport of Delta-Thor exhaust effluents

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.

    1976-01-01

    The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.
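
    The multilayer model itself is more elaborate, but the underlying ground-level Gaussian plume relation (with ground reflection) can be sketched as follows; in practice the sigma dispersion parameters would come from the measured meteorology profiles, and everything here is a generic textbook form rather than the MSFC algorithms:

      import numpy as np

      def ground_concentration(Q, u, y, H, sigma_y, sigma_z):
          """Steady-state ground-level concentration (g/m^3) downwind of an
          elevated source. Q: source strength (g/s); u: wind speed (m/s);
          y: crosswind offset (m); H: effective release height (m)."""
          return (Q / (np.pi * u * sigma_y * sigma_z)
                  * np.exp(-y**2 / (2 * sigma_y**2))
                  * np.exp(-H**2 / (2 * sigma_z**2)))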

  7. Symmetric quantum fully homomorphic encryption with perfect security

    NASA Astrophysics Data System (ADS)

    Liang, Min

    2013-12-01

    Suppose some data have been encrypted: can you compute with the data without decrypting them? This problem has been studied as homomorphic encryption and blind computing. We consider it in the context of quantum information processing, and present definitions of quantum homomorphic encryption (QHE) and quantum fully homomorphic encryption (QFHE). Then, based on the quantum one-time pad (QOTP), we construct a symmetric QFHE scheme in which the evaluation algorithm depends on the secret key. This scheme permits any unitary transformation on any n-qubit state that has been encrypted. Compared with classical homomorphic encryption, the QFHE scheme has perfect security. Finally, we also construct a QOTP-based symmetric QHE scheme in which the evaluation algorithm is independent of the secret key.
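
    A classical simulation of the quantum one-time pad primitive itself (not the full QFHE construction) illustrates the encryption step: each qubit is hit with a random Pauli X^a Z^b, and the same key undoes it up to a global phase. Sketch in Python:

      import numpy as np

      X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])

      def qotp(state, key_a, key_b):
          """Apply X^a Z^b per qubit to an n-qubit state vector (simulation)."""
          op = np.array([[1.0]])
          for a, b in zip(key_a, key_b):
              op = np.kron(op, np.linalg.matrix_power(X, a)
                               @ np.linalg.matrix_power(Z, b))
          return op @ state

      n = 3
      rng = np.random.default_rng(7)
      psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
      psi /= np.linalg.norm(psi)
      a, b = rng.integers(0, 2, n), rng.integers(0, 2, n)
      enc = qotp(psi, a, b)
      dec = qotp(enc, a, b)   # recovers psi up to a global phase
                              # (X and Z are self-inverse but anticommute)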

  8. Automatic mission planning algorithms for aerial collection of imaging-specific tasks

    NASA Astrophysics Data System (ADS)

    Sponagle, Paul; Salvaggio, Carl

    2017-05-01

    The rapid advancement and availability of small unmanned aircraft systems (sUAS) has led to many novel exploitation tasks that utilize this unique aerial imagery. Collecting such data requires novel flight planning tailored to the task at hand. This work describes flight planning that better supports structure-from-motion missions by minimizing occlusions, enables autonomous and periodic overflight of reflectance calibration panels to permit more efficient and accurate data collection under varying illumination conditions, and supports the collection of imagery for studying optical properties such as the bidirectional reflectance distribution function without disturbing the target in sensitive or remote areas of interest. These mission planning algorithms will provide scientists with additional tools to meet their future data collection needs.

  9. DC servomechanism parameter identification: a Closed Loop Input Error approach.

    PubMed

    Garrido, Ruben; Miranda, Roger

    2012-01-01

    This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, and a constant disturbance. Lyapunov stability theory permits assessing the boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow the performance of the approach to be evaluated. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
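
    As a rough offline stand-in for the on-line CLIE estimator, the four model parameters (gain, viscous friction, Coulomb friction, constant disturbance) can be recovered by linear least squares from measured signals; the synthetic data and model form below are illustrative only:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 2000
      u = rng.normal(size=n)                                # control input
      vel = np.convolve(u, np.ones(20) / 20, mode="same")   # "measured" velocity
      true = dict(k=2.0, f=0.8, c=0.3, d=0.1)
      # model: acc = k*u - f*vel - c*sign(vel) - d, plus measurement noise
      acc = (true["k"] * u - true["f"] * vel
             - true["c"] * np.sign(vel) - true["d"]) + 0.01 * rng.normal(size=n)

      Phi = np.column_stack([u, -vel, -np.sign(vel), -np.ones(n)])
      k, f, c, d = np.linalg.lstsq(Phi, acc, rcond=None)[0]   # parameter estimates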

  10. Spacecraft alignment estimation. [for onboard sensors

    NASA Technical Reports Server (NTRS)

    Shuster, Malcolm D.; Bierman, Gerald J.

    1988-01-01

    A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter.

  11. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images.

    PubMed

    Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli

    2010-05-13

    Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. Despite the potential of extensive comparisons between algorithms to provide useful information for guiding method selection, and thus more accurate results, relatively few such studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground-truth reference result. Our study finds major differences in the performance of the different algorithms, in terms of both object counts and segmentation accuracy. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.
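
    One of the simplest detectors in this family, and a common baseline, is Laplacian-of-Gaussian filtering followed by local-maximum selection; a minimal Python sketch with illustrative parameters (not one of the eleven evaluated implementations):

      import numpy as np
      from scipy.ndimage import gaussian_laplace, maximum_filter

      def detect_spots(image, sigma=2.0, threshold=0.1):
          """Bright-spot detection: LoG response peaks above a threshold."""
          response = -gaussian_laplace(image.astype(float), sigma)
          peaks = (response == maximum_filter(response, size=5)) & (response > threshold)
          return np.argwhere(peaks)     # (row, col) coordinates of detections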

  12. Understanding reconstructed Dante spectra using high resolution spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J., E-mail: may13@llnl.gov; Widmann, K.; Kemp, G. E.

    2016-11-15

    The Dante is an 18 channel filtered diode array used at the National Ignition Facility (NIF) to measure the spectrally and temporally resolved radiation flux between 50 eV and 20 keV from various targets. The absolute flux is determined from the radiometric calibration of the x-ray diodes, filters, and mirrors and a reconstruction algorithm applied to the recorded voltages from each channel. The reconstructed spectra are very low resolution with features consistent with the instrument response and are not necessarily consistent with the spectral emission features from the plasma. Errors may exist between the reconstructed spectra and the actual emission features due to assumptions in the algorithm. Recently, a high resolution convex crystal spectrometer, VIRGIL, has been installed at NIF with the same line of sight as the Dante. Spectra from L-shell Ag and Xe have been recorded by both VIRGIL and Dante. Comparisons of these two spectroscopic measurements yield insights into the accuracy of the Dante reconstructions.

  13. Hyperspectral Imaging and SPA-LDA Quantitative Analysis for Detection of Colon Cancer Tissue

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Zhang, D.; Wang, Ch.; Dai, B.; Zhao, M.; Li, B.

    2018-05-01

    Hyperspectral imaging (HSI) has been demonstrated to provide a rapid, precise, and noninvasive method for cancer detection. However, because HSI produces large volumes of data, quantitative analysis is often necessary to distill information useful for distinguishing cancerous from normal tissue. To demonstrate that HSI with our proposed algorithm can make this distinction, we built a Vis-NIR HSI setup and acquired many spectral images of colon tissues, and then used a successive projection algorithm (SPA) to analyze the hyperspectral image data of the tissues. This was used to build an identification model based on linear discriminant analysis (LDA) using the relative reflectance values at the effective wavelengths. Other tissues were used as a prediction set to verify the reliability of the identification model. The results suggest that Vis-NIR hyperspectral images, together with the spectroscopic classification method, provide a new approach for reliable and safe diagnosis of colon cancer and could lead to advances in cancer diagnosis generally.

  14. GaiaGrid : Its Implications and Implementation

    NASA Astrophysics Data System (ADS)

    Ansari, S. G.; Lammers, U.; Ter Linden, M.

    2005-12-01

    Gaia is an ESA space mission to determine the positions of one billion objects in the Galaxy at micro-arcsecond precision. The data analysis and processing requirements of the mission involve about 20 institutes across Europe, each providing specific algorithms for specific tasks, ranging from relativistic effects on positional determination to classification, astrometric binary star detection, photometric analysis, and spectroscopic analysis. In an initial phase, a study has been ongoing over the past three years to determine the complexity of Gaia's data processing. Two processing categories have materialised: core and shell. While core deals with routine data processing, shell tasks are algorithms for carrying out data analysis, which involves the Gaia community at large. For this latter category, we are currently experimenting with the use of Grid paradigms to allow access to the core data and to augment processing power to simulate and analyse the data in preparation for the actual mission. We present preliminary results and discuss the sociological impact of distributing the tasks amongst the community.

  15. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) game engine to provide the simulation environment, in which the routes taken by the human player can be compared with those of a Synthetic Agent (BOT) executing either the A-star algorithm or the new Route Generation Algorithm. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to proceed continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player, and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
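
    The underlying planning step can be illustrated with a plain grid A* that operates only on the currently known blocked cells; in a partial-knowledge setting the agent re-runs it whenever newly revealed obstacles invalidate the path. The sketch below is generic Python, not the paper's SDO-backed implementation:

      import heapq

      def astar(start, goal, blocked, size):
          """A* on a size x size grid with Manhattan heuristic; 'blocked' is
          the set of currently known obstacle cells (partial knowledge)."""
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
          frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
          while frontier:
              _, cur = heapq.heappop(frontier)
              if cur == goal:                   # rebuild path back to start
                  path = [cur]
                  while came[path[-1]]:
                      path.append(came[path[-1]])
                  return path[::-1]
              x, y = cur
              for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                  if 0 <= nb[0] < size and 0 <= nb[1] < size and nb not in blocked:
                      if cost[cur] + 1 < cost.get(nb, 1 << 30):
                          cost[nb] = cost[cur] + 1
                          came[nb] = cur
                          heapq.heappush(frontier, (cost[nb] + h(nb), nb))
          return None                           # no route with current knowledge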

  16. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency of the state space to grow rapidly (exponential complexity) with increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  17. Automated Cervical Screening and Triage, Based on HPV Testing and Computer-Interpreted Cytology.

    PubMed

    Yu, Kai; Hyun, Noorie; Fetterman, Barbara; Lorey, Thomas; Raine-Bennett, Tina R; Zhang, Han; Stamps, Robin E; Poitras, Nancy E; Wheeler, William; Befano, Brian; Gage, Julia C; Castle, Philip E; Wentzensen, Nicolas; Schiffman, Mark

    2018-04-11

    State-of-the-art cervical cancer prevention includes human papillomavirus (HPV) vaccination among adolescents and screening/treatment of cervical precancer (CIN3/AIS and, less strictly, CIN2) among adults. HPV testing provides sensitive detection of precancer but, to reduce overtreatment, secondary "triage" is needed to predict women at highest risk. Those with the highest-risk HPV types or abnormal cytology are commonly referred to colposcopy; however, expert cytology services are critically lacking in many regions. To permit completely automatable cervical screening/triage, we designed and validated a novel triage method, a cytologic risk score algorithm based on computer-scanned liquid-based slide features (FocalPoint, BD, Burlington, NC). We compared it with abnormal cytology in predicting precancer among 1839 women testing HPV positive (HC2, Qiagen, Germantown, MD) in 2010 at Kaiser Permanente Northern California (KPNC). Precancer outcomes were ascertained by record linkage. As additional validation, we compared the algorithm prospectively with cytology results among 243 807 women screened at KPNC (2016-2017). All statistical tests were two-sided. Among HPV-positive women, the algorithm matched the triage performance of abnormal cytology. Combined with HPV16/18/45 typing (Onclarity, BD, Sparks, MD), the automatable strategy referred 91.7% of HPV-positive CIN3/AIS cases to immediate colposcopy while deferring 38.4% of all HPV-positive women to one-year retesting (compared with 89.1% and 37.4%, respectively, for typing and cytology triage). In the 2016-2017 validation, the predicted risk scores strongly correlated with cytology (P < .001). High-quality cervical screening and triage performance is achievable using this completely automated approach. Automated technology could permit extension of high-quality cervical screening/triage coverage to currently underserved regions.

  18. Cloud and Aerosol Retrieval for the 2001 GLAS Satellite Lidar Mission

    NASA Technical Reports Server (NTRS)

    Hart, William D.; Palm, Stephen P.; Spinhirne, James D.

    2000-01-01

    The Geoscience Laser Altimeter System (GLAS) is scheduled for launch in July of 2001 aboard the Ice, Cloud and Land Elevation Satellite (ICESAT). In addition to being a precision altimeter for mapping the height of the Earth's ice sheets, GLAS will be an atmospheric lidar sensitive enough to detect gaseous, aerosol, and cloud backscatter signals at horizontal and vertical resolutions of 175 and 75 m, respectively. GLAS will be the first lidar to produce temporally continuous atmospheric backscatter profiles with nearly global coverage (94-degree orbital inclination). With a projected operational lifetime of five years, GLAS will collect approximately six billion lidar return profiles. The large volume of data dictates that operational analysis algorithms, which need to keep pace with the data yield of the instrument, must be efficient. We therefore need to evaluate the ability of operational algorithms to detect atmospheric constituents that affect global climate, and to quantify, in a statistical manner, the accuracy and precision of GLAS cloud and aerosol observations. Our poster presentation will show the results of modeling studies designed to reveal the effectiveness and sensitivity of GLAS in detecting various atmospheric cloud and aerosol features. The studies consist of analyzing simulated lidar returns. Simulation cases are constructed either from idealized renditions of atmospheric cloud and aerosol layers or from data obtained by the NASA ER-2 Cloud Lidar System (CLS). The fabricated renditions permit quantitative evaluation of the operational algorithms' retrieval of cloud and aerosol parameters, while the use of observational data permits evaluation of performance for actual atmospheric conditions. The intended outcome of the presentation is that the climatology community will be able to use the results of these studies to evaluate and quantify the impact of GLAS data upon atmospheric modeling efforts.

  19. FAMIAS - A user-friendly new software tool for the mode identification of photometric and spectroscopic time series

    NASA Astrophysics Data System (ADS)

    Zima, W.

    2008-12-01

    FAMIAS (Frequency Analysis and Mode Identification for AsteroSeismology) is a collection of state-of-the-art software tools for the analysis of photometric and spectroscopic time series data. It is one of the deliverables of the Work Package NA5: Asteroseismology of the European Coordination Action in Helio- and Asteroseismology (HELAS). Two main sets of tools are incorporated in FAMIAS. The first set allows the user to search for periodicities in the data using Fourier and non-linear least-squares fitting algorithms. The other set allows one to carry out mode identification for the detected pulsation frequencies to determine their pulsational quantum numbers, the harmonic degree ℓ and the azimuthal order m. For the spectroscopic mode identification, the Fourier parameter fit method and the moment method are available. The photometric mode identification is based on pre-computed grids of atmospheric parameters and non-adiabatic observables, and uses the method of amplitude ratios and phase differences in different filters. The types of stars to which FAMIAS is applicable are main-sequence pulsators hotter than the Sun. This includes the Gamma Dor stars, Delta Sct stars, the slowly pulsating B stars and the Beta Cep stars - basically all pulsating main-sequence stars for which empirical mode identification is required to successfully carry out asteroseismology. The complete manual for FAMIAS is published in a special issue of Communications in Asteroseismology, Vol. 155. The FAMIAS homepage provides the possibility to download the software and to read the on-line documentation.
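
    The period-search half of such a tool can be illustrated with a Lomb-Scargle periodogram of an unevenly sampled light curve using scipy (FAMIAS implements its own Fourier and least-squares machinery); all numbers below are synthetic:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(5)
      t = np.sort(rng.uniform(0, 30, 400))        # days, uneven sampling
      f_true = 7.3                                # cycles per day
      mag = 0.02 * np.sin(2 * np.pi * f_true * t) + 0.005 * rng.normal(size=400)

      freqs = np.linspace(0.1, 20, 5000)          # trial frequencies, cycles/day
      power = lombscargle(t, mag - mag.mean(), 2 * np.pi * freqs)
      print(freqs[np.argmax(power)])              # recovers ~7.3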

  20. A handheld wireless device for diffuse optical spectroscopic assessment of infantile hemangiomas

    NASA Astrophysics Data System (ADS)

    Fong, Christopher J.; Flexman, Molly; Hoi, Jennifer W.; Geller, Lauren; Garzon, Maria; Kim, Hyun K.; Hielscher, Andreas H.

    2013-03-01

    Infantile hemangiomas (IH) are common vascular growths that occur in 5-10% of neonates and have the potential to cause disfiguring and even life-threatening complications. With no objective tool available to monitor IH, a handheld wireless device (HWD) that uses diffuse optical spectroscopy has been developed for assessment of IH by measuring absolute oxygenated and deoxygenated hemoglobin concentrations as well as scattering in tissue. Reconstructions of these variables can be computed using a multispectral evolution algorithm. We validated the new system in phantom experiments, and a clinical study is under way to assess the utility of DOI for IH.

  1. The 1981 NASA ASEE Summer Faculty Fellowship Program, volume 2

    NASA Technical Reports Server (NTRS)

    Robertson, N. G.; Huang, C. J.

    1981-01-01

    A collection of papers on miscellaneous subjects in aerospace research is presented. Topics discussed are: (1) Langmuir probe theory and the problem of anisotropic collection; (2) anthropometric program analysis of reach and body movement; (3) analysis of IV characteristics of negatively biased panels in a magnetoplasma; (4) analytic solution to the classical two-body drag problem; (5) a fast variable step size integration algorithm for computer simulations of physiological systems; (6) a spectroscopic, experimental, computer-assisted empirical model for the energetics of excited oxygen molecules formed by atom recombination on shuttle tile surfaces; and (7) capillary priming characteristics of a dual passage heat pipe in zero-g.

  2. VizieR Online Data Catalog: OGLE eclipsing binaries in LMC (Wyrzykowski+, 2003)

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.

    2003-09-01

    We present the catalog of 2580 eclipsing binary stars detected in a 4.6 square degree area of the central parts of the Large Magellanic Cloud. The photometric data were collected during the second phase of the OGLE microlensing search from 1997 to 2000. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. Basic statistics of the eclipsing stars are presented. A list of 36 candidate detached eclipsing binaries for spectroscopic study and precise LMC distance determination is also provided. The full catalog is accessible from the OGLE Internet archive. (2 data files).

  3. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n²) and O(n³), for the solution of the dynamics problem are investigated. The derivation begins with the O(n³) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log²(n)) and O(n⁴), respectively. However, the algorithm that achieves these bounds is not stable. It is shown that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n²) processors, and results from the parallelization of the O(n³) serial algorithm.

  4. VLSI Architectures and CAD

    DTIC Science & Technology

    1989-11-01

    considerable promise is a variation of the familiar Lempel-Ziv adaptive data compression scheme that permits a straightforward mapping to hardware... types of data. The UNIX "compress" implementation is based upon Terry Welch's 1984 variation of the Lempel-Ziv method (LZW). One flaw lies in the fact... or more; it must effectively compress all types of data (i.e. the algorithm must be universal); the implementation must be contained within a small
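
    The LZW scheme mentioned in these snippets is compact enough to sketch. The following is a minimal dictionary-building encoder, for illustration only; the UNIX compress implementation additionally uses variable-width codes and string-table resets, which are omitted here.

    ```python
    def lzw_compress(data: bytes) -> list[int]:
        """Minimal LZW encoder: emits a list of integer codes."""
        table = {bytes([i]): i for i in range(256)}  # initial single-byte table
        w = b""
        out = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc                       # extend the current match
            else:
                out.append(table[w])         # emit code for longest match
                table[wc] = len(table)       # grow the string table
                w = bytes([byte])
        if w:
            out.append(table[w])
        return out

    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
    ```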

  5. The Quest for Pi

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Borwein, Jonathan M.; Borwein, Peter B.; Plouffe, Simon

    1996-01-01

    This article gives a brief history of the analysis and computation of the mathematical constant Pi=3.14159 ..., including a number of the formulas that have been used to compute Pi through the ages. Recent developments in this area are then discussed in some detail, including the recent computation of Pi to over six billion decimal digits using high-order convergent algorithms, and a newly discovered scheme that permits arbitrary individual hexadecimal digits of Pi to be computed.
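
    The "newly discovered scheme" is the Bailey-Borwein-Plouffe (BBP) formula, published by the same authors, which yields hexadecimal digits of Pi beginning at an arbitrary position without computing the earlier ones. A minimal sketch of the digit-extraction trick follows (modular exponentiation applied to the four BBP series); the variable names and tail cutoff are choices made for this example.

    ```python
    # BBP: pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6));
    # the d-th fractional hex digit comes from frac(16^d * pi), computed with
    # exact modular exponentiation so earlier digits are never needed.
    def _series(j, d):
        """Fractional part of sum_k 16^(d-k)/(8k+j)."""
        s = 0.0
        for k in range(d + 1):               # integer powers: exact mod arithmetic
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = d + 1, 1.0
        while term > 1e-17:                  # rapidly vanishing tail
            term = 16.0 ** (d - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s

    def pi_hex_digit(n):
        """n-th hexadecimal digit of pi after the point (n >= 1)."""
        d = n - 1
        x = (4 * _series(1, d) - 2 * _series(4, d)
             - _series(5, d) - _series(6, d)) % 1.0
        return "0123456789ABCDEF"[int(16 * x)]

    print("".join(pi_hex_digit(n) for n in range(1, 12)))  # pi = 3.243F6A8885A...
    ```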

  6. Stereovision Imaging in Smart Mobile Phone Using Add on Prisms

    NASA Astrophysics Data System (ADS)

    Bar-Magen Numhauser, Jonathan; Zalevsky, Zeev

    2014-03-01

    In this work we present the use of a prism-based add-on component installed on top of a smartphone to achieve stereovision capabilities under the iPhone mobile operating system. Through this component, combined with the appropriate application programming interface and mathematical algorithms, the results obtained permit the analysis of possible enhancements and new uses of such a system in a variety of areas, including medicine and communications.

  7. Photometry of the long period dwarf nova GY Hya

    NASA Astrophysics Data System (ADS)

    Bruch, Albert; Monard, Berto

    2017-08-01

    Although comparatively bright, the cataclysmic variable GY Hya has not attracted much attention in the past. As part of a project to better characterize such systems photometrically, we observed light curves in white light, each spanning several hours, at Bronberg Observatory, South Africa, in 2004 and 2005, and at the Observatório do Pico dos Dias, Brazil, in 2014 and 2016. These data permit the study of orbital modulations and their variations from season to season. The orbital period, already known from spectroscopic observations of Peters and Thorstensen (2005), is confirmed through strong ellipsoidal variations of the mass donor star in the system and the presence of eclipses of both components. A refined period of 0.34723972 (6) days and a revised ephemeris are derived. Seasonal changes in the average orbital light curve can qualitatively be explained by variations of the contribution of a hot spot to the system light together with changes of the disk radius. The amplitude of the ellipsoidal variations and the eclipse contact phases permit some constraints to be placed on the mass ratio, the orbital inclination, and the relative brightness of the primary and secondary components. There are some indications that the disk radius during quiescence, expressed in units of the component separation, is smaller than in other dwarf novae.

  8. Luminous Obscured AGN Unveiled in the Stripe 82 X-ray Survey

    NASA Astrophysics Data System (ADS)

    LaMassa, Stephanie; Glikman, Eilat; Brusa, Marcella; Rigby, Jane; Tasnim Ananna, Tonima; Stern, Daniel; Lira, Paulina; Urry, Meg; Salvato, Mara; Alexandroff, Rachael; Allevato, Viola; Cardamone, Carolin; Civano, Francesca Maria; Coppi, Paolo; Farrah, Duncan; Komossa, S.; Lanzuisi, Giorgio; Marchesi, Stefano; Richards, Gordon; Trakhtenbrot, Benny; Treister, Ezequiel

    2018-01-01

    Stripe 82X is a wide-area (30 deg²) X-ray survey overlapping the legacy Sloan Digital Sky Survey (SDSS) Stripe 82 field, designed to uncover rare, high-luminosity active galactic nuclei (AGN). We report on the results of an ongoing near-infrared (NIR) spectroscopic campaign to follow up reddened AGN candidates with Palomar TripleSpec, Keck NIRSPEC, and Gemini GNIRS. We identified 8 AGN in our bright NIR sample (K < 16, Vega), selected to have red R-K colors (> 4, Vega); four of these sources had existing optical spectra in SDSS. We targeted four out of 34 obscured AGN candidates in our faint NIR sample (K > 17, Vega), all of which are undetected in the single-epoch SDSS imaging, making them the best candidates for the most obscured and/or most distant reddened AGN in Stripe 82X. All twelve sources are Type 1 AGN, with the FWHM of at least one permitted emission line exceeding 1300 km/s. We find that our nearly complete bright NIR sample (12/13 obscured AGN candidates have spectroscopic redshifts) is more distant (z > 0.5) than a matched sample of blue Type 1 AGN from Stripe 82X; these AGN tend to be more luminous than their blue, unobscured counterparts. Results from our pilot program of faint NIR-selected obscured AGN candidates demonstrate that our selection recovers reddened quasars missed by SDSS.

  9. J-Plus: Measuring Ha Emission Line Flux In The Nearby Universe

    NASA Astrophysics Data System (ADS)

    Logroño-García, Rafael; Vilella-Rojo, Gonzalo; López-San Juan, Carlos; Varela, Jesús; Viironen, Kerttu

    2017-10-01

    We validate the methodology designed to extract the Hα emission line flux from J-PLUS data, a twelve-band optical survey carried out with the 2 deg² field-of-view T80Cam camera mounted on the JAST/T80 telescope at the OAJ, Teruel, Spain. We use the information in the twelve J-PLUS bands, including the J0660 narrow-band filter located at rest-frame Hα, over 42 deg² to extract de-reddened and [NII]-decontaminated Hα emission line fluxes of 46 star-forming regions with previous SDSS and/or CALIFA spectroscopic information. The agreement between the J-PLUS photometric Hα flux and the spectroscopic one is remarkable, with a ratio R = 1.01 ± 0.27. This demonstrates that we are able to recover reliable Hα fluxes from J-PLUS photometric data. With an expected final area of 8500 deg², the large J-PLUS footprint will permit the study of the spatially resolved star formation rate of thousands of nearby galaxies at z ≲ 0.015, as well as the influence of the close environment. As an illustrative example, we examined the close pair of interacting galaxies NGC3994 and NGC3995, finding an enhancement of the star formation rate not only in the central part of NGC3994 but also in the outer parts of the disc.

  10. Rotational spectroscopy of isotopic vinyl cyanide, H2CCHCN, in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Müller, Holger S. P.; Belloche, Arnaud; Menten, Karl M.; Comito, Claudia; Schilke, Peter

    2008-09-01

    The rotational spectra of singly substituted 13C and 15N isotopic species of vinyl cyanide have been studied in natural abundances between 64 and 351 GHz. In combination with previous results, greatly improved spectroscopic parameters have been obtained which in turn helped to identify transitions of the 13C species for the first time in space through a molecular line survey of the extremely line-rich interstellar source Sagittarius B2(N) in the 3 mm region with some additional observations at 2 mm. The 13C species are detected in two compact (˜2.3″), hot (170 K) cores with a column density of ˜3.8×10 and 1.1×10cm, respectively. In the main source, the so-called “Large Molecule Heimat”, we derive an abundance of 2.9×10 for each 13C species relative to H2. An isotopic ratio 12C/13C of 21 has been measured. Based on a comparison to the column densities measured for the 13C species of ethyl cyanide also detected in this survey, it is suggested that the two hot cores of Sgr B2(N) are in different evolutionary stages. Supplementary laboratory data for the main isotopic species recorded between 92 and 342 GHz permitted an improvement of its spectroscopic parameters as well.

  11. Estimation of mating system parameters in plant populations using marker loci with null alleles.

    PubMed

    Ross, H A

    1986-06-01

    An Expectation-Maximization (EM) algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and allele frequencies (p_i) in outcrossing pollen at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
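
    The paper's null-allele model is more involved, but the EM structure can be illustrated on a deliberately simplified case: a single codominant biallelic locus and known heterozygous (Aa) mothers, with made-up progeny counts. This sketch is an assumption-laden analogue, not the published algorithm.

    ```python
    import numpy as np

    # Offspring genotype probabilities given an Aa mother:
    #   selfed:    AA 1/4, Aa 1/2, aa 1/4
    #   outcross:  AA p/2, Aa 1/2, aa (1-p)/2   (p = frequency of A in pollen)
    def em_mixed_mating(counts, s=0.5, p=0.5, iters=200):
        n = np.asarray(counts, dtype=float)        # (n_AA, n_Aa, n_aa)
        for _ in range(iters):
            like_self = np.array([0.25, 0.5, 0.25])
            like_out = np.array([p / 2.0, 0.5, (1.0 - p) / 2.0])
            # E-step: posterior probability that each genotype class is selfed
            post_self = s * like_self / (s * like_self + (1.0 - s) * like_out)
            # M-step: selfing rate from expected selfed counts
            s = (n * post_self).sum() / n.sum()
            # Pollen allele frequency from expected outcross gametes:
            # AA offspring -> pollen A, aa -> pollen a, Aa -> A with prob. p
            out = n * (1.0 - post_self)
            p = (out[0] + p * out[1]) / out.sum()
        return s, p

    print(em_mixed_mating((180, 520, 300)))        # made-up progeny counts
    ```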

  12. CPDES3: A preconditioned conjugate gradient solver for linear asymmetric matrix equations arising from coupled partial differential equations in three dimensions

    NASA Astrophysics Data System (ADS)

    Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.

    1988-11-01

    Many physical problems require the solution of coupled partial differential equations on three-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES3 allows each spatial operator to have 7-, 15-, 19-, or 27-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.
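
    As a hedged illustration of the CG kernel named above (not the CPDES code, which works on sub-band sparse storage and also offers BCG for asymmetric systems), here is a Jacobi-preconditioned conjugate gradient solver for a small symmetric positive-definite test system.

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=500):
        """Jacobi-preconditioned conjugate gradient for SPD matrix A."""
        M_inv = 1.0 / np.diag(A)          # diagonal (Jacobi) preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p     # update the search direction
            rz = rz_new
        return x

    # Usage: a small SPD test system; compare against a direct solve.
    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(pcg(A, b), np.linalg.solve(A, b))
    ```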

  13. CPDES2: A preconditioned conjugate gradient solver for linear asymmetric matrix equations arising from coupled partial differential equations in two dimensions

    NASA Astrophysics Data System (ADS)

    Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.

    1988-11-01

    Many physical problems require the solution of coupled partial differential equations on two-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES2 allows each spatial operator to have 5- or 9-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.

  14. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
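
    The prune-and-bisect loop at the heart of such branch and bound methods is easy to sketch. The following toy example encloses the real roots of f(x) = x² − 2 with a hand-written interval extension; Kodiak itself handles systems of relations, Bernstein enclosures, derivative-based heuristics, and formally verified soundness, none of which is attempted here.

    ```python
    def f_interval(lo, hi):
        """Interval extension of f(x) = x^2 - 2 on [lo, hi]."""
        cands = [lo * lo, hi * hi]
        sq_lo = 0.0 if lo <= 0.0 <= hi else min(cands)   # box may straddle 0
        return sq_lo - 2.0, max(cands) - 2.0

    def branch_and_bound(lo, hi, eps=1e-6):
        boxes, roots = [(lo, hi)], []
        while boxes:
            a, b = boxes.pop()
            flo, fhi = f_interval(a, b)
            if flo > 0.0 or fhi < 0.0:    # enclosure excludes zero: prune
                continue
            if b - a < eps:               # small enough: report enclosure
                roots.append((a, b))
                continue
            m = 0.5 * (a + b)             # bisect and keep both halves
            boxes += [(a, m), (m, b)]
        return roots

    # Encloses +/- sqrt(2); adjacent tiny boxes may both be reported.
    print(branch_and_bound(-10.0, 10.0))
    ```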

  15. Detection of pigment network in dermoscopy images using supervised machine learning and structural analysis.

    PubMed

    García Arroyo, Jose Luis; García Zapirain, Begoña

    2014-01-01

    This study presents a detection algorithm for the "pigment network" in dermoscopic images, one of the most relevant indicators in the diagnosis of melanoma. The design of the algorithm consists of two blocks. In the first, a machine learning process is carried out, generating a set of rules which, when applied over the image, permit the construction of a mask of pixels that are candidates to be part of the pigment network. In the second block, an analysis of the structures over this mask is carried out, searching for those corresponding to the pigment network, determining whether or not a pigment network is present, and generating the mask corresponding to this pattern, if any. The method was tested against a database of 220 images, obtaining 86% sensitivity and 81.67% specificity, which demonstrates the reliability of the algorithm. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Study to assess the importance of errors introduced by applying NOAA 6 and NOAA 7 AVHRR data as an estimator of vegetative vigor: Feasibility study of data normalization

    NASA Technical Reports Server (NTRS)

    Duggin, M. J. (Principal Investigator); Piwinski, D.

    1982-01-01

    The use of NOAA AVHRR data to map and monitor vegetation types and conditions in near real-time can be enhanced by using a portion of each GAC image that is larger than the central 25% now considered. Enlargement of the cloud free image data set can permit development of a series of algorithms for correcting imagery for ground reflectance and for atmospheric scattering anisotropy within certain accuracy limits. Empirical correction algorithms used to normalize digital radiance or VIN data must contain factors for growth stage and for instrument spectral response. While it is not possible to correct for random fluctuations in target radiance, it is possible to estimate the necessary radiance difference between targets in order to provide target discrimination and quantification within predetermined limits of accuracy. A major difficulty lies in the lack of documentation of preprocessing algorithms used on AVHRR digital data.

  17. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee that the upper and lower iterative value functions converge to the upper and lower optima, respectively. When the saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, without requiring the existence criteria of the saddle-point equilibrium. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, and are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
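
    The max-min structure of the upper value iteration can be illustrated in a tabular setting. The sketch below applies V_{k+1}(s) = max_u min_w [r(s,u,w) + γ·V_k(s′)] to a small random finite-state game; the paper itself treats nonlinear continuous-state systems with function approximation, so everything concrete here (states, rewards, transitions) is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nS, nU, nW, gamma = 6, 3, 3, 0.9
    r = rng.normal(size=(nS, nU, nW))                # reward r(s, u, w)
    nxt = rng.integers(0, nS, size=(nS, nU, nW))     # deterministic s'(s, u, w)

    V = np.zeros(nS)   # zero start (the paper permits arbitrary PSD initializations)
    for _ in range(200):
        Q = r + gamma * V[nxt]                       # shape (nS, nU, nW)
        V_new = Q.min(axis=2).max(axis=1)            # min over w, then max over u
        if np.max(np.abs(V_new - V)) < 1e-10:        # converged upper value
            break
        V = V_new
    print(V)
    ```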

  18. Automated reliability assessment for spectroscopic redshift measurements

    NASA Astrophysics Data System (ADS)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

    Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10⁶) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aims. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods. We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results. As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions. Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for spectroscopic redshift measurements. This newly defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST. A table of the reclassified VVDS redshifts and reliability flags is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A53
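
    A minimal sketch of the unsupervised step might look as follows: summarize each redshift posterior PDF by a few descriptors and cluster the descriptor vectors. The feature set and the stand-in PDFs below are illustrative assumptions, not the paper's actual choices, and scikit-learn's KMeans stands in for whichever clustering algorithm the pipeline uses.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def pdf_descriptors(z_grid, pdf):
        """A few summary features of a redshift posterior PDF (assumed set)."""
        pdf = pdf / np.trapz(pdf, z_grid)
        mean = np.trapz(z_grid * pdf, z_grid)
        sigma = np.sqrt(np.trapz((z_grid - mean) ** 2 * pdf, z_grid))
        peak = pdf.max()
        entropy = -np.trapz(pdf * np.log(pdf + 1e-300), z_grid)
        return [sigma, peak, entropy]

    # Hypothetical stand-in PDFs: narrow unimodal vs. broad bimodal posteriors.
    rng = np.random.default_rng(3)
    z = np.linspace(0.0, 2.0, 500)
    gauss = lambda mu, s: np.exp(-0.5 * ((z - mu) / s) ** 2)
    pdfs = [gauss(rng.uniform(0.2, 1.8), 0.01) for _ in range(50)] + \
           [gauss(rng.uniform(0.2, 1.8), 0.05) +
            gauss(rng.uniform(0.2, 1.8), 0.05) for _ in range(50)]
    X = np.array([pdf_descriptors(z, p) for p in pdfs])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)
    ```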

  19. A Data System for a Rapid Evaluation Class of Subscale Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Hogge, Edward F.; Quach, Cuong C.; Vazquez, Sixto L.; Hill, Boyd L.

    2011-01-01

    A low-cost, rapid evaluation test aircraft is used to develop and test airframe damage diagnosis algorithms at Langley Research Center as part of NASA's Aviation Safety Program. The remotely operated subscale aircraft is instrumented with sensors to monitor structural response during flight. Data are collected for good and compromised airframe configurations to develop data-driven models for diagnosing airframe state. This paper describes the data acquisition system (DAS) of the rapid evaluation test aircraft. A PC/104 form-factor DAS was developed to allow use of MATLAB/Simulink simulation code in Langley's existing subscale aircraft flight test infrastructure. The small scale of the test aircraft permitted laboratory testing of the actual flight article under controlled conditions. The low cost and modularity of the DAS permitted adaptation to various flight experiment requirements.

  20. Chemical factor analysis of skin cancer FTIR-FEW spectroscopic data

    NASA Astrophysics Data System (ADS)

    Bruch, Reinhard F.; Sukuta, Sydney

    2002-03-01

    Chemical factor analysis (CFA) algorithms were applied to transform complex Fourier transform infrared fiberoptic evanescent wave (FTIR-FEW) spectra of normal and malignant skin tissue into factor spaces for analysis and classification. The factor space approach classified melanoma beyond prior pathological classifications: cluster diagrams related specific biochemical alterations to health states, allowing diagnosis with greater biochemical specificity, resolving biochemical component spectra, and employing health-state eigenvector angular configurations as disease-state sensors. This study demonstrated a wealth of new information from in vivo FTIR-FEW spectral tissue data, without extensive a priori information or clinically invasive procedures. In particular, we employed a variety of CFA methods to select the rank of spectroscopic data sets of normal, benign, and cancerous skin tissue. We used the Malinowski indicator function (IND), significance level, and F-tests to rank our data matrices. Normal skin tissue, melanoma, and benign tumors were modeled by four, two, and seven principal abstract factors, respectively. We also showed that the spectrum of the first eigenvalue was equivalent to the mean spectrum. The graphical depiction of angular disparities between the first abstract factors can be adopted as a new way to characterize and diagnose melanoma.
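
    Rank estimation with the Malinowski indicator function reduces to an eigenvalue computation. The sketch below uses one common form of the function, IND(n) = RE(n)/(c−n)² with RE(n) = sqrt(Σ_{j>n} λ_j / (r(c−n))), on synthetic three-component mixture spectra; the exact form of the formula and the test data are assumptions of this example, not taken from the paper.

    ```python
    import numpy as np

    def malinowski_ind(D):
        """Indicator function IND(n) for an r x c data matrix D (r >= c);
        the estimated rank is the n that minimizes IND (assumed form)."""
        r, c = D.shape
        lam = np.linalg.svd(D, compute_uv=False) ** 2   # eigenvalues of D^T D
        ind = []
        for n in range(1, c):
            re = np.sqrt(lam[n:].sum() / (r * (c - n)))  # real error RE(n)
            ind.append(re / (c - n) ** 2)
        return np.array(ind)

    # Hypothetical spectra: three underlying components plus noise.
    rng = np.random.default_rng(2)
    spectra = rng.random((3, 400))                      # component spectra
    conc = rng.random((60, 3))                          # mixing fractions
    D = conc @ spectra + rng.normal(0, 0.01, (60, 400))
    ind = malinowski_ind(D.T)                           # ensure rows >= columns
    print(np.argmin(ind) + 1)                           # expected rank: 3
    ```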

  1. Spectral CT data acquisition with Medipix3.1

    NASA Astrophysics Data System (ADS)

    Walsh, M. F.; Nik, S. J.; Procz, S.; Pichotka, M.; Bell, S. T.; Bateman, C. J.; Doesburg, R. M. N.; De Ruiter, N.; Chernoglazov, A. I.; Panta, R. K.; Butler, A. P. H.; Butler, P. H.

    2013-10-01

    This paper describes the acquisition of spectral CT images using the Medipix3.1 in spectroscopic mode, in which the chip combines 2 × 2 pixel clusters to increase the number of energy thresholds and counters from 2 to 8. During preliminary measurements, it was observed that the temperature, DAC and equalisation stability of the Medipix3.1 outperformed the Medipix3.0, while maintaining similar imaging quality. In this paper, the Medipix3.1 chips were assembled in a quad (2 × 2) layout, with the four ASICs bump-bonded to a silicon semiconductor doped as an np-junction diode. To demonstrate the biological imaging quality that is possible with the Medipix3.1, an image of a mouse injected with gold nano-particle contrast agent was obtained. CT acquisition in spectroscopic mode was enabled and examined by imaging a customised phantom containing multiple contrast agents and biological materials. These acquisitions showed a limitation of imaging performance depending on the counter used. Despite this, identification of multiple materials in the phantom was demonstrated using an in-house material decomposition algorithm. Furthermore, gold nano-particles were separated from biological tissues and bones within the mouse by means of image rendering.

  2. Robotic Spectroscopy at the Dark Sky Observatory

    NASA Astrophysics Data System (ADS)

    Rosenberg, Daniel E.; Gray, Richard O.; Mashburn, Jonathan; Swenson, Aaron W.; McGahee, Courtney E.; Briley, Michael M.

    2018-06-01

    Spectroscopic observations using the classification-resolution Gray-Miller spectrograph attached to the Dark Sky Observatory 32 inch telescope (Appalachian State University, North Carolina) have been automated with a robotic script called the “Robotic Spectroscopist” (RS). RS runs autonomously during the night and controls all operations related to spectroscopic observing. At the heart of RS are a number of algorithms that first select and center the target star in the field of an imaging camera and then on the spectrograph slit. RS monitors the observatory weather station, and suspends operations and closes the dome when weather conditions warrant, and can reopen and resume observations when the weather improves. RS selects targets from a list using a queue-observing protocol based on observer-assigned priorities, but also uses target-selection criteria based on weather conditions, especially seeing. At the end of the night RS transfers the data files to the main campus, where they are reduced with an automatic pipeline. Our experience has shown that RS is more efficient and consistent than a human observer, and produces data sets that are ideal for automatic reduction. RS should be adaptable for use at other similar observatories, and so we are making the code freely available to the astronomical community.

  3. Practical protocols for fast histopathology by Fourier transform infrared spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Keith, Frances N.; Reddy, Rohith K.; Bhargava, Rohit

    2008-02-01

    Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that combines the molecular selectivity of spectroscopy with the spatial specificity of optical microscopy. We demonstrate a new concept in obtaining high fidelity data using commercial array detectors coupled to a microscope and Michelson interferometer. Next, we apply the developed technique to rapidly provide automated histopathologic information for breast cancer. Traditionally, disease diagnoses are based on optical examinations of stained tissue and involve a skilled recognition of morphological patterns of specific cell types (histopathology). Consequently, histopathologic determinations are a time consuming, subjective process with innate intra- and inter-operator variability. Utilizing endogenous molecular contrast inherent in vibrational spectra, specially designed tissue microarrays and pattern recognition of specific biochemical features, we report an integrated algorithm for automated classifications. The developed protocol is objective, statistically significant and, being compatible with current tissue processing procedures, holds potential for routine clinical diagnoses. We first demonstrate that the classification of tissue type (histology) can be accomplished in a manner that is robust and rigorous. Since data quality and classifier performance are linked, we quantify the relationship through our analysis model. Last, we demonstrate the application of the minimum noise fraction (MNF) transform to improve tissue segmentation.

  4. A Fast Variant of 1H Spectroscopic U-FLARE Imaging Using Adjusted Chemical Shift Phase Encoding

    NASA Astrophysics Data System (ADS)

    Ebel, Andreas; Dreher, Wolfgang; Leibfritz, Dieter

    2000-02-01

    So far, fast spectroscopic imaging (SI) using the U-FLARE sequence has provided metabolic maps indirectly via Fourier transformation (FT) along the chemical shift (CS) dimension and subsequent peak integration. However, a large number of CS encoding steps Nω is needed to cover the spectral bandwidth and to achieve sufficient spectral resolution for peak integration even if the number of resonance lines is small compared to Nω and even if only metabolic images are of interest and not the spectra in each voxel. Other reconstruction algorithms require extensive prior knowledge, starting values, and/or model functions. An adjusted CS phase encoding scheme (APE) can be used to overcome these drawbacks. It incorporates prior knowledge only about the resonance frequencies present in the sample. Thus, Nω can be reduced by a factor of 4 for many 1H in vivo studies while no spectra have to be reconstructed, and no additional user interaction, prior knowledge, starting values, or model function are required. Phantom measurements and in vivo experiments on rat brain have been performed at 4.7 T to test the feasibility of the method for proton SI.

  5. Continuous wave optical spectroscopic system for use in magnetic resonance imaging scanners for the measurement of changes in hemoglobin oxygenation states in humans

    NASA Astrophysics Data System (ADS)

    Hulvershorn, Justin; Bloy, Luke; Leigh, John S.; Elliott, Mark A.

    2003-09-01

    A continuous-wave near-infrared three-wavelength laser diode spectroscopic (NIRS) system designed for use in magnetic resonance imaging (MRI) scanners is described. This system measures in vivo changes in the concentrations of oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) in humans. An algorithm is implemented to map changes in light intensity to changes in the concentrations of Hb and HbO. The system's signal-to-noise ratio is 3.4×10³ per wavelength on an Intralipid phantom with 10 Hz resolution. To demonstrate the system's performance in vivo, data taken on the human forearm during arterial occlusion, as well as data taken on the forehead during extended breath holds, are presented. The results show that the instrument is an extremely sensitive detector of hemodynamic changes in human tissue at high temporal resolution. NIRS directly measures changes in the concentrations of hemoglobin species. For this reason, NIRS will be useful in determining the sources of MRI signal changes in the body due to hemodynamic causes, while the precise anatomic information provided by MRI will aid in localizing NIRS contrast and improving the accuracy of models of light transport through tissue.
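
    The intensity-to-concentration mapping in CW-NIRS is commonly based on the modified Beer-Lambert law, which makes the inversion a small linear least-squares problem. The sketch below illustrates that mapping; the wavelengths, extinction coefficients, pathlength, and differential pathlength factor are illustrative placeholders, not the instrument's calibrated values.

    ```python
    import numpy as np

    wavelengths = [690, 785, 830]            # nm (typical CW-NIRS choices)
    # Rows: wavelengths; columns: (HbO, Hb) extinction coefficients
    # [1/(mM*cm)]; values here are illustrative placeholders only.
    E = np.array([[0.35, 2.10],
                  [0.70, 0.80],
                  [1.00, 0.70]])
    d, dpf = 3.0, 6.0                        # source-detector separation [cm], path factor

    def delta_conc(I0, I):
        """Concentration changes [mM] from baseline and current intensities."""
        dOD = -np.log10(np.asarray(I) / np.asarray(I0))   # delta optical density
        # Solve dOD = (E @ dC) * d * dpf in the least-squares sense
        dC, *_ = np.linalg.lstsq(E * d * dpf, dOD, rcond=None)
        return dC                            # [delta HbO, delta Hb]

    print(delta_conc(I0=[1.00, 1.00, 1.00], I=[0.97, 0.95, 0.94]))
    ```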

  6. Toward spectroscopically accurate global ab initio potential energy surface for the acetylene-vinylidene isomerization

    NASA Astrophysics Data System (ADS)

    Han, Huixian; Li, Anyang; Guo, Hua

    2014-12-01

    A new full-dimensional global potential energy surface (PES) for the acetylene-vinylidene isomerization on the ground (S0) electronic state has been constructed by fitting ~37 000 high-level ab initio points using the permutation invariant polynomial-neural network method with a root mean square error of 9.54 cm⁻¹. The geometries and harmonic vibrational frequencies of acetylene, vinylidene, and all other stationary points (two distinct transition states and one secondary minimum in between) have been determined on this PES. Furthermore, acetylene vibrational energy levels have been calculated using the Lanczos algorithm with an exact (J = 0) Hamiltonian. The vibrational energies up to 12 700 cm⁻¹ above the zero-point energy are in excellent agreement with the experimentally derived effective Hamiltonians, suggesting that the PES is approaching spectroscopic accuracy. In addition, analyses of the wavefunctions confirm the experimentally observed emergence of the local bending and counter-rotational modes in the highly excited bending vibrational states. The reproduction of the experimentally derived effective Hamiltonians for highly excited bending states signals the coming of age for the ab initio based PES, which can now be trusted for studying the isomerization reaction.

  7. A rapid ATR-FTIR spectroscopic method for detection of sibutramine adulteration in tea and coffee based on hierarchical cluster and principal component analyses.

    PubMed

    Cebi, Nur; Yilmaz, Mustafa Tahsin; Sagdic, Osman

    2017-08-15

    Sibutramine may be illicitly included in herbal slimming foods and supplements marketed as "100% natural" to enhance weight loss. Considering public health and legal regulations, there is an urgent need for effective, rapid and reliable techniques to detect sibutramine in dietetic herbal foods, teas and dietary supplements. This research comprehensively explored, for the first time, detection of sibutramine in green tea, green coffee and mixed herbal tea using the ATR-FTIR spectroscopic technique combined with chemometrics. Hierarchical cluster analysis and principal component analysis (PCA) were employed in the spectral range 2746-2656 cm⁻¹ for classification and discrimination through Euclidean distance and Ward's algorithm. Unadulterated and adulterated samples were classified and discriminated with respect to their sibutramine contents with perfect accuracy, without any false prediction. The results suggest that the presence of the active substance could be successfully determined at levels in the range of 0.375-12 mg in a total of 1.75 g of green tea, green coffee and mixed herbal tea by using the FTIR-ATR technique combined with chemometrics. Copyright © 2017 Elsevier Ltd. All rights reserved.
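
    The Ward-linkage clustering step is a one-liner with standard tooling. The sketch below applies it to synthetic spectra restricted to the quoted diagnostic window; the band shapes and noise levels are invented for illustration and bear no relation to the paper's measured spectra.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Synthetic stand-ins: "adulterated" samples get an extra marker band.
    rng = np.random.default_rng(4)
    wn = np.linspace(2746, 2656, 90)                    # wavenumber axis [cm^-1]
    base = np.exp(-0.5 * ((wn - 2700) / 30) ** 2)
    clean = base + rng.normal(0, 0.01, (10, wn.size))
    peak = 0.3 * np.exp(-0.5 * ((wn - 2680) / 5) ** 2)  # hypothetical marker band
    adulterated = base + peak + rng.normal(0, 0.01, (10, wn.size))
    X = np.vstack([clean, adulterated])

    Z = linkage(X, method="ward", metric="euclidean")   # Ward's algorithm
    print(fcluster(Z, t=2, criterion="maxclust"))       # two clusters expected
    ```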

  8. Characterization of SiGe/Ge heterostructures and graded layers using variable angle spectroscopic ellipsometry

    NASA Technical Reports Server (NTRS)

    Croke, E. T.; Wang, K. L.; Heyd, A. R.; Alterovitz, S. A.; Lee, C. H.

    1996-01-01

    Variable angle spectroscopic ellipsometry (VASE) has been used to characterize Si(x)Ge(1-x)/Ge superlattices (SLs) grown on Ge substrates and thick Si(x)Ge(1-x)/Ge heterostructures grown on Si substrates. Our VASE analysis yielded the thicknesses and alloy compositions of all layers within the optical penetration depth of the surface. In addition, strain effects were observed in the VASE results for layers under both compressive and tensile strain. Results for the SL structures were found to be in close agreement with high resolution x-ray diffraction measurements made on the same samples. The VASE analysis has been upgraded to characterize linearly graded Si(x)Ge(1-x) buffer layers. The algorithm has been used to determine the total thickness of the buffer layer along with the start and end alloy composition by breaking the total thickness into many (typically more than 20) equal layers. Our ellipsometric results for 1 μm buffer layers graded in the ranges 0.7 ≤ x ≤ 1.0 and 0.5 ≤ x ≤ 1.0 are presented, and compare favorably with the nominal values.

  9. Active coherent laser spectrometer for remote detection and identification of chemicals

    NASA Astrophysics Data System (ADS)

    MacLeod, Neil A.; Weidmann, Damien

    2012-10-01

    Currently, there exists a capability gap for the remote detection and identification of threat chemicals. We report here on the development of an Active Coherent Laser Spectrometer (ACLaS) operating in the thermal infrared and capable of multi-species stand-off detection of chemicals at sub-ppm·m levels. A bench-top prototype of the instrument has been developed using distributed feedback mid-infrared quantum cascade lasers as spectroscopic sources. The instrument provides active eye-safe illumination of a topographic target and subsequent spectroscopic analysis through optical heterodyne detection of the diffuse backscattered field. Chemical selectivity is provided by the combination of the narrow laser spectral bandwidth (typically < 2 MHz) and frequency tunability that allows the recording of the full absorption spectrum of any species within the instrument line of sight. Stand-off detection at distances up to 12 m has been demonstrated on light molecules such as H2O, CH4 and N2O. A physical model of the stand-off detection scenario including ro-vibrational molecular absorption parameters was used in conjunction with a fitting algorithm to retrieve quantitative mixing ratio information on multiple absorbers.

  10. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
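
    For contrast with the quantum version, the classical Metropolis rule the abstract refers to is brief enough to sketch: propose a local change and accept it with probability min(1, exp(−βΔE)). Below it is applied to a small 2-D Ising model; the lattice size, temperature, and step count are arbitrary example choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    L, beta, steps = 16, 0.6, 200_000
    spins = rng.choice([-1, 1], size=(L, L))

    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)            # pick a random spin
        nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] \
           + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        dE = 2.0 * spins[i, j] * nb                  # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1                        # Metropolis acceptance

    print("magnetization per spin:", spins.mean())
    ```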

  11. Fast Transformation of Temporal Plans for Efficient Execution

    NASA Technical Reports Server (NTRS)

    Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul

    1998-01-01

    Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process, and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
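
    The distance-graph computation underlying dispatchability can be illustrated compactly. In a simple temporal network an edge u → v with weight w encodes t(v) − t(u) ≤ w, and all-pairs shortest paths expose the implied constraints; the paper matches the complexity of Johnson's algorithm, but the Floyd-Warshall sketch below shows the same computation in its simplest form.

    ```python
    INF = float("inf")

    def all_pairs(n, edges):
        """All-pairs shortest paths over a temporal-constraint distance graph."""
        d = [[INF] * n for _ in range(n)]
        for i in range(n):
            d[i][i] = 0.0
        for u, v, w in edges:
            d[u][v] = min(d[u][v], w)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d   # a negative d[i][i] would signal an inconsistent plan

    # Events 0,1,2: event 1 occurs between 2 and 5 time units after event 0, etc.
    edges = [(0, 1, 5), (1, 0, -2), (1, 2, 4), (2, 1, -1)]
    print(all_pairs(3, edges))
    ```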

  12. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories of electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
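
    The non-dominated sorting fitness described above is easy to make concrete: count, for each solution, how many others dominate it. The objective pairs below are invented stand-ins for (flight time, propellant mass) outputs, both to be minimized.

    ```python
    def dominates(a, b):
        """a dominates b if no worse in every objective and better in one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def domination_counts(points):
        """Per-solution count of dominating solutions (fewer is fitter)."""
        return [sum(dominates(q, p) for q in points) for p in points]

    # Made-up (flight time [days], propellant mass [kg]) trajectory outcomes.
    trajectories = [(300.0, 55.0), (320.0, 48.0), (310.0, 60.0), (290.0, 70.0)]
    counts = domination_counts(trajectories)
    pareto = [p for p, c in zip(trajectories, counts) if c == 0]
    print(counts, pareto)
    ```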

  13. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern combinatorics. This theory also has a strong impact on the recent development of algorithms, and several areas, like parameterized complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the most algorithmically important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^(k²) · (h+k−1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^((2k+1)·log k) · h^(2k) · 2^(2h²) · m). Our approach is based on a combinatorial object called rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^(O(k)) · h^(2k) · 2^(O(h)) · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.

  14. Preliminary user's manuals for DYNA3D and DYNAP. [In FORTRAN IV for CDC 7600 and Cray-1]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallquist, J. O.

    1979-10-01

    This report provides a user's manual for DYNA3D, an explicit three-dimensional finite-element code for analyzing the large deformation dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node solid elements, and the equations of motion are integrated by the central difference method. Post-processors for DYNA3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories. A user's manual for DYNAP is also provided. 23 figures.

  15. Classical Statistics and Statistical Learning in Imaging Neuroscience

    PubMed Central

    Bzdok, Danilo

    2017-01-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896

  16. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.
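
    For orientation, here is a sketch of the baseline ERA procedure that ERA/DC modifies (the data-correlation variant replaces the Hankel matrices with correlation matrices, which is not shown): form Hankel matrices from impulse-response samples, truncate an SVD, and read modal frequencies and damping from the realized state matrix. The synthetic single-mode response is invented for the example.

    ```python
    import numpy as np

    def era_modes(y, order, dt, rows=40, cols=40):
        """Baseline ERA: modal frequencies [Hz] and damping ratios from
        impulse-response samples y (not the ERA/DC variant)."""
        H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
        H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
        U, s, Vt = np.linalg.svd(H0)
        U, s, Vt = U[:, :order], s[:order], Vt[:order]   # minimum-order truncation
        S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
        A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt    # realized state matrix
        z = np.linalg.eigvals(A)
        lam = np.log(z) / dt                             # continuous-time poles
        freq = np.abs(lam) / (2 * np.pi)
        zeta = -lam.real / np.abs(lam)
        return freq, zeta

    # Synthetic impulse response: one mode at 2 Hz with 1% damping, plus noise.
    dt, n = 0.01, 200
    t = np.arange(n) * dt
    wn = 2 * np.pi * 2.0
    y = np.exp(-0.01 * wn * t) * np.sin(wn * np.sqrt(1 - 0.01**2) * t)
    y += np.random.default_rng(6).normal(0, 1e-4, n)
    print(era_modes(y, order=2, dt=dt, rows=60, cols=60))
    ```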

  17. Systems aspects of COBE science data compression

    NASA Technical Reports Server (NTRS)

    Freedman, I.; Boggess, E.; Seiler, E.

    1993-01-01

    A general approach to compression of diverse data from large scientific projects has been developed; this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and data retrieval significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.

  18. Full cycle rapid scan EPR deconvolution algorithm.

    PubMed

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs equals the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor of two increase in the scan rate was achieved, in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Full cycle rapid scan EPR deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs equals the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor of two increase in the scan rate was achieved, in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan.

  20. A New Platform for Profiling Degradation-Related Impurities Via Exploiting the Opportunities Offered by Ion-Selective Electrodes: Determination of Both Diatrizoate Sodium and Its Cytotoxic Degradation Product.

    PubMed

    Riad, Safaa M; Abd El-Rahman, Mohamed K; Fawaz, Esraa M; Shehata, Mostafa A

    2018-05-01

    Although the ultimate goal of administering active pharmaceutical ingredients (APIs) is to save countless lives, the presence of impurities and/or degradation products in APIs or formulations may cause harmful physiological effects. Today, impurity profiling (i.e., the identity as well as the quantity of impurity in a pharmaceutical) is receiving critical attention from regulatory authorities. Despite the predominant use of spectroscopic and chromatographic methods over electrochemical methods for impurity profiling of APIs, this work investigates the opportunities offered by electroanalytical methods, particularly ion-selective electrodes (ISEs), for profiling degradation-related impurities (DRIs) compared with conventional spectroscopic and chromatographic methods. For a meaningful comparison, diatrizoate sodium (DTA) was chosen as the anionic X-ray contrast agent based on its susceptibility to deacetylation into its cytotoxic and mutagenic degradation product, 3,5-diamino-2,4,6-triiodobenzoic acid (DTB). This cationic diamino compound can also be detected as an impurity in the final product because it is used as a synthetic precursor for the synthesis of DTA. In this study, four novel sensitive and selective sensors for the determination of both DTA and its cytotoxic degradation products are presented. Sensors I and II were developed for the determination of the anionic drug, DTA, and sensors III and IV were developed for the determination of the cationic cytotoxic impurity. The use of these novel sensors not only provides a stability-indicating method for the selective determination of DTA in the presence of its degradation product, but also permits DRI profiling. Moreover, a great advantage of these proposed ISE systems is their higher sensitivity for the quantification of DTB relative to other spectroscopic and chromatographic methods, so they can measure trace amounts of DTB impurities in DTA bulk powder and pharmaceutical formulation without a need for preliminary separation.

  1. Searching for intermediate-mass black holes via optical variability

    NASA Astrophysics Data System (ADS)

    Adler-Levine, Ryan; Moran, Edward C.; Kay, Laura

    2018-01-01

    A handful of nearby dwarf galaxies with intermediate-mass black holes (IMBHs) in their nuclei display significant optical variability on short timescales. To investigate whether dwarf-galaxy AGNs as a class exhibit similar variability, we have monitored a sample of low-mass galaxies that possess spectroscopically confirmed type 1 AGNs. However, because of the variations in seeing, focus, and guiding errors that occur in images taken at different epochs, analyses based on aperture photometry are ineffective. We have thus developed a new method for matching point-spread functions in images, which permits the use of image-subtraction photometry techniques. Applying this method to our photometric data, we have confirmed that several galaxies with IMBHs are indeed variable, which suggests that variability can be used to search for IMBHs in low-mass galaxies whose emission-line properties are ambiguous.
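
    As a minimal sketch of the PSF-matching idea (assuming purely Gaussian PSFs, which is not necessarily the authors' method), the sharper image can be degraded to the seeing of the blurrier one by convolving with a Gaussian whose width is the quadrature difference of the two PSF widths, after which the frames can be differenced:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def match_and_subtract(img_sharp, img_blurry, sigma_sharp, sigma_blurry):
          # Kernel width that maps the sharp PSF onto the blurry PSF.
          sigma_kernel = np.sqrt(sigma_blurry**2 - sigma_sharp**2)
          matched = gaussian_filter(img_sharp, sigma_kernel)
          return img_blurry - matched      # residual flux flags variable sources

      rng = np.random.default_rng(0)
      scene = rng.poisson(5.0, (64, 64)).astype(float)
      epoch1 = gaussian_filter(scene, 1.0)             # good seeing
      epoch2 = gaussian_filter(scene, 1.8)             # poor seeing
      diff = match_and_subtract(epoch1, epoch2, 1.0, 1.8)
      print("residual rms:", diff.std())               # near zero for constant sources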

  2. Hard X-ray and gamma-ray imaging spectroscopy for the next solar maximum

    NASA Technical Reports Server (NTRS)

    Hudson, H. S.; Crannell, C. J.; Dennis, B. R.; Spicer, D. S.; Davis, J. M.; Hurford, G. J.; Lin, R. P.

    1990-01-01

    The objectives and principles of a single spectroscopic imaging package that can provide effective imaging in the hard X-ray and gamma-ray ranges are described. Called the High-Energy Solar Physics (HESP) mission instrument for solar investigation, the device is based on rotating modulation collimators with germanium semiconductor spectrometers. The instrument is planned to incorporate thick modulation plates, and the range of coverage is discussed. The optics permit the acquisition of high-contrast hard X-ray images of small and medium-sized flares with large signal-to-noise ratios. The detectors allow angular resolution of less than 1 arcsec, high time resolution, and spectral resolution of about 1 keV. The HESP package is considered an effective and important instrument for efficiently investigating the high-energy solar events of the near-term future.

  3. Newly discovered Wolf-Rayet and weak emission-line central stars of planetary nebulae

    NASA Astrophysics Data System (ADS)

    DePew, K.; Parker, Q. A.; Miszalski, B.; De Marco, O.; Frew, D. J.; Acker, A.; Kovacevic, A. V.; Sharp, R. G.

    2011-07-01

    We present the spectra of 32 previously unpublished confirmed and candidate Wolf-Rayet ([WR]) and weak emission-line (WELS) central stars of planetary nebulae (CSPNe). 18 stars have been discovered in the Macquarie/AAO/Strasbourg Hα (MASH) PN survey sample, and we have also uncovered 14 confirmed and candidate [WR]s and WELS among the CSPNe of previously known PNe. Spectral classifications have been undertaken using both the Acker & Neiner and the Crowther, De Marco & Barlow schemes. 22 members of this sample are identified as probable [WR]s; the remaining 10 appear to be WELS. Observations undertaken as part of the MASH spectroscopic survey have now increased the number of known [WR]s by ~30 per cent. This will permit a better analysis of [WR] subclass distribution, metallicity effects and evolutionary sequences in these uncommon objects.

  4. Molecular hydrogen in the vicinity of NGC 7538 IRS 1 and IRS 2 - Temperature and ortho-to-para ratio

    NASA Technical Reports Server (NTRS)

    Hoban, Susan; Reuter, Dennis C.; Mumma, Michael J.; Storrs, Alex D.

    1991-01-01

    Near-infrared spectroscopic observations of the active star-forming region near NGC 7538 IRS 1 and IRS 2 were made. The relative intensities of the v = 1-0 Q(1), Q(3), and Q(5) lines of molecular hydrogen are used to calculate a rotational excitation temperature. Comparison of the measured intensity of the Q(2) transition with the intensities of Q(1) and Q(3) permitted retrieval of the ortho-to-para hydrogen ratio. It is found that an ortho-to-para ratio between 1.6 and 2.35 is needed to explain the Q-branch line intensity ratios, depending on the excitation model used. This range of ortho-to-para ratios implies a range of molecular hydrogen formation temperatures of approximately 105 K to 140 K.
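
    The rotational temperature follows from the standard Boltzmann relation between optically thin line intensities; as a sketch in textbook form (not quoted from the paper), for two lines with upper-level energies E_1 and E_2:

      \frac{I_1}{I_2} = \frac{g_1 A_1 \nu_1}{g_2 A_2 \nu_2}\,
      \exp\!\left(-\frac{E_1 - E_2}{k\,T_\mathrm{rot}}\right)
      \quad\Longrightarrow\quad
      T_\mathrm{rot} = \frac{E_1 - E_2}{k\,\ln\!\left[\dfrac{I_2\, g_1 A_1 \nu_1}{I_1\, g_2 A_2 \nu_2}\right]}

    Deviations of the measured Q(2)/Q(1) ratio from the value predicted with a statistical (3:1) ortho-to-para weighting then constrain the actual ortho-to-para ratio.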

  5. Near-infrared Raman spectroscopy of single optically trapped biological cells

    NASA Astrophysics Data System (ADS)

    Xie, Changan; Dinno, Mumtaz A.; Li, Yong-Qing

    2002-02-01

    We report on the development and testing of a compact laser tweezers Raman spectroscopy (LTRS) system. The system combines optical trapping and near-infrared Raman spectroscopy for manipulation and identification of single biological cells in solution. A low-power diode laser at 785 nm was used for both trapping and excitation for Raman spectroscopy of the suspended microscopic particles. The design of the LTRS system provides high sensitivity and permits real-time spectroscopic measurements of the biological sample. The system was calibrated by use of polystyrene microbeads and tested on living blood cells and on both living and dead yeast cells. As expected, different images and Raman spectra were observed for the different cells. The LTRS system may provide a valuable tool for the study of fundamental cellular processes and the diagnosis of cellular disorders.

  6. Magnetic fingerprint of individual Fe4 molecular magnets under compression by a scanning tunnelling microscope

    NASA Astrophysics Data System (ADS)

    Burgess, Jacob A. J.; Malavolti, Luigi; Lanzilotto, Valeria; Mannini, Matteo; Yan, Shichao; Ninova, Silviya; Totti, Federico; Rolf-Pissarczyk, Steffen; Cornia, Andrea; Sessoli, Roberta; Loth, Sebastian

    2015-09-01

    Single-molecule magnets (SMMs) present a promising avenue to develop spintronic technologies. Addressing individual molecules with electrical leads in SMM-based spintronic devices remains a ubiquitous challenge: interactions with metallic electrodes can drastically modify the SMM's properties by charge transfer or through changes in the molecular structure. Here, we probe electrical transport through individual Fe4 SMMs using a scanning tunnelling microscope at 0.5 K. Correlation of topographic and spectroscopic information permits identification of the spin excitation fingerprint of intact Fe4 molecules. Building from this, we find that the exchange coupling strength within the molecule's magnetic core is significantly enhanced. First-principles calculations support the conclusion that this is the result of confinement of the molecule in the two-contact junction formed by the microscope tip and the sample surface.

  7. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians.

    PubMed

    He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-03-12

    Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.
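
    The influence of a missing decision node is easy to see in code. The sketch below is illustrative only: the branch structure follows the abstract's description (no post-bronchodilator FEV1/FVC node in algorithm 1), with the common >0.70 ratio and >12%-and-200-mL reversibility thresholds, not the exact published SIAs.

      def sia_1(pre_ratio, fev1_gain_pct=0.0, fev1_gain_ml=0.0):
          if pre_ratio > 0.70:
              return "normal"                    # no bronchodilator challenge
          if fev1_gain_pct > 12 and fev1_gain_ml > 200:
              return "suspect asthma"
          return "obstruction, unspecified"      # no post-BD FEV1/FVC node

      def sia_2(pre_ratio, post_ratio, fev1_gain_pct, fev1_gain_ml):
          reversible = fev1_gain_pct > 12 and fev1_gain_ml > 200
          if post_ratio >= 0.70:
              return "suspect asthma" if reversible else "normal"
          # Post-BD obstruction persists: COPD must be considered.
          return "suspect asthma and COPD" if reversible else "suspect COPD"

      # Same data, two interpretations:
      print(sia_1(0.68, 15, 250))            # -> suspect asthma
      print(sia_2(0.68, 0.66, 15, 250))      # -> suspect asthma and COPD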

  8. Autonomous Flight Safety System - Phase III

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.

  9. [The etiological differentiation of neuromuscular dysphagia by x-ray cinematography].

    PubMed

    Brühlmann, W

    1991-12-01

    850 patients with dysphagia were examined by X-ray cinematography. On the basis of these examinations, the normal sequence of swallowing is compared with the abnormalities observed, and the technique is described. An algorithm has been developed, based on the symmetry or asymmetry of the abnormalities and on muscle tone, which permits classification into the various aetiological groups. In addition, specific features of individual diseases often make it possible to arrive at a definite diagnosis.

  10. Powered Descent Trajectory Guidance and Some Considerations for Human Lunar Landing

    NASA Technical Reports Server (NTRS)

    Sostaric, Ronald R.

    2007-01-01

    The Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) development will enable an accurate (better than 100 m) landing on the lunar surface. This technology will also permit autonomous (independent of the ground) avoidance of hazards detected in real time. A preliminary trajectory guidance algorithm capable of supporting these tasks has been developed and demonstrated in simulations. Early results suggest that, with expected improvements in sensor technology and lunar mapping, the mission objectives are achievable.
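
    For flavor, a classic two-point boundary-value guidance law of the kind used in powered-descent studies (this is the textbook "E-guidance" acceleration command, offered as a hedged sketch, not the ALHAT algorithm; states and gravity below are placeholders):

      import numpy as np

      def guidance_accel(r, v, r_t, v_t, t_go, g):
          """Acceleration meeting position/velocity targets after t_go seconds."""
          a_total = -2.0 * (v_t - v) / t_go + 6.0 * (r_t - r - v * t_go) / t_go**2
          return a_total - g                    # thrust must also cancel gravity

      r = np.array([0.0, 0.0, 2000.0])          # position, m
      v = np.array([0.0, 0.0, -40.0])           # velocity, m/s
      g = np.array([0.0, 0.0, -1.62])           # lunar gravity, m/s^2
      a_cmd = guidance_accel(r, v, np.zeros(3), np.zeros(3), t_go=60.0, g=g)
      print("commanded thrust acceleration:", a_cmd)

    Retargeting to a hazard-free site then amounts to changing r_t while the law continuously recomputes the command.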

  11. A cloud masking algorithm for EARLINET lidar systems

    NASA Astrophysics Data System (ADS)

    Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina

    2015-04-01

    Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can be applied only to cloud-free observations. Until now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of the lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed at the highest available temporal and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detections due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and the current strengths and limitations of the algorithm, using lidar scenes from different lidar systems at different locations across Europe.
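
    A minimal sketch of the criteria combination (illustrative thresholds and percentile normalization, not the EARLINET implementation):

      import numpy as np
      from scipy.ndimage import sobel

      def cloud_mask(signal, value_thresh=0.6, edge_thresh=0.5):
          """signal: 2-D array (time x range) of uncalibrated lidar returns."""
          lo, hi = np.percentile(signal, [1, 99])
          norm = np.clip((signal - lo) / (hi - lo), 0.0, 1.0)
          edges = np.hypot(sobel(norm, axis=0), sobel(norm, axis=1))
          edges /= edges.max() + 1e-12
          # Clouds combine a strong return with sharp time-height edges;
          # aerosol layers are strong but smooth, so the edge criterion
          # suppresses their misclassification.
          return (norm > value_thresh) & (edges > edge_thresh)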

  12. Effect of Non-rigid Registration Algorithms on Deformation Based Morphometry: A Comparative Study with Control and Williams Syndrome Subjects

    PubMed Central

    Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.

    2014-01-01

    Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439

  13. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges developers face in providing user-friendly graphical interfaces, BrainIACS is implemented entirely with freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described in XML, which our system parses to create the corresponding user interface elements. Most commonly used elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported; some, such as the input box, support input validation. Changes to a user interface, such as the addition and deletion of elements, are made by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript, after compilation of the front-end code), it is platform independent; the only requirements are that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
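
    A minimal sketch of XML-driven widget generation (the element names and attributes here are hypothetical, not the BrainIACS schema):

      import xml.etree.ElementTree as ET

      SPEC = """
      <interface algorithm="skull_strip">
        <checkbox name="bias_correct" label="Apply bias correction" default="true"/>
        <input name="iterations" label="Iterations" type="int" min="1" max="50"/>
        <dropdown name="atlas" label="Atlas">
          <option>adult</option>
          <option>pediatric</option>
        </dropdown>
      </interface>
      """

      def build_widgets(xml_text):
          # Turn each XML element into a widget description a front-end
          # could render; dropdowns also carry their option list.
          widgets = []
          for elem in ET.fromstring(xml_text):
              spec = {"kind": elem.tag, **elem.attrib}
              if elem.tag == "dropdown":
                  spec["options"] = [o.text for o in elem.findall("option")]
              widgets.append(spec)
          return widgets

      for w in build_widgets(SPEC):
          print(w)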

  14. Adaptive Gaussian mixture models for pre-screening in GPR data

    NASA Astrophysics Data System (ADS)

    Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.

    2011-06-01

    Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can be performed only on a small subset of the data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms that process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and achieve a high probability of detection, but may tolerate a false-alarm rate higher than the overall system requirement. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
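
    A minimal sketch of an online (Stauffer-Grimson-style) adaptive GMM update on a scalar feature (parameters illustrative, not the paper's configuration):

      import numpy as np

      class OnlineGMM:
          def __init__(self, k=3, alpha=0.02, match_sigma=2.5):
              self.alpha, self.match_sigma = alpha, match_sigma
              self.mu = np.linspace(-1.0, 1.0, k)       # component means
              self.var = np.ones(k)                     # component variances
              self.w = np.full(k, 1.0 / k)              # component weights

          def update(self, x):
              """Return True if x is anomalous (matches no component)."""
              d2 = (x - self.mu) ** 2 / self.var
              i = np.argmin(d2)
              if d2[i] < self.match_sigma ** 2:         # matched: adapt component i
                  self.w *= 1.0 - self.alpha
                  self.w[i] += self.alpha
                  self.mu[i] += self.alpha * (x - self.mu[i])
                  self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
                  return False
              j = np.argmin(self.w)                     # unmatched: replace weakest
              self.mu[j], self.var[j], self.w[j] = x, 4.0, 0.05
              self.w /= self.w.sum()
              return True

      model = OnlineGMM()
      stream = np.random.default_rng(0).normal(0.0, 0.5, 1000).tolist() + [6.0]
      flags = [model.update(x) for x in stream]
      print("anomaly flagged on spike:", flags[-1])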

  15. Satellite Snow-Cover Mapping: A Brief Review

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.

    1995-01-01

    Satellite snow mapping has been accomplished since 1966, initially using data from the reflective part of the electromagnetic spectrum, and now also employing data from the microwave part of the spectrum. Visible and near-infrared sensors can provide excellent spatial resolution from space, enabling detailed snow mapping. When digital elevation models are also used, snow mapping can provide realistic measurements of snow extent even in mountainous areas. Passive-microwave satellite data permit global snow cover to be mapped on a near-daily basis and estimates of snow depth to be made, but with relatively poor spatial resolution (approximately 25 km). Dense forest cover limits both techniques, and optical remote sensing is limited further by cloud-cover conditions. Satellite remote sensing of snow cover with imaging radars is still in the early stages of research, but shows promise at least for mapping wet or melting snow using C-band (5.3 GHz) synthetic aperture radar (SAR) data. Algorithms are being developed to map global snow and ice cover using Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) data beginning with the launch of the first EOS platform in 1998. Digital maps will be produced that will provide daily and maximum-weekly global snow, sea ice and lake ice cover at 1-km spatial resolution. Statistics will be generated on the extent and persistence of snow or ice cover in each pixel for each weekly map, cloud cover permitting. It will also be possible to generate snow- and ice-cover maps using MODIS data at 250- and 500-m resolution, and to study and map snow and ice characteristics such as albedo. Algorithms to map global snow cover using passive-microwave data have also been under development. Passive-microwave data offer the potential for determining not only snow cover, but snow water equivalent, depth and wetness under all sky conditions. A number of algorithms have been developed to utilize passive-microwave brightness temperatures to provide information on snow cover and water equivalent. The variability of vegetative cover and of snow grain size, globally, limits the utility of a single algorithm to map global snow cover.

  16. Real time optimization algorithm for wavefront sensorless adaptive optics OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan

    2017-02-01

    Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system that is suited to a clinical environment. A limitation of the performance and utilization of OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics (WSAO) with dual variable optical elements, we present a compact lens-based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable-focal-length lens to correct for the wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens, after linearization of the hysteresis in its piezoelectric actuators, for aberration correction to obtain near diffraction-limited imaging at the retina. A parallel-processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real-time optimization of the wavefront sensorless adaptive optics OCT, and its performance was compared with a coordinate-search algorithm. Cross-sectional images of the retinal layers and en face images of the cone photoreceptor mosaic, acquired in vivo from research volunteers before and after WSAO optimization, are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that it succeeds in drastically improving the signal while achieving a computational time of 1 ms per iteration, making it applicable to high-speed real-time applications.
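
    The comparison baseline is easy to sketch. Below is a generic coordinate-search loop for sensorless AO (the DONE algorithm instead fits an online surrogate model and is not reproduced here); measure_metric is a placeholder for an image-sharpness readout:

      import numpy as np

      def coordinate_search(measure_metric, n_modes=10, step=0.1, sweeps=3):
          coeffs = np.zeros(n_modes)            # actuator mode coefficients
          best = measure_metric(coeffs)
          for _ in range(sweeps):
              for i in range(n_modes):
                  for delta in (+step, -step):
                      trial = coeffs.copy()
                      trial[i] += delta
                      m = measure_metric(trial)
                      if m > best:              # keep only improving moves
                          best, coeffs = m, trial
          return coeffs, best

      # Toy metric peaked at a hidden aberration-cancelling setting.
      target = np.random.default_rng(1).normal(0.0, 0.2, 10)
      metric = lambda c: -np.sum((c - target) ** 2)
      coeffs, best = coordinate_search(metric)
      print("metric after optimization:", best)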

  17. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  18. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, showing small variability in the results. PMID:27780261
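
    A minimal sketch of the Gaussianization baseline that MaxEnt generalizes (per the abstract): a sum of many independent sources is approximately Gaussian, so mapping the observation's empirical CDF onto a Gaussian CDF estimates the inverse distortion up to an affine ambiguity. This is the baseline idea only, not the MaxEnt parameterization.

      import numpy as np
      from scipy.stats import norm

      def gaussianize(y):
          ranks = np.argsort(np.argsort(y))          # empirical CDF ranks
          u = (ranks + 0.5) / len(y)
          return norm.ppf(u)                         # approximates f^{-1}(y)

      rng = np.random.default_rng(0)
      s = rng.uniform(-1, 1, (8, 20000)).sum(axis=0) # near-Gaussian mixture
      y = np.tanh(0.8 * s)                           # unknown monotonic map f
      x_hat = gaussianize(y)
      print("corr(recovered, true):", np.corrcoef(x_hat, s)[0, 1])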

  19. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, showing small variability in the results.

  20. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH memory, the system permits loading and executing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Owing to these characteristics, the hardware is a versatile platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of the paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.

  1. Evolution in Cloud Population Statistics of the MJO: From AMIE Field Observations to Global-Cloud Permitting Models Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    This is a multi-institutional, collaborative project using a three-tier modeling approach to bridge field observations and global cloud-permitting models, with emphasis on the structural evolution of cloud populations through various large-scale environments. Our contribution was in data analysis for the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. We contributed in two areas of data analysis: the development of a synergistic cloud and precipitation classification that identifies different cloud types (e.g. shallow cumulus, cirrus) and precipitation types (shallow, deep, convective, stratiform) using profiling ARM observations, and the development of a quantitative precipitation-rate retrieval algorithm using profiling ARM observations. Similar efforts have been made in the past for precipitation (weather radars), but not for the millimeter-wavelength (cloud) radar deployed at the ARM sites.

  2. Automatic identification of cochlear implant electrode arrays for post-operative assessment

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Schuman, Theodore A.; Wright, Charles G.; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    Cochlear implantation is a procedure performed to treat profound hearing loss. Accurately determining the postoperative position of the implant in vivo would permit studying the correlations between implant position and hearing restoration. To solve this problem, we present an approach based on parametric Gradient Vector Flow snakes to segment the electrode array in post-operative CT. By combining this with existing methods for localizing intra-cochlear anatomy, we have developed a system that permits accurate assessment of the implant position in vivo. The system is validated using a set of seven temporal bone specimens. The algorithms were run on pre- and post-operative CTs of the specimens, and the results were compared to histological images. It was found that the position of the arrays observed in the histological images is in excellent agreement with the position of their automatically generated 3D reconstructions in the CT scans.
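
    The paper's segmentation uses parametric Gradient Vector Flow snakes; as a hedged stand-in, the classic active-contour implementation in scikit-image illustrates the same contour-evolution idea on synthetic data resembling a bright implant against a darker background:

      import numpy as np
      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      # Synthetic "CT": a bright disk on a dark background.
      img = np.zeros((128, 128))
      rr, cc = np.mgrid[0:128, 0:128]
      img[(rr - 64) ** 2 + (cc - 64) ** 2 < 20 ** 2] = 1.0
      img = gaussian(img, 3.0)

      # Circular initialization outside the structure of interest.
      theta = np.linspace(0, 2 * np.pi, 200)
      init = np.column_stack([64 + 35 * np.sin(theta), 64 + 35 * np.cos(theta)])

      snake = active_contour(img, init, alpha=0.015, beta=10.0, gamma=0.001)
      print("converged contour points:", snake.shape)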

  3. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
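
    The flavor of the chain-structured case can be sketched with a small dynamic program: split a chain of module weights into p contiguous blocks, one per processor, minimizing the most heavily loaded block (a minimal bottleneck sketch, not the Sum-Bottleneck path construction itself):

      from functools import lru_cache

      def bottleneck_partition(weights, p):
          n = len(weights)
          prefix = [0]
          for w in weights:
              prefix.append(prefix[-1] + w)

          @lru_cache(maxsize=None)
          def best(i, k):
              # Minimum bottleneck for modules i..n-1 split into k blocks.
              if k == 1:
                  return prefix[n] - prefix[i]
              return min(max(prefix[j] - prefix[i], best(j, k - 1))
                         for j in range(i + 1, n - k + 2))

          return best(0, p)

      print(bottleneck_partition([4, 7, 2, 9, 3, 5, 1], p=3))   # -> 11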

  4. The NASA computer aided design and test system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.; Juergensen, K.

    1973-01-01

    A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.

  5. Reference-free spectroscopic determination of fat and protein in milk in the visible and near infrared region below 1000nm using spatially resolved diffuse reflectance fiber probe.

    PubMed

    Bogomolov, Andrey; Belikova, Valeria; Galyanin, Vladislav; Melenteva, Anastasiia; Meyer, Hans

    2017-05-15

    A new technique for diffuse-reflectance spectroscopic analysis of milk fat and total protein content in the visible (Vis) and adjacent near-infrared (NIR) region (400-995 nm) has been developed and tested. Sample analysis was performed through a probe with eight 200-µm fiber channels forming a linear array. One of the end fibers was used for illumination and the other seven for spectroscopic detection of the diffusely reflected light. One of the detection channels was used as a reference to normalize the spectra and convert them into absorbance-equivalent units. The method was tested experimentally using a designed sample set prepared from industrial raw-milk standards with widely varying fat and protein content. To increase modelling robustness, all milk samples were measured at three different homogenization degrees. Comprehensive data analysis showed the advantage of combining spectral and spatial resolution in the same measurement and revealed the most relevant channels and wavelength regions. The modelling accuracy was further improved using a joint variable-selection and preprocessing-optimization method based on a genetic algorithm. The root-mean-square errors of the different validation methods were below 0.10% for fat and below 0.08% for total protein content. Based on the present experimental data, it was shown computationally that full-spectrum analysis in this method can be replaced by a sensor measurement at several specific wavelengths, for instance using light-emitting diodes (LEDs) for illumination. Two optimal sensor configurations were suggested: nine LEDs for the analysis of fat and seven for protein content. Both simulated sensors exhibit nearly the same component-determination accuracy as the corresponding full-spectrum analysis.
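
    The reference-channel conversion is a one-liner worth making explicit; a minimal sketch with illustrative array shapes:

      import numpy as np

      def to_absorbance(intensity, ref_channel=0):
          """intensity: (n_channels, n_wavelengths) diffuse-reflectance spectra."""
          ref = intensity[ref_channel]
          channels = np.delete(intensity, ref_channel, axis=0)
          return -np.log10(channels / ref)     # absorbance-equivalent units

      spectra = np.abs(np.random.default_rng(0).normal(1.0, 0.1, (7, 120)))
      print(to_absorbance(spectra).shape)      # (6, 120)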

  6. The Fourteenth Data Release of the Sloan Digital Sky Survey: First Spectroscopic Data from the Extended Baryon Oscillation Spectroscopic Survey and from the Second Phase of the Apache Point Observatory Galactic Evolution Experiment

    NASA Astrophysics Data System (ADS)

    Abolfathi, Bela; Aguado, D. S.; Aguilar, Gabriela; Allende Prieto, Carlos; Almeida, Andres; Tasnim Ananna, Tonima; Anders, Friedrich; Anderson, Scott F.; Andrews, Brett H.; Anguiano, Borja; Aragón-Salamanca, Alfonso; Argudo-Fernández, Maria; Armengaud, Eric; Ata, Metin; Aubourg, Eric; Avila-Reese, Vladimir; Badenes, Carles; Bailey, Stephen; Balland, Christophe; Barger, Kathleen A.; Barrera-Ballesteros, Jorge; Bartosz, Curtis; Bastien, Fabienne; Bates, Dominic; Baumgarten, Falk; Bautista, Julian; Beaton, Rachael; Beers, Timothy C.; Belfiore, Francesco; Bender, Chad F.; Bernardi, Mariangela; Bershady, Matthew A.; Beutler, Florian; Bird, Jonathan C.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Blomqvist, Michael; Bolton, Adam S.; Boquien, Médéric; Borissova, Jura; Bovy, Jo; Andres Bradna Diaz, Christian; Nielsen Brandt, William; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Burgasser, Adam J.; Burtin, Etienne; Busca, Nicolás G.; Cañas, Caleb I.; Cano-Díaz, Mariana; Cappellari, Michele; Carrera, Ricardo; Casey, Andrew R.; Cervantes Sodi, Bernardo; Chen, Yanping; Cherinka, Brian; Chiappini, Cristina; Doohyun Choi, Peter; Chojnowski, Drew; Chuang, Chia-Hsun; Chung, Haeun; Clerc, Nicolas; Cohen, Roger E.; Comerford, Julia M.; Comparat, Johan; Correa do Nascimento, Janaina; da Costa, Luiz; Cousinou, Marie-Claude; Covey, Kevin; Crane, Jeffrey D.; Cruz-Gonzalez, Irene; Cunha, Katia; da Silva Ilha, Gabriele; Damke, Guillermo J.; Darling, Jeremy; Davidson, James W., Jr.; Dawson, Kyle; de Icaza Lizaola, Miguel Angel C.; de la Macorra, Axel; de la Torre, Sylvain; De Lee, Nathan; de Sainte Agathe, Victoria; Deconto Machado, Alice; Dell’Agli, Flavia; Delubac, Timothée; Diamond-Stanic, Aleksandar M.; Donor, John; José Downes, Juan; Drory, Niv; du Mas des Bourboux, Hélion; Duckworth, Christopher J.; Dwelly, Tom; Dyer, Jamie; Ebelke, Garrett; Davis Eigenbrot, Arthur; Eisenstein, Daniel J.; Elsworth, Yvonne P.; Emsellem, Eric; Eracleous, Michael; Erfanianfar, Ghazaleh; Escoffier, Stephanie; Fan, Xiaohui; Fernández Alvar, Emma; Fernandez-Trincado, J. G.; Cirolini, Rafael Fernando; Feuillet, Diane; Finoguenov, Alexis; Fleming, Scott W.; Font-Ribera, Andreu; Freischlad, Gordon; Frinchaboy, Peter; Fu, Hai; Gómez Maqueo Chew, Yilen; Galbany, Lluís; García Pérez, Ana E.; Garcia-Dias, R.; García-Hernández, D. A.; Garma Oehmichen, Luis Alberto; Gaulme, Patrick; Gelfand, Joseph; Gil-Marín, Héctor; Gillespie, Bruce A.; Goddard, Daniel; González Hernández, Jonay I.; Gonzalez-Perez, Violeta; Grabowski, Kathleen; Green, Paul J.; Grier, Catherine J.; Gueguen, Alain; Guo, Hong; Guy, Julien; Hagen, Alex; Hall, Patrick; Harding, Paul; Hasselquist, Sten; Hawley, Suzanne; Hayes, Christian R.; Hearty, Fred; Hekker, Saskia; Hernandez, Jesus; Hernandez Toledo, Hector; Hogg, David W.; Holley-Bockelmann, Kelly; Holtzman, Jon A.; Hou, Jiamin; Hsieh, Bau-Ching; Hunt, Jason A. S.; Hutchinson, Timothy A.; Hwang, Ho Seong; Jimenez Angel, Camilo Eduardo; Johnson, Jennifer A.; Jones, Amy; Jönsson, Henrik; Jullo, Eric; Sakil Khan, Fahim; Kinemuchi, Karen; Kirkby, David; Kirkpatrick, Charles C., IV; Kitaura, Francisco-Shu; Knapp, Gillian R.; Kneib, Jean-Paul; Kollmeier, Juna A.; Lacerna, Ivan; Lane, Richard R.; Lang, Dustin; Law, David R.; Le Goff, Jean-Marc; Lee, Young-Bae; Li, Hongyu; Li, Cheng; Lian, Jianhui; Liang, Yu; Lima, Marcos; Lin, Lihwai; Long, Dan; Lucatello, Sara; Lundgren, Britt; Mackereth, J. 
Ted; MacLeod, Chelsea L.; Mahadevan, Suvrath; Geimba Maia, Marcio Antonio; Majewski, Steven; Manchado, Arturo; Maraston, Claudia; Mariappan, Vivek; Marques-Chaves, Rui; Masseron, Thomas; Masters, Karen L.; McDermid, Richard M.; McGreer, Ian D.; Melendez, Matthew; Meneses-Goytia, Sofia; Merloni, Andrea; Merrifield, Michael R.; Meszaros, Szabolcs; Meza, Andres; Minchev, Ivan; Minniti, Dante; Mueller, Eva-Maria; Muller-Sanchez, Francisco; Muna, Demitri; Muñoz, Ricardo R.; Myers, Adam D.; Nair, Preethi; Nandra, Kirpal; Ness, Melissa; Newman, Jeffrey A.; Nichol, Robert C.; Nidever, David L.; Nitschelm, Christian; Noterdaeme, Pasquier; O’Connell, Julia; Oelkers, Ryan James; Oravetz, Audrey; Oravetz, Daniel; Aquino Ortíz, Erik; Osorio, Yeisson; Pace, Zach; Padilla, Nelson; Palanque-Delabrouille, Nathalie; Alonso Palicio, Pedro; Pan, Hsi-An; Pan, Kaike; Parikh, Taniya; Pâris, Isabelle; Park, Changbom; Peirani, Sebastien; Pellejero-Ibanez, Marcos; Penny, Samantha; Percival, Will J.; Perez-Fournon, Ismael; Petitjean, Patrick; Pieri, Matthew M.; Pinsonneault, Marc; Pisani, Alice; Prada, Francisco; Prakash, Abhishek; Queiroz, Anna Bárbara de Andrade; Raddick, M. Jordan; Raichoor, Anand; Barboza Rembold, Sandro; Richstein, Hannah; Riffel, Rogemar A.; Riffel, Rogério; Rix, Hans-Walter; Robin, Annie C.; Rodríguez Torres, Sergio; Román-Zúñiga, Carlos; Ross, Ashley J.; Rossi, Graziano; Ruan, John; Ruggeri, Rossana; Ruiz, Jose; Salvato, Mara; Sánchez, Ariel G.; Sánchez, Sebastián F.; Sanchez Almeida, Jorge; Sánchez-Gallego, José R.; Santana Rojas, Felipe Antonio; Santiago, Basílio Xavier; Schiavon, Ricardo P.; Schimoia, Jaderson S.; Schlafly, Edward; Schlegel, David; Schneider, Donald P.; Schuster, William J.; Schwope, Axel; Seo, Hee-Jong; Serenelli, Aldo; Shen, Shiyin; Shen, Yue; Shetrone, Matthew; Shull, Michael; Silva Aguirre, Víctor; Simon, Joshua D.; Skrutskie, Mike; Slosar, Anže; Smethurst, Rebecca; Smith, Verne; Sobeck, Jennifer; Somers, Garrett; Souter, Barbara J.; Souto, Diogo; Spindler, Ashley; Stark, David V.; Stassun, Keivan; Steinmetz, Matthias; Stello, Dennis; Storchi-Bergmann, Thaisa; Streblyanska, Alina; Stringfellow, Guy S.; Suárez, Genaro; Sun, Jing; Szigeti, Laszlo; Taghizadeh-Popp, Manuchehr; Talbot, Michael S.; Tang, Baitian; Tao, Charling; Tayar, Jamie; Tembe, Mita; Teske, Johanna; Thakar, Aniruddha R.; Thomas, Daniel; Tissera, Patricia; Tojeiro, Rita; Tremonti, Christy; Troup, Nicholas W.; Urry, Meg; Valenzuela, O.; van den Bosch, Remco; Vargas-González, Jaime; Vargas-Magaña, Mariana; Vazquez, Jose Alberto; Villanova, Sandro; Vogt, Nicole; Wake, David; Wang, Yuting; Weaver, Benjamin Alan; Weijmans, Anne-Marie; Weinberg, David H.; Westfall, Kyle B.; Whelan, David G.; Wilcots, Eric; Wild, Vivienne; Williams, Rob A.; Wilson, John; Wood-Vasey, W. M.; Wylezalek, Dominika; Xiao, Ting; Yan, Renbin; Yang, Meng; Ybarra, Jason E.; Yèche, Christophe; Zakamska, Nadia; Zamora, Olga; Zarrouk, Pauline; Zasowski, Gail; Zhang, Kai; Zhao, Cheng; Zhao, Gong-Bo; Zheng, Zheng; Zheng, Zheng; Zhou, Zhi-Min; Zhu, Guangtun; Zinn, Joel C.; Zou, Hu

    2018-04-01

    The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014–2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as “The Cannon” and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

  7. Quantification of bovine immunoglobulin G using transmission and attenuated total reflectance infrared spectroscopy.

    PubMed

    Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Shaw, R Anthony; Keefe, Gregory P

    2016-01-01

    In this study, we evaluated and compared the performance of transmission and attenuated total reflectance (ATR) infrared (IR) spectroscopic methods (in combination with quantification algorithms previously developed using partial least squares regression) for the rapid measurement of bovine serum immunoglobulin G (IgG) concentration, and detection of failure of transfer of passive immunity (FTPI) in dairy calves. Serum samples (n = 200) were collected from Holstein calves 1-11 days of age. Serum IgG concentrations were measured by the reference method of radial immunodiffusion (RID) assay, transmission IR (TIR) and ATR-IR spectroscopy-based assays. The mean IgG concentration measured by RID was 17.22 g/L (SD ±9.60). The mean IgG concentrations predicted by TIR and ATR-IR spectroscopy methods were 15.60 g/L (SD ±8.15) and 15.94 g/L (SD ±8.66), respectively. RID IgG concentrations were positively correlated with IgG levels predicted by TIR (r = 0.94) and ATR-IR (r = 0.92). The correlation between the 2 IR spectroscopic methods was 0.94. Using an IgG concentration <10 g/L as the cut-point for FTPI cases, the overall agreement between TIR and ATR-IR methods was 94%, with a corresponding kappa value of 0.84. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for identifying FTPI by TIR were 0.87, 0.97, 0.91, 0.95, and 0.94, respectively. Corresponding values for ATR-IR were 0.87, 0.95, 0.86, 0.95, and 0.93, respectively. Both TIR and ATR-IR spectroscopic approaches can be used for rapid quantification of IgG level in neonatal bovine serum and for diagnosis of FTPI in dairy calves.
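
    A minimal sketch of PLS-based quantification with the FTPI cut-point (synthetic spectra stand in for the IR measurements; component count and shapes are illustrative):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      igg = rng.uniform(2.0, 40.0, 200)                 # reference IgG, g/L
      basis = rng.normal(0.0, 1.0, 400)                 # spectral signature
      X = np.outer(igg, basis) + rng.normal(0.0, 2.0, (200, 400))

      model = PLSRegression(n_components=5).fit(X, igg)
      pred = model.predict(X).ravel()
      ftpi = pred < 10.0                                # FTPI cut-point, g/L
      print("r =", np.corrcoef(pred, igg)[0, 1], "| FTPI flagged:", int(ftpi.sum()))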

  8. Exploring Relations Between BCG & Cluster Properties in the SPectroscopic IDentification of eROSITA Sources Survey from 0.05 < z < 0.3

    NASA Astrophysics Data System (ADS)

    Furnell, Kate E.; Collins, Chris A.; Kelvin, Lee S.; Clerc, Nicolas; Baldry, Ivan K.; Finoguenov, Alexis; Erfanianfar, Ghazaleh; Comparat, Johan; Schneider, Donald P.

    2018-04-01

    We present a sample of 329 low to intermediate redshift (0.05 < z < 0.3) brightest cluster galaxies (BCGs) in X-ray selected clusters from the SPectroscopic IDentification of eRosita Sources (SPIDERS) survey, a spectroscopic survey within Sloan Digital Sky Survey-IV (SDSS-IV). We define our BCGs by simultaneous consideration of legacy X-ray data from ROSAT, maximum likelihood outputs from an optical cluster-finder algorithm and visual inspection. Using SDSS imaging data, we fit Sérsic profiles to our BCGs in three bands (g, r, i) with SIGMA, a GALFIT-based software wrapper. We examine the reliability of our fits by running our pipeline on ~10^4 PSF-convolved model profiles injected into 8 random cluster fields; we then use the results of this analysis to create a robust subsample of 198 BCGs. We outline three cluster properties of interest: overall cluster X-ray luminosity (LX), cluster richness as estimated by REDMAPPER (λ) and cluster halo mass (M200), which is estimated via velocity dispersion. In general, there are significant correlations with BCG stellar mass between all three environmental properties, but no significant trends arise with either Sérsic index or effective radius. There is no major environmental dependence on the strength of the relation between effective radius and BCG stellar mass. Stellar mass therefore arises as the most important factor governing BCG morphology. Our results indicate that our sample consists of a large number of relaxed, mature clusters containing broadly homogeneous BCGs up to z ~ 0.3, suggesting that there is little evidence for much ongoing structural evolution for BCGs in these systems.
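
    For reference, the profile being fitted is the standard Sérsic law; a minimal sketch (standard definition with the common b_n approximation, not the SIGMA/GALFIT machinery):

      import numpy as np

      def sersic(r, I_e, r_e, n):
          """Surface brightness at radius r for effective values I_e, r_e."""
          b_n = 2.0 * n - 1.0 / 3.0        # approximation, valid for n >~ 0.5
          return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

      r = np.linspace(0.1, 50.0, 100)      # radius, arbitrary units
      profile = sersic(r, I_e=1.0, r_e=10.0, n=4.0)   # de Vaucouleurs-like BCG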

  9. Spectroscopic remote sensing for material identification, vegetation characterization, and mapping

    USGS Publications Warehouse

    Kokaly, Raymond F.; Lewis, Paul E.; Shen, Sylvia S.

    2012-01-01

    Identifying materials by measuring and analyzing their reflectance spectra has been an important procedure in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow materials to be mapped across the landscape. With many existing airborne sensors and new satellite-borne sensors planned for the future, robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral feature analyses of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described. MICA is a module of the PRISM (Processing Routines in IDL for Spectroscopic Measurements) software, available to the public from the U.S. Geological Survey (USGS) at http://pubs.usgs.gov/of/2011/1155/. The core concepts of MICA include continuum removal and linear regression to compare key diagnostic absorption features in reference laboratory/field spectra and the spectra being analyzed. The reference spectra, diagnostic features, and threshold constraints are defined within a user-developed MICA command file (MCF). Building on several decades of experience in mineral mapping, a broadly-applicable MCF was developed to detect a set of minerals frequently occurring on the Earth's surface and applied to map minerals in the country-wide coverage of the 2007 Afghanistan HyMap data set. MICA has also been applied to detect sub-pixel oil contamination in marshes impacted by the Deepwater Horizon incident by discriminating the C-H absorption features in oil residues from background vegetation. These two recent examples demonstrate the utility of a spectroscopic approach to remote sensing for identifying and mapping the distributions of materials in imaging spectrometer data.
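
    A minimal sketch of the continuum-removal-and-regression core (not the PRISM implementation): remove a linear continuum across a diagnostic absorption feature, then score the observed feature against a reference by least squares:

      import numpy as np

      def continuum_removed(wl, refl, lo, hi):
          """Divide out a linear continuum across the feature [lo, hi]."""
          m = (wl >= lo) & (wl <= hi)
          w, r = wl[m], refl[m]
          continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
          return r / continuum             # ~1.0 away from the absorption

      def feature_fit(obs_cr, ref_cr):
          """Slope (feature-depth scaling) and correlation vs. the reference."""
          slope = np.polyfit(1.0 - ref_cr, 1.0 - obs_cr, 1)[0]
          corr = np.corrcoef(obs_cr, ref_cr)[0, 1]
          return slope, corr

      wl = np.linspace(2.0, 2.5, 60)       # wavelengths, microns (illustrative)
      ref = 1.0 - 0.30 * np.exp(-0.5 * ((wl - 2.2) / 0.03) ** 2)
      obs = 1.0 - 0.15 * np.exp(-0.5 * ((wl - 2.2) / 0.03) ** 2)
      print(feature_fit(continuum_removed(wl, obs, 2.1, 2.3),
                        continuum_removed(wl, ref, 2.1, 2.3)))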

  10. DA white dwarfs from the LSS-GAC survey DR1: the preliminary luminosity and mass functions and formation rate

    NASA Astrophysics Data System (ADS)

    Rebassa-Mansergas, A.; Liu, X.-W.; Cojocaru, R.; Yuan, H.-B.; Torres, S.; García-Berro, E.; Xiang, M.-X.; Huang, Y.; Koester, D.; Hou, Y.; Li, G.; Zhang, Y.

    2015-06-01

    Modern large-scale surveys have allowed the identification of large numbers of white dwarfs. However, these surveys are subject to complicated target selection algorithms, which make it almost impossible to quantify to what extent the observational biases affect the observed populations. The LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) Spectroscopic Survey of the Galactic anticentre (LSS-GAC) follows a well-defined set of criteria for selecting targets for observations. This advantage over previous surveys has been fully exploited here to identify a small yet well-characterized magnitude-limited sample of hydrogen-rich (DA) white dwarfs. We derive preliminary LSS-GAC DA white dwarf luminosity and mass functions. The space density and average formation rate of DA white dwarfs we derive are 0.83 ± 0.16 × 10^-3 pc^-3 and 5.42 ± 0.08 × 10^-13 pc^-3 yr^-1, respectively. Additionally, using an existing Monte Carlo population synthesis code we simulate the population of single DA white dwarfs in the Galactic anticentre, under various assumptions. The synthetic populations are passed through the LSS-GAC selection criteria, taking into account all possible observational biases. This allows us to perform a meaningful comparison of the observed and simulated distributions. We find that the LSS-GAC set of criteria is highly efficient in selecting white dwarfs for spectroscopic observations (80-85 per cent) and that, overall, our simulations reproduce well the observed luminosity function. However, they fail to reproduce an excess of massive white dwarfs present in the observed mass function. A plausible explanation for this is that a sizable fraction of massive white dwarfs in the Galaxy are the product of white dwarf-white dwarf mergers.

  11. Spectroscopic analysis technique for arc-welding process control

    NASA Astrophysics Data System (ADS)

    Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel

    2005-09-01

    The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to the monitoring and control of industrial processes. In particular, it has been demonstrated that analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable, as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding for real-time implementations. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electronic temperature of the plasma through the analysis of emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve fitting the peaks to Voigt functions using the recursive Levenberg-Marquardt method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG welding, using fiber-optic capture of light and a low-cost CCD-based spectrometer, show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple-peak analysis is less than 20 ms on a conventional PC.
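
    The underlying relation is the standard Boltzmann plot; a minimal sketch with placeholder line data (not real transitions), constructed so the recovered temperature is known:

      import numpy as np

      K_B_EV = 8.617e-5                               # Boltzmann constant, eV/K

      def electron_temperature(I, lam, g, A, E_eV):
          # ln(I*lam/(g*A)) = -E/(k_B*T_e) + const, so T_e follows from the slope.
          y = np.log(I * lam / (g * A))
          slope = np.polyfit(E_eV, y, 1)[0]
          return -1.0 / (K_B_EV * slope)

      E = np.array([2.0, 3.1, 4.4, 5.2])              # upper-level energies, eV
      g = np.array([3.0, 5.0, 7.0, 9.0])              # statistical weights
      A = np.array([1e7, 5e7, 2e7, 8e6])              # transition rates, 1/s
      lam = np.array([500.0, 480.0, 460.0, 450.0])    # wavelengths, nm
      I = g * A / lam * np.exp(-E / (K_B_EV * 1.0e4)) # lines at T_e = 10,000 K
      print("T_e =", round(electron_temperature(I, lam, g, A, E)), "K")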

  12. Seasonal and Inter-Annual Patterns of Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2015-12-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. These two periods are dominated by either diatom blooms (occasionally toxic) or red tides. Results presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during these two seasons.

  13. Orbits for 18 Visual Binaries and Two Double-line Spectroscopic Binaries Observed with HRCAM on the CTIO SOAR 4 m Telescope, Using a New Bayesian Orbit Code Based on Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.

    2017-11-01

    We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ~0.1 M⊙. Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope+HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data ("orbital parallax"), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions, including parallax, in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
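
    The full orbit code samples seven (or ten) parameters; as a toy illustration of the Metropolis-Hastings sampling at its core, the sketch below infers just two parameters of a circular, face-on orbit from synthetic astrometry. Everything here (data, priors, step sizes) is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic astrometry: circular, face-on orbit, P = 80 yr, radius 0.5".
t_obs = np.linspace(0.0, 40.0, 15)
def model(t, P, a):
    th = 2 * np.pi * t / P
    return a * np.cos(th), a * np.sin(th)
x, y = model(t_obs, 80.0, 0.5)
sig = 0.01
x = x + rng.normal(0, sig, t_obs.size)
y = y + rng.normal(0, sig, t_obs.size)

def log_post(P, a):
    if not (1 < P < 1000 and 0 < a < 5):      # flat priors
        return -np.inf
    xm, ym = model(t_obs, P, a)
    return -0.5 * np.sum(((x - xm)**2 + (y - ym)**2) / sig**2)

# Metropolis-Hastings random walk over (P, a).
chain, theta = [], np.array([60.0, 0.4])
lp = log_post(*theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [1.0, 0.01])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                # discard burn-in
print("P =", chain[:, 0].mean(), "+/-", chain[:, 0].std())
print("a =", chain[:, 1].mean(), "+/-", chain[:, 1].std())
```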

  14. The Extremely Luminous Quasar Survey (ELQS) in SDSS and the high-z bright-end Quasar Luminosity Function

    NASA Astrophysics Data System (ADS)

    Schindler, Jan-Torge; Fan, Xiaohui; McGreer, Ian

    2018-01-01

    Studies of the most luminous quasars at high redshift directly probe the evolution of the most massive black holes in the early Universe and their connection to massive galaxy formation. Unfortunately, extremely luminous quasars at high redshift are very rare objects. Only wide-area surveys have a chance to constrain their population. The Sloan Digital Sky Survey (SDSS) and the Baryon Oscillation Spectroscopic Survey (BOSS) have so far provided the most widely adopted measurements of the type I quasar luminosity function (QLF) at z > 3. However, a careful re-examination of the SDSS quasar sample revealed that the SDSS quasar selection is in fact missing a significant fraction of z ~ 3 quasars at the brightest end. We have identified two main reasons: the purely optical color selection of SDSS, in which quasars at these redshifts are strongly contaminated by late-type dwarfs, and the spectroscopic incompleteness of the SDSS footprint. We have therefore designed the Extremely Luminous Quasar Survey (ELQS), based on a novel near-infrared JKW2 color cut using WISE AllWISE and 2MASS all-sky photometry, to yield high completeness for very bright (i < 18.0) quasars in the redshift range of 2.8 <= z <= 5.0. It effectively uses Random Forest machine-learning algorithms on SDSS and WISE photometry for quasar-star classification and photometric redshift estimation. The ELQS is spectroscopically following up ~230 new quasar candidates in an area of ~12000 deg2 in the SDSS footprint, to obtain a well-defined and complete quasar sample for an accurate measurement of the bright-end quasar luminosity function (QLF) at 2.8 <= z <= 5.0. So far the ELQS has identified 75 bright new quasars in this redshift range, and observations of the fall sky will continue until the end of the year. At the AAS winter meeting we will present the full spectroscopic results of the survey, including a re-estimation and extension of the high-z QLF toward higher luminosities.
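
    A hedged sketch of the Random Forest classification step: the features and training data below are synthetic stand-ins for the SDSS/WISE/2MASS colors the survey actually uses, and the scikit-learn model is an assumption about tooling, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical training set: photometric colors for spectroscopically
# confirmed stars (label 0) and quasars (label 1); real features would be
# built from SDSS/WISE/2MASS magnitudes (e.g. i-z, z-J, J-K, K-W2).
n = 2000
stars   = rng.normal([0.5, 0.4, 0.9, 0.3], 0.15, (n, 4))
quasars = rng.normal([0.2, 0.6, 1.4, 0.8], 0.20, (n, 4))
X = np.vstack([stars, quasars])
y = np.hstack([np.zeros(n), np.ones(n)])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
# Score a new candidate's colors.
print("P(quasar):", clf.predict_proba([[0.25, 0.55, 1.3, 0.75]])[0, 1])
```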

  15. Near optimum digital phase locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1972-01-01

    Near optimum digital phase locked loops are derived utilizing nonlinear estimation theory. Nonlinear approximations are employed to yield realizable loop structures. Baseband equivalent loop gains are derived which under high signal to noise ratio conditions may be calculated off-line. Additional simplifications are made which permit the application of the Kalman filter algorithms to determine the optimum loop filter. Performance is evaluated by a theoretical analysis and by simulation. Theoretical and simulated results are discussed and a comparison to analog results is made.

  16. Phase transition in the countdown problem

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Luque, Bartolo

    2012-07-01

    We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that this probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space into different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
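
    A brute-force decision procedure makes the game concrete: combine any pair of numbers with +, -, *, or exact division, and recurse on the reduced set. The sketch below estimates the winning probability as a function of k under assumed rules (integers 1-50, target 100), which may differ in detail from the paper's setup.

```python
import random
from functools import lru_cache

def reachable(nums, target):
    """True if `target` is obtainable from the multiset `nums` by
    repeatedly combining two values with +, -, *, or exact division."""
    @lru_cache(maxsize=None)
    def go(state):                      # state: sorted tuple of ints
        if target in state:
            return True
        n = len(state)
        for i in range(n):
            for j in range(i + 1, n):
                a, b = state[i], state[j]
                rest = state[:i] + state[i + 1:j] + state[j + 1:]
                cand = {a + b, a * b, a - b, b - a}
                if b and a % b == 0:
                    cand.add(a // b)
                if a and b % a == 0:
                    cand.add(b // a)
                if any(go(tuple(sorted(rest + (r,)))) for r in cand):
                    return True
        return False
    return go(tuple(sorted(nums)))

random.seed(0)
T, trials = 100, 100
for k in range(2, 7):                   # winning probability vs set size k
    wins = sum(reachable([random.randint(1, 50) for _ in range(k)], T)
               for _ in range(trials))
    print(k, wins / trials)
```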

  17. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal-dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  18. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal-dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  19. High resolution time of arrival estimation for a cooperative sensor system

    NASA Astrophysics Data System (ADS)

    Morhart, C.; Biebl, E. M.

    2010-09-01

    Distance resolution of cooperative sensors is limited by the signal bandwidth. For the transmission, mainly lower frequency bands are used, which are more narrowband than classical radar frequencies. To compensate for this resolution problem, the combination of a pseudo-noise coded pulse compression system with superresolution time of arrival estimation is proposed. Coded pulse compression allows secure and fast distance measurement in multi-user scenarios and can easily be adapted for data transmission purposes (Morhart and Biebl, 2009). Due to the lack of available signal bandwidth, the measurement accuracy degrades, especially in multipath scenarios. Superresolution time of arrival algorithms can improve this behaviour by estimating the channel impulse response from a band-limited channel view. For the given test system, the implementation of a MUSIC algorithm permitted a distance resolution twice as fine as that of standard pulse compression.
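
    A minimal sketch of delay-domain MUSIC under a frequency-domain channel model: project delay steering vectors onto the noise subspace of the sample covariance and pick the peaks of the pseudospectrum. All system parameters below are illustrative, not those of the cited test system.

```python
import numpy as np

rng = np.random.default_rng(2)

M, N, d = 64, 100, 2                    # subcarriers, snapshots, # of paths
f = np.arange(M) * 1e6                  # tone frequencies, 1 MHz spacing
tau_true = np.array([1.00e-6, 1.02e-6]) # two delays closer than 1/bandwidth

# Simulated frequency-domain snapshots: steering matrix x random gains.
A = np.exp(-2j * np.pi * np.outer(f, tau_true))
S = rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N                  # sample covariance
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :M - d]                       # noise subspace

tau_grid = np.linspace(0.8e-6, 1.2e-6, 4000)
A_grid = np.exp(-2j * np.pi * np.outer(f, tau_grid))
P = 1.0 / np.linalg.norm(En.conj().T @ A_grid, axis=0) ** 2

# Pick the d largest local maxima of the pseudospectrum.
idx = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
top = idx[np.argsort(P[idx])[-d:]]
print("estimated delays (us):", np.sort(tau_grid[top]) * 1e6)
```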

  20. Image-based ranging and guidance for rotorcraft

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1991-01-01

    This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. Primary emphasis of this effort was on the development of vision-based navigation methods for the rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms was developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permit both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers was presented on these ranging schemes, some of which are included in this report. A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.

  1. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioned form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform a complete-basis-set extrapolation investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
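
    The core numerical idea, a pivoted incomplete Cholesky factorization truncated at a threshold (the "decomposition parameter"), can be sketched generically; the matrix below is a random stand-in for the two-electron integral matrix, not a quantum-chemistry calculation.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-6):
    """Incomplete Cholesky with full pivoting: M ~ L @ L.T, keeping only
    as many columns as needed to push the diagonal residual below tol."""
    d = np.diag(M).astype(float).copy()   # residual diagonal
    L = []
    while d.max() > tol:
        p = int(np.argmax(d))             # pivot: largest residual
        col = (M[:, p] - sum(l * l[p] for l in L)) / np.sqrt(d[p])
        L.append(col)
        d -= col ** 2
        d[d < 0] = 0.0                    # guard against round-off
    return np.column_stack(L)

# Symmetric positive semidefinite test matrix with low numerical rank,
# mimicking the rapid spectral decay of the integral matrix.
rng = np.random.default_rng(3)
B = rng.normal(size=(200, 20))
Mmat = B @ B.T
L = pivoted_cholesky(Mmat, tol=1e-8)
print("vectors kept:", L.shape[1],
      " max error:", np.abs(Mmat - L @ L.T).max())
```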

  2. Determination of fat and total protein content in milk using conventional digital imaging.

    PubMed

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-04-01

    The applicability of conventional digital imaging to the quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been proved. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards, with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R(2)=0.974 for protein and R(2)=0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models with cross-validated R(2)=0.890 for fat content and R(2)=0.720 for protein content). The results have been compared with a previously published Vis/SW-NIR spectroscopic study of similar samples. Copyright © 2013 Elsevier B.V. All rights reserved.
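
    A hedged sketch of the chemometric step, PLS regression with cross-validated R(2), as in the abstract. The feature matrix below is synthetic; the real features are light-scatter profiles extracted from the milk images.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)

# Hypothetical feature matrix: one row per milk sample, columns standing
# in for radial light-scatter profile features, plus reference fat values.
n_samples, n_features = 60, 40
X = rng.normal(size=(n_samples, n_features))
fat = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n_samples)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, fat, cv=10).ravel()
print("cross-validated R^2:", r2_score(fat, pred))
```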

  3. High-resolution 3D MR spectroscopic imaging of the prostate at 3 T with the MLEV-PRESS sequence.

    PubMed

    Chen, Albert P; Cunningham, Charles H; Kurhanewicz, John; Xu, Duan; Hurd, Ralph E; Pauly, John M; Carvajal, Lucas; Karpodinis, Kostas; Vigneron, Daniel B

    2006-09-01

    A 3 T MLEV-point-resolved spectroscopy (PRESS) sequence employing optimized spectral-spatial and very selective outer-voxel suppression pulses was tested in 25 prostate cancer patients. At an echo time of 85 ms, the MLEV-PRESS sequence resulted in maximally upright inner resonances and minimal outer resonances of the citrate doublet of doublets. Magnetic resonance spectroscopic imaging (MRSI) exams performed at both 3 and 1.5 T for 10 patients demonstrated a 2.08+/-0.36-fold increase in signal-to-noise ratio (SNR) at 3 T as compared with 1.5 T for the center citrate resonances. This permitted the acquisition of MRSI data with a nominal spatial resolution of 0.16 cm3 at 3 T with similar SNR as the 0.34-cm3 data acquired at 1.5 T. Due to the twofold increase in spectral resolution at 3 T and the improved magnetic field homogeneity provided by susceptibility-matched endorectal coils, the choline resonance was better resolved from polyamine and creatine resonances as compared with 1.5 T spectra. In prostate cancer patients, the elevation of choline and the reduction of polyamines were more clearly observed at 3 T, as compared with 1.5 T MRSI. The increased SNR and corresponding spatial resolution obtainable at 3 T reduced partial volume effects and allowed improved detection of the presence and extent of abnormal metabolite levels in prostate cancer patients, as compared with 1.5 T MRSI.

  4. Laser Cooling and Trapping of Neutral Strontium for Spectroscopic Measurements of Casimir-Polder Potentials

    NASA Astrophysics Data System (ADS)

    Cook, Eryn C.

    Casimir and Casimir-Polder effects are forces between electrically neutral bodies and particles in vacuum, arising entirely from quantum fluctuations. The modification to the vacuum electromagnetic-field modes imposed by the presence of any particle or surface can result in these mechanical forces, which are often the dominant interaction at small separations. These effects play an increasingly critical role in the operation of micro- and nano-mechanical systems as well as miniaturized atomic traps for precision sensors and quantum-information devices. Despite their fundamental importance, calculations present theoretical and numeric challenges, and precise atom-surface potential measurements are lacking in many geometric and distance regimes. The spectroscopic measurement of Casimir-Polder-induced energy level shifts in optical-lattice trapped atoms offers a new experimental method to probe atom-surface interactions. Strontium, the current front-runner among optical frequency metrology systems, has demonstrated characteristics ideal for such precision measurements. An alkaline earth atom possessing ultra-narrow intercombination transitions, strontium can be loaded into an optical lattice at the "magic" wavelength where the probe transition is unperturbed by the trap light. Translation of the lattice will permit controlled transport of tightly-confined atomic samples to well-calibrated atom-surface separations, while optical transition shifts serve as a direct probe of the Casimir-Polder potential. We have constructed a strontium magneto-optical trap (MOT) for future Casimir-Polder experiments. This thesis will describe the strontium apparatus, initial trap performance, and some details of the proposed measurement procedure.

  5. Spectrophotometry of Dust in Comet Hale-Bopp

    NASA Technical Reports Server (NTRS)

    Witteborn, Fred C. (Technical Monitor)

    1997-01-01

    Comets, such as Hale-Bopp (C/1995 O1), are frozen reservoirs of primitive solar nebula dust grains and ices. Analysis of the composition of cometary dust grains from infrared spectroscopic techniques permits an estimation of the types of organic and inorganic materials that constituted the early primitive solar nebula. In addition, the cometary bombardment of the Earth (approximately 3.5 Gyr ago) supplied the water for the oceans and brought organic materials to Earth which may have been biogenic. Spectroscopic observations of comet Hale-Bopp suggest the possible presence of organic hydrocarbon species, silicate and olivine dust grains, and water ice. Spectroscopy near 3 microns obtained in Nov 1996 (r=2.393 AU, delta=3.034 AU) shows a feature which we attribute to PAH emission. The spatial morphology of the 3.28 micron PAH feature is also presented. Optical and infrared spectrophotometric observations of comets convey valuable information about the spatial distribution and properties of dust and gas within the inner coma. In the optical and NIR shortward of 2 microns, the observed light is primarily scattered sunlight from the dust grains. At longer wavelengths, particularly in the 10 micron window, thermal emission from these grains dominates the radiation, allowing an accurate estimate of grain sizes and chemical composition. Here we present an initial analysis of spectra taken with the NASA HIFOGS at 7-14 microns as part of a multiwavelength temporal study of the "comet of the century".

  6. Three-dimensional Hadamard-encoded proton spectroscopic imaging in the human brain using time-cascaded pulses at 3 Tesla.

    PubMed

    Cohen, Ouri; Tal, Assaf; Gonen, Oded

    2014-10-01

    To reduce the specific-absorption-rate (SAR) and chemical shift displacement (CSD) of three-dimensional (3D) Hadamard spectroscopic imaging (HSI) and maintain its point spread function (PSF) benefits. A 3D hybrid of 2D longitudinal, 1D transverse HSI (L-HSI, T-HSI) sequence is introduced and demonstrated in a phantom and the human brain at 3 Tesla (T). Instead of superimposing each of the selective Hadamard radiofrequency (RF) pulses with its N single-slice components, they are cascaded in time, allowing N-fold stronger gradients, reducing the CSD. A spatially refocusing 180° RF pulse following the T-HSI encoding block provides variable, arbitrary echo time (TE) to eliminate undesirable short T2 species' signals, e.g., lipids. The sequence yields 10-15% better signal-to-noise ratio (SNR) and 8-16% less signal bleed than 3D chemical shift imaging of equal repetition time, spatial resolution and grid size. The 13 ± 6, 22 ± 7, 24 ± 8, and 31 ± 14 in vivo SNRs for myo-inositol, choline, creatine, and N-acetylaspartate were obtained in 21 min from 1 cm(3) voxels at TE ≈ 20 ms. Maximum CSD was 0.3 mm/ppm in each direction. The new hybrid HSI sequence offers a better localized PSF at reduced CSD and SAR at 3T. The short and variable TE permits acquisition of short T2 and J-coupled metabolites with higher SNR. Copyright © 2013 Wiley Periodicals, Inc.
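
    The encoding arithmetic behind HSI can be sketched in a few lines: each acquisition is a ±1-weighted sum of slice signals given by a row of a Hadamard matrix, and slices are recovered by the inverse transform. In the sequence itself the weights are applied by RF pulses; this is only the linear-algebra skeleton.

```python
import numpy as np
from scipy.linalg import hadamard

N = 8                                   # number of encoded slices
H = hadamard(N)                         # rows of +/-1 encoding weights

rng = np.random.default_rng(5)
slices = rng.normal(size=(N, 256))      # per-slice signals (slice x points)

encoded = H @ slices                    # one weighted-sum acquisition/row
decoded = H.T @ encoded / N             # inverse transform: H.T @ H = N*I

print(np.allclose(decoded, slices))     # True: slices recovered exactly
```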

  7. Rating Movies and Rating the Raters Who Rate Them

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2010-01-01

    The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data. PMID:20802818

  8. Rating Movies and Rating the Raters Who Rate Them.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2009-11-01

    The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data.

  9. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical-flow-based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
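
    The LMS estimator can be illustrated with a simple random-sampling line fit: draw minimal subsets and keep the candidate that minimizes the median (rather than the sum) of squared residuals. The sketch below is a 1D stand-in for the six-parameter egomotion problem.

```python
import numpy as np

def lms_line(x, y, n_trials=500, seed=0):
    """Least Median of Squares fit of y = a*x + b by random sampling:
    try 2-point candidate fits and keep the one minimizing the median
    of squared residuals (robust to up to ~50% outliers)."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.2, 100)
y[::4] += rng.normal(0, 20, 25)        # 25% gross outliers
print(lms_line(x, y))                  # close to (2, 1) despite outliers
```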

  10. Pricing of swing options: A Monte Carlo simulation approach

    NASA Astrophysics Data System (ADS)

    Leow, Kai-Siong

    We study the problem of pricing swing options, a class of multiple early exercise options that are traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration with the counterparty is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm based on piecewise linear concave approximation of value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof of almost sure convergence: the algorithm generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example for pricing a natural gas swing call option.

  11. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work, and new approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution; it efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
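
    A minimal genetic-algorithm sketch (tournament selection, uniform crossover, Gaussian mutation) on a toy multimodal objective; the encoding and operators here are generic choices, not the variable-complexity scheme the thesis develops.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(x):
    """Toy multimodal objective standing in for a conceptual-design
    figure of merit (maximized at the origin)."""
    return -np.sum(x**2) + 0.5 * np.sum(np.cos(3 * x))

def ga(dim=4, pop_size=40, gens=100, sigma=0.1):
    pop = rng.uniform(-2, 2, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection of parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Uniform crossover between shuffled parents + Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        pop = (np.where(mask, parents, mates)
               + rng.normal(0, sigma, (pop_size, dim)))
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(ga())   # expect a point near the origin
```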

  12. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    PubMed

    Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji

    2010-02-01

    Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate gamma for the value functions close to 1, these algorithms do not permit gamma to be set exactly at gamma = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution through backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting gamma = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that these can improve the performances of existing PG methods.

  13. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians

    PubMed Central

    He, Xiao-Ou; D’Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-01-01

    Background: Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. Aims: We examined how two different SIAs may influence decision making among primary-care physicians. Methods: Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. Results: We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a ‘normal’ interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. Conclusions: This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently. PMID:25763716
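
    The two decision nodes the study highlights can be written out explicitly. The sketch below is illustrative only (not clinical guidance); it encodes the 0.70 FEV1/FVC threshold and the >12% and >200 ml bronchodilator-response rule quoted in the abstract, including the post-bronchodilator ratio node that algorithm 1 lacks.

```python
def interpret_spirometry(fev1_fvc_pre, fev1_pre,
                         fev1_fvc_post=None, fev1_post=None):
    """Illustrative decision logic for the two branch points the study
    highlights; real SIAs contain many more nodes (severity grading,
    restriction, lower limits of normal, etc.). FEV1 values in liters."""
    # Significant bronchodilator response: >12% AND >200 mL FEV1 gain.
    significant_bd = (fev1_post is not None and
                      fev1_post - fev1_pre > 0.2 and
                      (fev1_post - fev1_pre) / fev1_pre > 0.12)
    if fev1_fvc_pre > 0.70:
        # Algorithm-1-style logic stops here at "normal"; algorithm-2-style
        # logic challenges with a bronchodilator, and a significant
        # response suggests asthma despite the normal pre-BD ratio.
        return "asthma suspected" if significant_bd else "normal"
    if fev1_fvc_post is not None and fev1_fvc_post < 0.70:
        # Persistent post-BD obstruction points toward COPD even when FEV1
        # improves significantly (the node absent from algorithm 1).
        return ("COPD suspected (possible asthma overlap)"
                if significant_bd else "COPD suspected")
    return "asthma suspected" if significant_bd else "obstruction, unclear"

print(interpret_spirometry(0.75, 2.0, fev1_fvc_post=0.76, fev1_post=2.4))
print(interpret_spirometry(0.60, 1.8, fev1_fvc_post=0.65, fev1_post=2.1))
```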

  14. Deformable structure registration of bladder through surface mapping.

    PubMed

    Xiong, Li; Viswanathan, Akila; Stewart, Alexandra J; Haker, Steven; Tempany, Clare M; Chin, Lee M; Cormack, Robert A

    2006-06-01

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.

  15. Deformable structure registration of bladder through surface mapping.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Xiong; Viswanathan, Akila; Stewart, Alexandra J.

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.

  16. Recovering the colour-dependent albedo of exoplanets with high-resolution spectroscopy: from ESPRESSO to the ELT.

    NASA Astrophysics Data System (ADS)

    Martins, J. H. C.; Figueira, P.; Santos, N. C.; Melo, C.; Garcia Muñoz, A.; Faria, J.; Pepe, F.; Lovis, C.

    2018-05-01

    The characterization of planetary atmospheres is a daunting task, pushing current observing facilities to their limits. The next generation of high-resolution spectrographs mounted on large telescopes - such as ESPRESSO@VLT and HIRES@ELT - will allow us to probe and characterize exoplanetary atmospheres in greater detail than possible to this point. We present a method that permits the recovery of the colour-dependent reflectivity of exoplanets from high-resolution spectroscopic observations. Determining the wavelength-dependent albedo will provide insight into the chemical properties and weather of the exoplanet atmospheres. For this work, we simulated ESPRESSO@VLT and HIRES@ELT high-resolution observations of known planetary systems with several albedo configurations. We demonstrate how the cross correlation technique applied to these simulated observations can be used to successfully recover the geometric albedo of exoplanets over a range of wavelengths. In all cases, we were able to recover the wavelength-dependent albedo of the simulated exoplanets and distinguish between several atmospheric models representing different atmospheric configurations. In brief, we demonstrate that the cross correlation technique allows for the recovery of exoplanetary albedo functions from optical observations with the next generation of high-resolution spectrographs that will be mounted on large telescopes with reasonable exposure times. Its recovery will permit the characterization of exoplanetary atmospheres in terms of composition and dynamics and consolidates the cross correlation technique as a powerful tool for exoplanet characterization.

  17. A NUMERICAL ALGORITHM FOR MODELING MULTIGROUP NEUTRINO-RADIATION HYDRODYNAMICS IN TWO SPATIAL DIMENSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swesty, F. Douglas; Myra, Eric S.

    It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.

  18. An Environment for Hardware-in-the-Loop Formation Navigation and Control

    NASA Technical Reports Server (NTRS)

    Burns, Rich; Naasz, Bo; Gaylor, Dave; Higinbotham, John

    2004-01-01

    Recent interest in formation flying satellite systems has spurred a considerable amount of research in the relative navigation and control of satellites. Development in this area has included new estimation and control algorithms as well as sensor and actuator development specifically geared toward the relative control problem. This paper describes a simulation facility, the Formation Flying Test Bed (FFTB) at NASA Goddard Space Flight Center, which allows engineers to test new algorithms for the formation flying problem with relevant GN&C hardware in a closed-loop simulation. The FFTB currently supports the inclusion of GPS receiver hardware in the simulation loop. Support for satellite crosslink ranging technology is at a prototype stage. This closed-loop, hardware-inclusive simulation capability permits testing of navigation and control software in the presence of the actual hardware with which the algorithms must interact. This capability provides the navigation or control developer with a perspective on how the algorithms perform as part of the closed-loop system. In this paper, the overall design and evolution of the FFTB are presented. Each component of the FFTB is then described. Interfaces between the components of the FFTB are shown, and the interfaces to and between navigation and control software are described. Finally, an example of closed-loop formation control with GPS receivers in the loop is presented.

  19. Hyperpolarized 13C pyruvate mouse brain metabolism with absorptive-mode EPSI at 1 T

    NASA Astrophysics Data System (ADS)

    Miloushev, Vesselin Z.; Di Gialleonardo, Valentina; Salamanca-Cardona, Lucia; Correa, Fabian; Granlund, Kristin L.; Keshari, Kayvan R.

    2017-02-01

    The expected signal in echo-planar spectroscopic imaging experiments was explicitly modeled jointly in spatial and spectral dimensions. Using this as a basis, absorptive-mode type detection can be achieved by appropriate choice of spectral delays and post-processing techniques. We discuss the effects of gradient imperfections and demonstrate the implementation of this sequence at low field (1.05 T), with application to hyperpolarized [1-13C] pyruvate imaging of the mouse brain. The sequence achieves sufficient signal-to-noise to monitor the conversion of hyperpolarized [1-13C] pyruvate to lactate in the mouse brain. Hyperpolarized pyruvate imaging of mouse brain metabolism using an absorptive-mode EPSI sequence can be applied to more sophisticated murine disease and treatment models. The simple modifications presented in this work, which permit absorptive-mode detection, are directly translatable to human clinical imaging and generate improved absorptive-mode spectra without the need for refocusing pulses.

  20. Fourier domain low coherence interferometry for detection of early colorectal cancer development in the AOM rat model

    NASA Astrophysics Data System (ADS)

    Robles, Francisco E.; Zhu, Yizheng; Lee, Jin; Sharma, Sheela; Wax, Adam

    2011-03-01

    We present Fourier domain low coherence interferometry (fLCI) applied to the detection of preneoplastic changes in the colon using the ex-vivo azoxymethane (AOM) rat carcinogenesis model. fLCI measures depth resolved spectral oscillations, also known as local oscillations, resulting from coherent fields induced by the scattering of cell nuclei. The depth resolution of fLCI permits nuclear morphology measurements within thick tissues, making the technique sensitive to the earliest stages of precancerous development. To achieve depth resolved spectroscopic analysis, we use the dual window method, which obtains simultaneously high spectral and depth resolution and yields access to the local oscillations. The results show highly statistically significant differences between the AOM-treated and control group samples. Further, the results suggest that fLCI may be used to detect the field effect of carcinogenesis, in addition to identifying specific areas where more advanced neoplastic development has occurred.

  1. Method for quantitative determination and separation of trace amounts of chemical elements in the presence of large quantities of other elements having the same atomic mass

    DOEpatents

    Miller, C.M.; Nogar, N.S.

    1982-09-02

    Photoionization via autoionizing atomic levels combined with conventional mass spectroscopy provides a technique for quantitative analysis of trace quantities of chemical elements in the presence of much larger amounts of other elements with substantially the same atomic mass. Ytterbium samples smaller than 10 ng have been detected using an ArF* excimer laser which provides the atomic ions for a time-of-flight mass spectrometer. Elemental selectivity of greater than 5:1 with respect to lutetium impurity has been obtained. Autoionization via a single photon process permits greater photon utilization efficiency because of its greater absorption cross section than bound-free transitions, while maintaining sufficient spectroscopic structure to allow significant photoionization selectivity between different atomic species. Separation of atomic species from others of substantially the same atomic mass is also described.

  2. The platinum microelectrode/Nafion interface - An electrochemical impedance spectroscopic analysis of oxygen reduction kinetics and Nafion characteristics

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Arvind; Dave, Bhasker; Srinivasan, Supramaniam; Appleby, John A.; Martin, Charles R.

    1992-01-01

    The objectives of this study were to use electrochemical impedance spectroscopy (EIS) to study the oxygen-reduction reaction under lower humidification conditions than previously studied. The EIS technique permits the discrimination of electrode kinetics of oxygen reduction, mass transport of O2 in the membrane, and the electrical characteristics of the membrane. Electrode-kinetic parameters for the oxygen-reduction reaction, corrosion current densities for Pt, and double-layer capacitances were calculated. The production of water due to electrochemical reduction of oxygen greatly influenced the EIS response and the electrode kinetics at the Pt/Nafion interface. From the finite-length Warburg behavior, a measure of the diffusion coefficient of oxygen in Nafion and diffusion-layer thickness was obtained. An analysis of the EIS data in the high-frequency domain yielded membrane and interfacial characteristics such as ionic conductivity of the membrane, membrane grain-boundary capacitance and resistance, and uncompensated resistance.

  3. Markov state models and molecular alchemy

    NASA Astrophysics Data System (ADS)

    Schütte, Christof; Nielsen, Adam; Weber, Marcus

    2015-01-01

    In recent years, Markov state models (MSMs) have attracted a considerable amount of attention with regard to modelling conformation changes and associated function of biomolecular systems. They have been used successfully, e.g. for peptides including time-resolved spectroscopic experiments, protein function and protein folding, DNA and RNA, and ligand-receptor interaction in drug design and more complicated multivalent scenarios. In this article, a novel reweighting scheme is introduced that allows one to construct an MSM for a certain molecular system out of an MSM for a similar system. This permits studying how molecular properties on long timescales differ between similar molecular systems without performing full molecular dynamics simulations for each system under consideration. The performance of the reweighting scheme is illustrated for simple test cases, including one where the main wells of the respective energy landscapes are located differently, and an alchemical transformation of butane to pentane where the dimension of the state space is changed.
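
    A minimal MSM sketch, independent of the paper's reweighting scheme: count transitions in a discretized trajectory at a fixed lag time, row-normalize to obtain the transition matrix, and read off the stationary distribution and slowest implied timescale. The two-state trajectory here is synthetic.

```python
import numpy as np

def msm_transition_matrix(dtraj, n_states, lag=1):
    """Maximum-likelihood MSM: count transitions at the given lag time in
    a discretized trajectory and row-normalize."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

# Toy two-well trajectory: rare stochastic hops between states 0 and 1.
rng = np.random.default_rng(8)
dtraj, s = [], 0
for _ in range(10000):
    if rng.random() < (0.02 if s == 0 else 0.05):
        s = 1 - s
    dtraj.append(s)

T = msm_transition_matrix(np.array(dtraj), 2)
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                               # stationary distribution
lam2 = np.sort(np.real(evals))[0]            # second-largest eigenvalue
print("transition matrix:\n", T)
print("stationary distribution:", pi)
print("implied timescale (steps):", -1.0 / np.log(lam2))
```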

  4. Improvements in Raman Lidar Measurements Using New Interference Filter Technology

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Potter, John R.; Tola, Rebecca; Veselovskii, Igor; Cadirola, Martin; Rush, Kurt; Comer, Joseph

    2006-01-01

    Narrow-band interference filters with improved transmission in the ultraviolet have been developed under NASA-funded research and used in the Raman Airborne Spectroscopic Lidar (RASL) in ground-based, upward-looking tests. Measurements were made of atmospheric water vapor, cirrus cloud optical properties, and carbon dioxide that improve upon any previously demonstrated using Raman lidar. Daytime boundary- and mixed-layer profiling of water vapor mixing ratio up to an altitude of approximately 4 km is performed with less than 5% random error, using temporal and spatial resolutions of 2 minutes and 60-210 m, respectively. Daytime cirrus cloud optical depth and extinction-to-backscatter ratio measurements are made using 1-minute averages. Sufficient signal strength is demonstrated to permit the simultaneous profiling of carbon dioxide and water vapor mixing ratio into the free troposphere during the nighttime. A description of the filter technology developments is provided, followed by examples of the improved Raman lidar measurements.

  5. Compact Micromachined Bandpass Filters for Infrared Planetary Spectroscopy

    NASA Technical Reports Server (NTRS)

    Brown, Ari D.; Aslam, Shahid; Chervenak, James A.; Huang, Wei-Chung; Merrell, Willie; Quijada, Manuel

    2011-01-01

    The thermal instrument strawman payload of the Jupiter Europa Orbiter on the Europa Jupiter Science Mission will map out thermal anomalies, the structure, and atmospheric conditions of Europa and Jupiter within the 7-100 micron spectral range. One key requirement for the payload is that the mass cannot exceed 3.7 kg. Consequently, a new generation of light-weight miniaturized spectrometers needs to be developed. On the path toward developing these spectrometers is development of ancillary miniaturized spectroscopic components. In this paper, we present a strategy for making radiation hard and low mass FIR band pass metal mesh filters. Our strategy involves using MEMS-based fabrication techniques, which will permit the quasi-optical filter structures to be made with micron-scale precision. This will enable us to achieve tight control over both the pass band of the filter and the micromachined silicon support structure architecture, which will facilitate integration of the filters for a variety of applications.

  6. Measuring changes in chemistry, composition, and molecular structure within hair fibers by infrared and Raman spectroscopic imaging.

    PubMed

    Zhang, Guojin; Senak, Laurence; Moore, David J

    2011-05-01

    Spatially resolved infrared (IR) and Raman images are acquired from human hair cross sections or intact hair fibers. The full informational content of these spectra is spatially correlated to hair chemistry, anatomy, and structural organization through univariate and multivariate data analysis. Specific IR and Raman images from untreated human hair describing the spatial dependence of lipid and protein distribution, protein secondary structure, lipid chain conformational order, and the distribution of disulfide cross-links in hair protein are presented in this study. Factor analysis of the image plane acquired with IR microscopy in hair sections permits delineation of specific micro-regions within the hair. These data indicate that both IR and Raman imaging of molecular structural changes in a specific region of hair will prove to be valuable tools in the understanding of hair structure, physiology, and the effect of various stresses upon its integrity.

  7. In-situ investigation of protein and DNA structure using UVRRS

    NASA Astrophysics Data System (ADS)

    Greek, L. Shane; Schulze, H. Georg; Blades, Michael W.; Haynes, Charles A.; Turner, Robin F. B.

    1997-05-01

    Ultraviolet resonance Raman spectroscopy (UVRRS) has the potential to become a sensitive, specific, versatile bioanalytical and biophysical technique for routine investigations of proteins, DNA, and their monomeric components, as well as a variety of smaller, physiologically important aromatic molecules. The transition of UVRRS from a complex, specialized spectroscopic method to a common laboratory assay depends upon several developments, including a robust sample introduction method permitting routine, in situ analysis in standard laboratory environments. To this end, we recently reported the first fiber-optic probes suitable for deep-UV pulsed laser UVRRS. In this paper, we extend this work by demonstrating the applicability of such probes to studies of biochemical relevance, including investigations of the resonance enhancement of phosphotyrosine, thermal denaturation of RNase T1, and specific and non-specific protein binding. The advantages and disadvantages of the probes are discussed with reference to sample conditions and probe design considerations.

  8. The CASTLES Imaging Survey of Gravitational Lenses

    NASA Astrophysics Data System (ADS)

    Peng, C. Y.; Falco, E. E.; Lehar, J.; Impey, C. D.; Kochanek, C. S.; McLeod, B. A.; Rix, H.-W.

    1997-12-01

    The CASTLES survey (Cfa-Arizona-(H)ST-Lens-Survey) is imaging most known small-separation gravitational lenses (or lens candidates), using the NICMOS camera (mostly H-band) and the WFPC2 (V and I band) on HST. To date nearly half of the IR imaging survey has been completed. The main goals are: (1) to search for lens galaxies where none have been directly detected so far; (2) to obtain photometric redshift estimates (VIH) for the lenses where no spectroscopic redshifts exist; (3) to study and model the lens galaxies in detail, in part to study the mass distribution within them, in part to identify "simple" systems that may permit accurate time delay estimates for H_0; (4) to measure the M/L evolution of the sample of lens galaxies with look-back time (to z ~ 1); (5) to determine directly which fraction of sources are lensed by ellipticals vs. spirals. We will present the survey specifications and the images obtained so far.

  9. A vibrational spectroscopic and principal component analysis of triarylmethane dyes by comparative laboratory and portable instrumentation

    NASA Astrophysics Data System (ADS)

    Doherty, B.; Vagnini, M.; Dufourmantelle, K.; Sgamellotti, A.; Brunetti, B.; Miliani, C.

    2014-03-01

    This contribution examines the utility of vibrational spectroscopy by bench and portable Raman/surface-enhanced Raman and infrared methods for the investigation of ten early triarylmethane dye powder references and dye solutions applied on paper. The complementary information afforded by the techniques is shown to play a key role in the identification of specific spectral marker ranges to distinguish early synthetic dyes of art-historical interest through the elaboration of an in-house database of modern organic dyes. Chemometric analysis has permitted a separation of data by the discrimination of di-phenyl-naphthalenes and triphenylmethanes (di-amino and tri-amino derivatives). This work serves as a prelude to the validation of a non-invasive working method for in situ characterization of these synthetic dyes through a careful comparison of the respective strengths and limitations of each portable technique.

  10. United time-frequency spectroscopy for dynamics and global structure.

    PubMed

    Marian, Adela; Stowe, Matthew C; Lawall, John R; Felinto, Daniel; Ye, Jun

    2004-12-17

    Ultrashort laser pulses have thus far been used in two distinct modes. In the time domain, the pulses have allowed probing and manipulation of dynamics on a subpicosecond time scale. More recently, phase stabilization has produced optical frequency combs with absolute frequency reference across a broad bandwidth. Here we combine these two applications in a spectroscopic study of rubidium atoms. A wide-bandwidth, phase-stabilized femtosecond laser is used to monitor the real-time dynamic evolution of population transfer. Coherent pulse accumulation and quantum interference effects are observed and well modeled by theory. At the same time, the narrow linewidth of individual comb lines permits a precise and efficient determination of the global energy-level structure, providing a direct connection among the optical, terahertz, and radio-frequency domains. The mechanical action of the optical frequency comb on the atomic sample is explored and controlled, leading to precision spectroscopy with an appreciable reduction in systematic errors.

  11. Characterization of blue decorated Renaissance pottery fragments from Caltagirone (Sicily, Italy)

    NASA Astrophysics Data System (ADS)

    Barilaro, D.; Crupi, V.; Interdonato, S.; Majolino, D.; Venuti, V.; Barone, G.; La Russa, M. F.; Bardelli, F.

    2008-07-01

    Renaissance blue decorated pottery fragments from the archaeological site of Caltagirone (Sicily, Italy) were analysed by scanning electron microscopy - energy dispersive X-ray spectrometry (SEM/EDS). The samples were dated back to the 16th century AD on the basis of archaeological observations. The micro-chemical analyses were performed on the ceramic body and the decorated surface layer of the samples. In particular, the investigation addressed the characterization of the blue coating decorations. The obtained results allowed us to clearly identify smalt as the pigment. The presence of arsenic (As) was also revealed, and the Co/As ratio values were calculated and related to the different processes used for the pigment preparation. Further spectroscopic analyses, performed through X-ray absorption spectroscopy (XAS) carried out at the Co K-edge, confirmed the micro-analytical results and permitted us to identify the oxidation state and the local environment of the cobalt atoms.

  12. Pristine Early Eocene wood buried deeply in kimberlite from northern Canada.

    PubMed

    Wolfe, Alexander P; Csank, Adam Z; Reyes, Alberto V; McKellar, Ryan C; Tappert, Ralf; Muehlenbachs, Karlis

    2012-01-01

    We report exceptional preservation of fossil wood buried deeply in a kimberlite pipe that intruded northwestern Canada's Slave Province 53.3±0.6 million years ago (Ma), revealed during excavation of diamond source rock. The wood originated from forest surrounding the eruption zone and collapsed into the diatreme before resettling in volcaniclastic kimberlite to depths >300 m, where it was mummified in a sterile environment. Anatomy of the unpermineralized wood permits conclusive identification to the genus Metasequoia (Cupressaceae). The wood yields genuine cellulose and occluded amber, both of which have been characterized spectroscopically and isotopically. From cellulose δ18O and δ2H measurements, we infer that Early Eocene paleoclimates in the western Canadian subarctic were 12-17°C warmer and four times wetter than present. Canadian kimberlites offer Lagerstätte-quality preservation of wood from a region with limited alternate sources of paleobotanical information.

  13. Pristine Early Eocene Wood Buried Deeply in Kimberlite from Northern Canada

    PubMed Central

    Wolfe, Alexander P.; Csank, Adam Z.; Reyes, Alberto V.; McKellar, Ryan C.; Tappert, Ralf; Muehlenbachs, Karlis

    2012-01-01

    We report exceptional preservation of fossil wood buried deeply in a kimberlite pipe that intruded northwestern Canada’s Slave Province 53.3±0.6 million years ago (Ma), revealed during excavation of diamond source rock. The wood originated from forest surrounding the eruption zone and collapsed into the diatreme before resettling in volcaniclastic kimberlite to depths >300 m, where it was mummified in a sterile environment. Anatomy of the unpermineralized wood permits conclusive identification to the genus Metasequoia (Cupressaceae). The wood yields genuine cellulose and occluded amber, both of which have been characterized spectroscopically and isotopically. From cellulose δ18O and δ2H measurements, we infer that Early Eocene paleoclimates in the western Canadian subarctic were 12–17°C warmer and four times wetter than present. Canadian kimberlites offer Lagerstätte-quality preservation of wood from a region with limited alternate sources of paleobotanical information. PMID:23029080

  14. Combination of laser-induced breakdown spectroscopy and Raman spectroscopy for multivariate classification of bacteria

    NASA Astrophysics Data System (ADS)

    Prochazka, D.; Mazura, M.; Samek, O.; Rebrošová, K.; Pořízka, P.; Klus, J.; Prochazková, P.; Novotný, J.; Novotný, K.; Kaiser, J.

    2018-01-01

    In this work, we investigate the impact of data provided by complementary laser-based spectroscopic methods on multivariate classification accuracy. Discrimination and classification of five Staphylococcus bacterial strains and one strain of Escherichia coli are presented. The technique that we used for measurements is a combination of Raman spectroscopy and Laser-Induced Breakdown Spectroscopy (LIBS). The obtained spectroscopic data were then processed using multivariate data analysis algorithms. Principal Component Analysis (PCA) was selected as the most suitable technique for visualization of the bacterial strain data. To classify the bacterial strains, we used neural networks, namely a supervised version of Kohonen's self-organizing maps (SOM). We processed the results in three different ways: from the LIBS measurements alone, from the Raman measurements alone, and from the merged data of both methods, and then compared the three sets of results. By applying PCA to the Raman spectroscopy data, we observed that two bacterial strains were fully distinguished from the rest of the data set. In the case of the LIBS data, three bacterial strains were fully discriminated. Using a combination of data from both methods, we achieved complete discrimination of all bacterial strains. All the data were classified with a high success rate using the SOM algorithm, and the most accurate classification was obtained from the combination of data from both techniques. The classification accuracy varied depending on the specific samples and techniques: for LIBS it ranged from 45% to 100%, for Raman spectroscopy from 50% to 100%, and for the merged data all samples were classified correctly. Based on the results of the experiments presented in this work, we can assume that the combination of Raman spectroscopy and LIBS significantly enhances the discrimination and classification accuracy of bacterial species and strains, owing to the complementarity of the chemical information obtained by the two methods.
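
    A minimal sketch of the low-level data-fusion step described above, assuming the LIBS and Raman spectra are concatenated per sample before dimensionality reduction. scikit-learn has no supervised SOM, so a k-nearest-neighbours classifier stands in for it here, and all data are synthetic.

```python
# Sketch of low-level fusion: concatenate LIBS and Raman spectra per sample,
# reduce with PCA, then classify. scikit-learn has no supervised SOM, so a
# k-nearest-neighbours classifier stands in; all data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
libs = rng.random((60, 2048))             # LIBS spectra (synthetic)
raman = rng.random((60, 1024))            # Raman spectra (synthetic)
labels = np.repeat(np.arange(6), 10)      # six strains, ten samples each

fused = np.hstack([libs, raman])                     # merged feature vectors
reduced = PCA(n_components=10).fit_transform(fused)  # compress before classifying
print(cross_val_score(KNeighborsClassifier(3), reduced, labels, cv=5).mean())
```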

  15. Organic Scintillation Detectors for Spectroscopic Radiation Portal Monitors

    NASA Astrophysics Data System (ADS)

    Paff, Marc Gerrit

    Thousands of radiation portal monitors have been deployed worldwide to detect and deter the smuggling of nuclear and radiological materials that could be used in nefarious acts. Radiation portal monitors are often installed at bottlenecks where large amounts of people or goods must traverse. Examples of use include scanning cargo containers at shipping ports, vehicles at border crossings, and people at high-profile functions and events. Traditional radiation portal monitors contain separate detectors for passively measuring neutron and gamma ray count rates. 3He tubes embedded in polyethylene and slabs of plastic scintillators are the most common detector materials used in radiation portal monitors. The radiation portal monitor alarm mechanism relies on measuring radiation count rates above user-defined alarm thresholds. These alarm thresholds are set above natural background count rates. Minimizing false alarms caused by natural background and maximizing sensitivity to weakly emitting threat sources must be balanced when setting these alarm thresholds. Current radiation portal monitor designs suffer from frequent nuisance radiation alarms. These nuisance alarms are most frequently caused by shipments of cargo containing large quantities of naturally occurring radioactive material, such as kitty litter, as well as by humans who have recently undergone a nuclear medicine procedure, particularly 99mTc treatments. Current radiation portal monitors typically lack spectroscopic capabilities, so nuisance alarms must be screened out in time-intensive secondary inspections with handheld radiation detectors. Radiation portal monitors using organic liquid scintillation detectors were designed, built, and tested. A number of algorithms were developed to perform on-the-fly radionuclide identification of single and combination radiation sources moving past the portal monitor at speeds up to 2.2 m/s. The portal monitor designs were tested extensively with a variety of shielded and unshielded radiation sources, including special nuclear material, at the European Commission Joint Research Centre in Ispra, Italy. Common medical isotopes were measured at the C.S. Mott Children's Hospital and added to the radionuclide identification algorithms.
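
    The count-rate alarm mechanism described above can be illustrated with a simple threshold rule: treat background counts as Poisson-distributed and alarm when a measurement exceeds the background mean by k standard deviations. This is a generic sketch with invented numbers, not the dissertation's identification algorithms.

```python
# Generic gross-count alarm rule: model background counts as Poisson and
# alarm when a measurement exceeds the mean by k standard deviations.
# The numbers are invented; this is not the dissertation's algorithm.
import math

def alarm_threshold(background_mean_counts: float, k: float = 5.0) -> float:
    """Counts per interval above which a gross-count alarm is raised."""
    return background_mean_counts + k * math.sqrt(background_mean_counts)

print(alarm_threshold(400.0))  # 400 + 5 * 20 = 500.0 counts
```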

  16. The DEEP2 Galaxy Redshift Survey: The Voronoi-Delaunay Method Catalog of Galaxy Groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerke, Brian F.; Newman, Jeffrey A.

    2012-02-14

    We use the first 25% of the DEEP2 Galaxy Redshift Survey spectroscopic data to identify groups and clusters of galaxies in redshift space. The data set contains 8370 galaxies with confirmed redshifts in the range 0.7 ≤ z ≤ 1.4, over one square degree on the sky. Groups are identified using an algorithm (the Voronoi-Delaunay Method) that has been shown to accurately reproduce the statistics of groups in simulated DEEP2-like samples. We optimize this algorithm for the DEEP2 survey by applying it to realistic mock galaxy catalogs and assessing the results using a stringent set of criteria for measuring group-finding success, which we develop and describe in detail here. We find in particular that the group-finder can successfully identify ~78% of real groups and that ~79% of the galaxies that are true members of groups can be identified as such. Conversely, we estimate that ~55% of the groups we find can be definitively identified with real groups and that ~46% of the galaxies we place into groups are interloper field galaxies. Most importantly, we find that it is possible to measure the distribution of groups in redshift and velocity dispersion, n(σ, z), to an accuracy limited by cosmic variance, for dispersions greater than 350 km s⁻¹. We anticipate that such measurements will allow strong constraints to be placed on the equation of state of the dark energy in the future. Finally, we present the first DEEP2 group catalog, which assigns 32% of the galaxies to 899 distinct groups with two or more members, 153 of which have velocity dispersions above 350 km s⁻¹. We provide locations, redshifts and properties for this high-dispersion subsample. This catalog represents the largest sample to date of spectroscopically detected groups at z ~ 1.
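
    As a rough illustration of the triangulation step underlying a VDM-style group finder, the sketch below Delaunay-triangulates mock galaxy positions and links pairs whose edge length falls below a cut. The positions and the cut are invented, and the published algorithm instead uses calibrated, anisotropic search cylinders in redshift space.

```python
# Simplified sketch of the Delaunay phase of a VDM-style group finder:
# triangulate galaxy positions and link pairs whose edge is shorter than a
# cut. Positions and the cut are invented; the published method instead
# uses calibrated, anisotropic redshift-space cylinders.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pos = rng.random((200, 3))      # mock comoving positions (arbitrary units)
tri = Delaunay(pos)
link_cut = 0.05

links = set()
for simplex in tri.simplices:   # each 3-D simplex holds 4 galaxy indices
    for i in range(4):
        for j in range(i + 1, 4):
            a, b = int(simplex[i]), int(simplex[j])
            if np.linalg.norm(pos[a] - pos[b]) < link_cut:
                links.add((min(a, b), max(a, b)))
print(len(links), "candidate group links")
```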

  17. VizieR Online Data Catalog: Redshift reliability flags (VVDS data) (Jamal+, 2018)

    NASA Astrophysics Data System (ADS)

    Jamal, S.; Le Brun, V.; Le Fevre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2017-09-01

    The VIMOS VLT Deep Survey (Le Fevre et al. 2013A&A...559A..14L) is a combination of 3 i-band magnitude limited surveys: Wide (17.5<=iAB<=22.5; 8.6deg2), Deep (17.5<=iAB<=24; 0.6deg2) and Ultra-Deep (23<=iAB<=24.75; 512arcmin2), which produced a total of 35526 spectroscopic galaxy redshifts between 0 and 6.7 (22434 in Wide, 12051 in Deep and 1041 in UDeep). We supplement spectra of the VIMOS VLT Deep Survey (VVDS) with newly defined redshift reliability flags obtained by clustering (unsupervised classification in machine learning) a set of descriptors from individual zPDFs. In this paper, we exploit a set of 24519 spectra from the VVDS database. After computing the zPDF for each individual spectrum, a set of eight descriptors of the zPDF is extracted to build a feature matrix X (dimensions: 24519 rows, 8 columns). Then, we use a clustering algorithm (unsupervised machine learning) to partition the feature space into five distinct clusters (C1, C2, C3, C4, C5), each depicting a different level of confidence to associate with the measured redshift zMAP (the Maximum-A-Posteriori estimate, corresponding to the maximum of the redshift PDF). The clustering results (C1, C2, C3, C4, C5) reported in the table are those used in the paper (Jamal et al., 2017) to present the new methodology for automating the zspec reliability assessment. In particular, we would like to point out that they were obtained from first tests conducted on the VVDS spectroscopic data (end of 2016). Therefore, the table does not depict immutable results (improvements are ongoing), and future updates of the VVDS redshift reliability flags can be expected. (1 data file).
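
    As an illustration of the partitioning step, the sketch below clusters a synthetic (N, 8) descriptor matrix into five groups. KMeans is used purely as a generic example; the paper's actual clustering algorithm and zPDF descriptors are not reproduced here.

```python
# Illustrative partition of an (N, 8) descriptor matrix into five clusters.
# KMeans is a generic stand-in; the paper's actual clustering algorithm and
# zPDF descriptors are not reproduced, and X here is random.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.random((24519, 8))      # one row of zPDF descriptors per spectrum
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(clusters))    # objects per reliability class C1..C5
```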

  18. Algorithmic analysis of relational learning processes in instructional technology: Some implications for basic, translational, and applied research.

    PubMed

    McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David

    2018-07-01

    A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper focuses on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may also have utility for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. We also present arguments that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model control of basic behavioral processes, which is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.
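
    As a purely illustrative sketch of what ongoing, dynamic assessment of learner performance might look like, the snippet below tracks a moving window of trial outcomes and steps difficulty up or down against mastery and struggle criteria. The window size and thresholds are invented; this is not the authors' algorithm.

```python
# Invented sketch of dynamic learner assessment: keep a moving window of
# trial outcomes and step difficulty up or down against mastery/struggle
# criteria. Window size and thresholds are illustrative, not the authors'.
from collections import deque

class AdaptiveTrials:
    def __init__(self, window=10, mastery=0.9, struggle=0.5):
        self.outcomes = deque(maxlen=window)
        self.mastery, self.struggle = mastery, struggle
        self.level = 1                       # current difficulty level

    def record(self, correct: bool) -> int:
        self.outcomes.append(correct)
        if len(self.outcomes) == self.outcomes.maxlen:
            rate = sum(self.outcomes) / len(self.outcomes)
            if rate >= self.mastery:
                self.level += 1              # advance to harder relations
                self.outcomes.clear()
            elif rate <= self.struggle:
                self.level = max(1, self.level - 1)  # back off
                self.outcomes.clear()
        return self.level

trials = AdaptiveTrials()
for outcome in [True] * 10:                  # a mastered window of trials
    level = trials.record(outcome)
print(level)                                 # 2: difficulty stepped up once
```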

  19. Revisiting Abell 2744: a powerful synergy of GLASS spectroscopy and HFF photometry

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    We present new emission line identifications and improve the lensing reconstruction of the mass distribution of galaxy cluster Abell 2744 using the Grism Lens-Amplified Survey from Space (GLASS) spectroscopy and the Hubble Frontier Fields (HFF) imaging. We performed blind and targeted searches for faint line emitters on all objects, including the arc sample, within the field of view (FoV) of the GLASS prime pointings. We report 55 high quality spectroscopic redshifts, 5 of which are for arc images. We also present an extensive analysis based on the HFF photometry, measuring the colors and photometric redshifts of all objects within the FoV, and comparing the spectroscopic and photometric redshift estimates. In order to improve the lens model of Abell 2744, we develop a rigorous algorithm to screen the arc images based on their colors and morphology, and select the most reliable ones to use. As a result, 25 systems (corresponding to 72 images) pass the screening process and are used to reconstruct the gravitational potential of the cluster, pixellated on an adaptive mesh. The resulting total mass distribution is compared with a stellar mass map obtained from the Spitzer Frontier Fields data in order to study the relative distribution of stars and dark matter in the cluster.
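
    One screening idea consistent with the color criterion described above: gravitational lensing is achromatic, so multiple images of the same source should share colors. The hedged sketch below flags systems whose image-to-image color scatter exceeds a tolerance; the tolerance and example photometry are invented, and this is not the paper's full color-plus-morphology algorithm.

```python
# Hedged sketch of a color-consistency screen: lensing is achromatic, so
# multiple images of one source should share colors. The tolerance and the
# example magnitudes are invented for illustration.
import numpy as np

def color_consistent(mags_band1, mags_band2, tol=0.25):
    """True if the color spread across images of one system is within tol."""
    colors = np.asarray(mags_band1) - np.asarray(mags_band2)
    return float(np.ptp(colors)) < tol

print(color_consistent([24.1, 24.6, 23.9], [23.5, 24.0, 23.4]))  # True
```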

  20. Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils

    PubMed Central

    Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng

    2017-01-01

    This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), an artificial neural network (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed to establish diagnosis models. The model accuracies were evaluated and compared using overall accuracies (OAs). The statistical significance of the differences between models was evaluated using McNemar's test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) showed no statistically significant differences in diagnosis accuracy (Z < 1.96, p > 0.05). These results indicated that it is feasible to diagnose soil arsenic contamination using reflectance spectroscopy, and that the appropriate combination of multivariate methods is important for improving diagnosis accuracies. PMID:28471412
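
    A minimal sketch of one pre-processing/feature-extraction/classification combination from the study (Savitzky-Golay smoothing, PCA features, random forest), using synthetic stand-ins for the soil spectra and labels.

```python
# Sketch of one combination from the study: Savitzky-Golay smoothing,
# PCA features, random-forest classifier. Spectra and labels are synthetic
# stand-ins for the soil dataset.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
spectra = rng.random((120, 600))    # reflectance spectra (synthetic)
labels = rng.integers(0, 2, 120)    # contaminated vs. uncontaminated

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
features = PCA(n_components=10).fit_transform(smoothed)
oa = cross_val_score(RandomForestClassifier(200), features, labels, cv=5).mean()
print(f"overall accuracy: {oa:.2f}")
```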
