NASA Astrophysics Data System (ADS)
Pavlov, S. S.; Dmitriev, A. Yu.; Chepurchenko, I. A.; Frontasyeva, M. V.
2014-11-01
An automation system for measuring the induced-activity gamma-ray spectra used in high-volume multi-element neutron activation analysis (NAA) was designed, developed, and implemented at the IBR-2 reactor of the Frank Laboratory of Neutron Physics. The system comprises three automatic sample changers serving three Canberra HPGe detector-based gamma-spectrometry systems. Each sample changer consists of a two-axis linear positioning module (DriveSet model M202A) and a disk with 45 slots for sample containers. The sample changer is controlled by a Systec Xemo S360U controller, with a positioning accuracy of up to 0.1 mm. Dedicated software automatically changes samples and measures gamma spectra while interacting continuously with the NAA database.
Vibrational energy distribution analysis (VEDA): scopes and limitations.
Jamróz, Michał H
2013-10-01
The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in any serious analysis of vibrational spectra. To perform a PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates; even for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from Gaussian output files. VEDA then automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements of each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent in any other program performing PED analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment, so there are no parameters to be tweaked. Furthermore, it provides a reliability factor for the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), together with several similarity metrics. It has been developed for automated spectral analysis, where the analyzed spectrum comes from a sample about which there is no prior knowledge, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
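As an illustration of the matching idea only (not the authors' PCA/ICA pipeline), the sketch below scores an unknown spectrum against a small reference library with cosine similarity and reports a crude reliability factor as the gap between the two best matches. The pigment names and all spectral values are invented for the example.

```python
# Toy spectral identification: cosine similarity against a reference library,
# with a simple "reliability" score (gap between best and second-best match).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(spectrum, library):
    # rank all references by similarity to the query spectrum
    scores = sorted(((cosine(spectrum, ref), name)
                     for name, ref in library.items()), reverse=True)
    (best, name), (second, _) = scores[0], scores[1]
    reliability = best - second   # crude stand-in for the paper's metric
    return name, best, reliability

library = {
    "azurite":   [0.1, 0.9, 0.2, 0.0, 0.1],   # made-up reference spectra
    "vermilion": [0.8, 0.1, 0.0, 0.7, 0.2],
}
unknown = [0.12, 0.85, 0.25, 0.02, 0.08]      # noisy azurite-like spectrum
print(identify(unknown, library))
```

A large gap between the two best scores signals a trustworthy identification; a small gap flags the result for review, which is the spirit of the reliability factor described above.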
NASA Astrophysics Data System (ADS)
Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.
2016-01-01
We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral matching of the MCR-decomposed Raman spectra against the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, plus 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the "molecular fingerprint" of keratin.
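The spectral-matching comparison can be illustrated with a Pearson correlation score against a keratin standard. This is a hypothetical stand-in: the spectra and the 0.9 threshold below are invented, and the actual study matched MCR-decomposed component spectra rather than raw traces.

```python
# Toy spectral matching: Pearson correlation of a decomposed component
# spectrum against a standard marker spectrum, with a positivity threshold.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def is_match(component, standard, threshold=0.9):
    # call the sample positive when the component resembles the marker
    return pearson(component, standard) >= threshold

keratin_std = [0.1, 0.3, 0.9, 0.4, 0.2, 0.1]        # invented standard
component   = [0.12, 0.28, 0.88, 0.42, 0.18, 0.11]  # keratin-like component
print(pearson(component, keratin_std), is_match(component, keratin_std))
```

The choice of threshold trades sensitivity against specificity, which is why the reported sensitivity range depends on how positivity is defined.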
Analysis of spectra using correlation functions
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Norton, Robert H.
1988-01-01
A novel method is presented for the quantitative analysis of spectra based on the properties of the cross correlation between a real spectrum and either a numerical synthesis or a laboratory simulation. A new goodness-of-fit criterion, the heteromorphic coefficient H, is proposed; it is zero when a fit is achieved and varies smoothly through zero as the iteration proceeds, providing a powerful tool for automatic or near-automatic analysis. It is also shown that H can be rendered substantially noise-immune, permitting the analysis of very weak spectra well below the apparent noise level and, as a byproduct, providing Doppler-shift and radial-velocity information with excellent precision. The technique is in regular use in the Atmospheric Trace Molecule Spectroscopy (ATMOS) project and operates in an interactive, real-time computing environment with turn-around times of a few seconds or less.
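The core of the correlation approach can be sketched in a few lines: cross-correlate an observed spectrum with a template and read the shift off the correlation peak, which is what carries the Doppler information. This is only an illustration of the principle, not the ATMOS code, and the spectra are synthetic.

```python
# Toy cross-correlation: find the channel lag that best aligns an observed
# spectrum with a template; the lag of the correlation peak encodes the shift.
def xcorr_shift(observed, template, max_lag=5):
    best_lag, best_score = 0, float("-inf")
    n = len(observed)
    for lag in range(-max_lag, max_lag + 1):
        # correlate observed[i] with template shifted by `lag` channels
        score = sum(observed[i] * template[i - lag]
                    for i in range(n) if 0 <= i - lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

template = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
observed = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]   # template shifted right by 2
print(xcorr_shift(observed, template))       # expect a lag of 2
```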
The Automatic Recognition of the Abnormal Sky-subtraction Spectra Based on Hadoop
NASA Astrophysics Data System (ADS)
An, An; Pan, Jingchang
2017-10-01
Skylines superimpose on the target spectrum as a major source of noise. If a spectrum still contains strong skylight residuals after sky-subtraction processing, follow-up analysis of the target spectrum is hindered. At the same time, LAMOST can observe a large number of spectra every night, so an efficient platform is needed to recognize large numbers of abnormal sky-subtraction spectra quickly. Hadoop, a distributed parallel data-computing platform, can deal with large amounts of data effectively. In this paper, we first perform continuum normalization and then present a simple and effective method for automatically recognizing abnormal sky-subtraction spectra on the Hadoop platform. Experiments show that the Hadoop platform performs the recognition with greater speed and efficiency, and that the method effectively recognizes abnormal sky-subtraction spectra and locates abnormal skyline positions of different residual strengths, so it can be applied to the automatic detection of abnormal sky subtraction in large numbers of spectra.
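The two processing steps described above (continuum normalization followed by recognition of abnormal residuals) can be sketched on a single machine as follows. The Hadoop distribution layer is omitted, and the flux values, linear continuum model, and 3-sigma rule are illustrative assumptions rather than the paper's actual recipe.

```python
# Toy pipeline: divide out a least-squares linear continuum, then flag pixels
# whose normalized flux deviates from 1 by more than k robust sigmas.
def normalize_continuum(flux):
    # fit a straight-line continuum by least squares and divide it out
    n = len(flux)
    xs = range(n)
    mx = (n - 1) / 2.0
    my = sum(flux) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, flux)) / \
            sum((x - mx) ** 2 for x in xs)
    return [y / (my + slope * (x - mx)) for x, y in zip(xs, flux)]

def flag_residuals(norm_flux, k=3.0):
    # robust sigma from the median absolute deviation around 1.0
    dev = sorted(abs(y - 1.0) for y in norm_flux)
    mad = dev[len(dev) // 2] or 1e-9
    sigma = 1.4826 * mad
    return [i for i, y in enumerate(norm_flux) if abs(y - 1.0) > k * sigma]

flux = [10.0] * 20
flux[7] = 60.0        # strong skyline residual at pixel 7
positions = flag_residuals(normalize_continuum(flux))
print(positions)
```

In a Hadoop setting, each spectrum would be processed independently by a map task, which is what makes this kind of per-spectrum screening embarrassingly parallel.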
NASA Technical Reports Server (NTRS)
Goldman, Aaron
1999-01-01
The Langley-D.U. collaboration on the analysis of high-resolution infrared atmospheric spectra covered a number of important studies of trace gas identification and quantification from field spectra, and of spectral line parameters. The collaborative work included: quantification and monitoring of trace gases from ground-based spectra available from various locations and seasons and from balloon flights; studies toward identification and quantification of isotopic species, mostly oxygen and sulfur isotopes; searches for new species in the available spectra; updates of spectroscopic line parameters, combining laboratory and atmospheric spectra with theoretical spectroscopy methods; studies of trends of atmospheric trace constituents; and algorithm development, retrieval intercomparisons, and automation of the analysis of NDSC spectra, for both column amounts and vertical profiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lasche, George P.
2009-10-01
Cambio is an application intended to automatically read and display any spectrum file of any format in the world that the nuclear emergency response community might encounter. Cambio also provides an analysis capability suitable for HPGe spectra when the detector response and scattering environment are not well known. Why is Cambio needed? (1) Cambio solves the following problem: with over 50 types of formats from instruments used in the field and new format variations appearing frequently, it is impractical for every responder to have current versions of the manufacturer's software for every instrument used in the field. (2) Cambio converts field spectra to any one of several common formats that are used for analysis, saving valuable time in an emergency situation. (3) Cambio provides basic tools for comparing spectra, calibrating spectra, and isotope identification, with analysis suited especially for HPGe spectra. (4) Cambio has a batch-processing capability to automatically translate a large number of archival spectral files of any format to one of several common formats, such as the IAEA SPE or the DHS N42. Currently over 540 analysts and members of the nuclear emergency response community worldwide are on the distribution list for updates to Cambio. Cambio users come from all levels of government, university, and commercial partners around the world that support efforts to counter terrorist nuclear activities. Cambio is Unclassified Unlimited Release (UUR) and distributed by internet downloads, with email notifications whenever a new build of Cambio provides new formats, bug fixes, or new or improved capabilities. Cambio is also provided as a DLL to the Karlsruhe Institute for Transuranium Elements so that Cambio's automatic file-reading capability can be included at the Nucleonica web site.
NASA Technical Reports Server (NTRS)
Goldman, A.
2002-01-01
The Langley-D.U. collaboration on the analysis of high-resolution infrared atmospheric spectra covered a number of important studies of trace gas identification and quantification from field spectra, and of spectral line parameters. The collaborative work included: 1) quantification and monitoring of trace gases from ground-based spectra available from various locations and seasons and from balloon flights; 2) identification and preliminary quantification of several isotopic species, including oxygen and sulfur isotopes; 3) searches for new species in the available spectra, including the use of selective coadding of ground-based spectra for high signal-to-noise ratio; 4) updates of spectroscopic line parameters, combining laboratory and atmospheric spectra with theoretical spectroscopy methods; 5) studies of trends and correlations of atmospheric trace constituents; and 6) algorithm development, retrieval intercomparisons, and automation of the analysis of NDSC spectra, for both column amounts and vertical profiles.
FAMA: Fast Automatic MOOG Analysis
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2014-02-01
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to uncertainties in the stellar parameters. Convergence criteria are not fixed a priori but are instead based on the quality of the spectra.
Garrett, Daniel S; Gronenborn, Angela M; Clore, G Marius
2011-12-01
The Contour Approach to Peak Picking was developed to aid in the analysis and interpretation of multidimensional NMR spectra of large biomolecules. In essence, it comprises an interactive graphics software tool for computationally selecting resonance positions in heteronuclear 3- and 4D spectra. Copyright © 2011. Published by Elsevier Inc.
Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien
2016-01-01
We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iterating on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17%, reaching an average error of 0.265 ppm. Over 60% of the test chemical shifts were predicted within 0.2 ppm, while only 5% still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available. Graphical abstract: Self-learning loop. Any progress in the prediction (forward problem) improves the assignment ability (reverse problem), and vice versa.
NASA Astrophysics Data System (ADS)
Bernard, D.; Serot, O.; Simon, E.; Boucher, L.; Plumeri, S.
2018-01-01
Photon interrogation analysis is a nondestructive technique that allows fissile materials in nuclear waste packages to be identified and quantified. This paper details an automatic procedure developed to simulate the delayed γ-ray spectra following the photofission of several actinides. This calculation tool will be helpful for the detailed design of such a facility (collimation, shielding, background-noise optimization, etc.) and for its on-line analysis.
NASA Technical Reports Server (NTRS)
Wardroper, A. M. K.; Brooks, P. W.; Humberston, M. J.; Maxwell, J. R.
1977-01-01
A computer method is described for the automatic classification of triterpanes and steranes into gross structural type from their mass spectral characteristics. The method has been applied to the spectra obtained by gas-chromatographic/mass-spectrometric analysis of two mixtures of standards and of hydrocarbon fractions isolated from Green River and Messel oil shales. Almost all of the steranes and triterpanes identified previously in both shales were classified, in addition to a number of new components. The results indicate that classification of such alkanes is possible with a laboratory computer system. The method has application to diagenesis and maturation studies as well as to oil/oil and oil/source rock correlations in which rapid screening of large numbers of samples is required.
NASA Astrophysics Data System (ADS)
Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi
2016-10-01
The capability of field emission scanning electron microscopy and energy-dispersive X-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize a material by its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consists of two parts: automatic feature extraction and classification. For feature extraction, an algorithm is introduced that extracts discriminant features from both the images and the spectra of cervical cells in FE-SEM/EDX data. Textural features are extracted from the FE-SEM/EDX image using a gray-level co-occurrence matrix technique, while the FE-SEM/EDX spectral features are calculated from peak heights and corrected areas under the peaks. A discriminant analysis technique is employed to predict the cervical precancerous stage in three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
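The gray-level co-occurrence matrix (GLCM) texture step can be illustrated with a toy example. The 4x4 image and the single contrast feature below are standard textbook constructs, not the paper's FE-SEM data or its exact feature set.

```python
# Toy GLCM: count co-occurring gray-level pairs at a fixed offset, then
# derive one classic texture feature (contrast) from the counts.
def glcm(image, levels, dx=1, dy=0):
    # co-occurrence counts for pixel pairs offset by (dx, dy)
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    # weighted by squared gray-level difference, normalized by pair count
    total = sum(sum(row) for row in m) or 1
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
g = glcm(img, levels=4)
print(contrast(g))
```

In practice several offsets and features (contrast, energy, homogeneity, correlation) are pooled into a feature vector before classification.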
NASA Astrophysics Data System (ADS)
Gelfusa, M.; Murari, A.; Lungaroni, M.; Malizia, A.; Parracino, S.; Peluso, E.; Cenciarelli, O.; Carestia, M.; Pizzoferrato, R.; Vega, J.; Gaudio, P.
2016-10-01
Two of the major new concerns of modern societies are biosecurity and biosafety. Several biological agents (BAs) such as toxins, bacteria, viruses, fungi, and parasites are able to cause damage to living systems, whether humans, animals, or plants. Optical techniques, in particular LIght Detection And Ranging (LIDAR), based on the transmission of laser pulses and analysis of the return signals, can be successfully applied to monitoring the release of biological agents into the atmosphere. It is well known that most biological agents tend to emit specific fluorescence spectra, which in principle allow their detection and identification if excited by light of the appropriate wavelength. For these reasons, the detection of the UV Light-Induced Fluorescence (UV-LIF) emitted by BAs is particularly promising. On the other hand, the stand-off detection of BAs poses a series of challenging issues; one of the most severe is the automatic discrimination between various agents that emit very similar fluorescence spectra. In this paper, a new data analysis method, based on a combination of advanced filtering techniques and Support Vector Machines, is described. The proposed approach covers all aspects of the data analysis process, from filtering and denoising to automatic recognition of the agents. A systematic series of numerical tests has been performed to assess the potential and limits of the proposed methodology. The first investigations of experimental data have already given very encouraging results.
XAP, a program for deconvolution and analysis of complex X-ray spectra
Quick, James E.; Haleby, Abdul Malik
1989-01-01
The X-ray analysis program (XAP) is a spectral-deconvolution program written in BASIC and specifically designed to analyze complex spectra produced by energy-dispersive X-ray analytical systems (EDS). XAP compensates for spectrometer drift, utilizes digital filtering to remove background from spectra, and solves for element abundances by least-squares, multiple-regression analysis. Rather than base analyses on only a few channels, broad spectral regions of a sample are reconstructed from standard reference spectra. The effects of this approach are (1) elimination of tedious spectrometer adjustments, (2) removal of background independent of sample composition, and (3) automatic correction for peak overlaps. Although the program was written specifically to operate a KEVEX 7000 X-ray fluorescence analytical system, it could be adapted with minor modifications to analyze spectra produced by scanning electron microscopes and electron microprobes, as well as X-ray diffractometer patterns obtained from whole-rock powders.
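The least-squares reconstruction idea behind XAP can be sketched with two reference spectra and explicit 2x2 normal equations: model the measured spectrum as a weighted sum of standards and solve for the weights. The "Fe" and "Ni" reference shapes are invented for the example; XAP itself handles many references, drift, and background.

```python
# Toy least-squares unmixing: fit mixed = cA*ref_a + cB*ref_b by solving
# the 2x2 normal equations for the two abundance coefficients.
def fit_two_references(mixed, ref_a, ref_b):
    aa = sum(x * x for x in ref_a)
    bb = sum(x * x for x in ref_b)
    ab = sum(x * y for x, y in zip(ref_a, ref_b))
    ay = sum(x * y for x, y in zip(ref_a, mixed))
    by = sum(x * y for x, y in zip(ref_b, mixed))
    det = aa * bb - ab * ab
    return (ay * bb - by * ab) / det, (aa * by - ab * ay) / det

ref_fe = [1.0, 4.0, 1.0, 0.0, 0.0]   # made-up "Fe" reference peak
ref_ni = [0.0, 0.0, 1.0, 3.0, 1.0]   # made-up "Ni" reference peak
mixed  = [2.0, 8.0, 2.5, 1.5, 0.5]   # 2.0 x Fe + 0.5 x Ni, noise-free
print(fit_two_references(mixed, ref_fe, ref_ni))
```

Because the fit uses whole spectral regions rather than a few channels, overlapping peaks (channel 2 here contributes to both references) are apportioned automatically, which is point (3) in the abstract.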
Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien
2015-08-01
We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic, novel, self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.
MARZ: Manual and automatic redshifting software
NASA Astrophysics Data System (ADS)
Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.
2016-04-01
The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based JavaScript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high-quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling through automatic results, manual template comparison, or marking spectral features.
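Cross-correlation redshifting can be illustrated on a log-wavelength grid, where a redshift becomes a constant pixel shift. This toy sketch is far simpler than AUTOZ/MARZ; the grid spacing DLOG and the single emission-line "spectra" are assumptions made for the example.

```python
# Toy redshifting: on a log-wavelength grid, find the template lag that
# maximizes the cross-correlation, then convert the lag to a redshift z.
import math

DLOG = 1e-3   # assumed log-wavelength step per pixel

def best_lag(observed, template, max_lag=30):
    best, score = 0, float("-inf")
    n = len(observed)
    for lag in range(0, max_lag + 1):
        s = sum(observed[i] * template[i - lag] for i in range(lag, n))
        if s > score:
            best, score = lag, s
    return best

def redshift_from_lag(lag):
    # a shift of `lag` pixels in log-wavelength corresponds to 1+z = e^(lag*DLOG)
    return math.exp(lag * DLOG) - 1.0

template = [0.0] * 100
template[20] = 1.0        # emission line in the rest frame
observed = [0.0] * 100
observed[45] = 1.0        # same line, shifted redward by 25 pixels
z = redshift_from_lag(best_lag(observed, template))
print(round(z, 5))
```

Real pipelines correlate against many stellar and galaxy templates, apodize and filter the spectra, and rank candidate peaks; only the winning template and lag survive as the automatic redshift.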
Terahertz spectroscopic investigation of human gastric normal and tumor tissues
NASA Astrophysics Data System (ADS)
Hou, Dibo; Li, Xian; Cai, Jinhui; Ma, Yehao; Kang, Xusheng; Huang, Pingjie; Zhang, Guangxin
2014-09-01
Human dehydrated normal and cancerous gastric tissues were measured using transmission time-domain terahertz spectroscopy. Based on the obtained terahertz absorption spectra, the contrasts between the two kinds of tissue were investigated and techniques for automatic identification of cancerous tissue were studied. Distinctive differences were demonstrated in both the shape and amplitude of the absorption spectra of normal and tumor tissue. Additionally, spectral features in the ranges of 0.2-0.5 THz and 1-1.5 THz were revealed for all cancerous gastric tissues. To systematically achieve the identification of gastric cancer, principal component analysis combined with a t-test was used to extract the information giving the best distinction between the two types. Two approaches, K-means clustering and a support vector machine (SVM), were then applied to classify the processed terahertz data into normal and cancerous groups. The SVM presented a satisfactory result with fewer false classifications. The results of this study indicate the potential of the terahertz technique to detect gastric cancer. The applied data-analysis methodology provides a suggestion for automatic discrimination of terahertz spectra in other applications.
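The clustering step can be illustrated with a minimal one-dimensional K-means. The feature values below are invented stand-ins for principal-component scores of normal and cancerous spectra, and the paper's SVM classifier is not reproduced here.

```python
# Toy 1-D K-means with two clusters: alternate assignment and centroid
# updates until the two groups stabilize. Assumes both clusters are non-empty.
def kmeans_two(values, iters=20):
    c0, c1 = min(values), max(values)          # simple initialization
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0)                 # recompute centroids
        c1 = sum(g1) / len(g1)
    return g0, g1

normal_like = [0.9, 1.0, 1.1, 1.2]   # invented PC scores, normal tissue
tumor_like  = [3.0, 3.1, 3.3]        # invented PC scores, tumor tissue
g0, g1 = kmeans_two(normal_like + tumor_like)
print(sorted(g0), sorted(g1))
```

K-means is unsupervised, which is why a supervised classifier such as an SVM, trained on labeled spectra, tends to yield fewer misclassifications on the same features.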
SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments
NASA Technical Reports Server (NTRS)
Leonard, R. F.
1977-01-01
A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations by using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires locating the peaks in X-ray spectra, determining their intensities, identifying their origins, and determining the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.
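The peak-location step can be sketched as a local-maximum search above a noise threshold. SAVLOC itself ran on a PDP-15 in a different language entirely, so this fragment only illustrates the idea; the spectrum and threshold are invented.

```python
# Toy peak finder: a channel is a peak if it rises above a noise threshold
# and is a local maximum relative to its immediate neighbors.
def find_peaks(counts, threshold):
    peaks = []
    for i in range(1, len(counts) - 1):
        if counts[i] > threshold and counts[i - 1] < counts[i] >= counts[i + 1]:
            peaks.append(i)
    return peaks

# synthetic X-ray spectrum: flat background with two fluorescence peaks
spectrum = [5, 6, 5, 40, 80, 42, 6, 5, 20, 55, 21, 5, 6]
print(find_peaks(spectrum, threshold=15))
```

After location, each peak's intensity would be integrated over its channels with the local background subtracted, and the peak energy mapped to an element to complete the identification step described above.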
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial-intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate the measured wavelengths that belong to the continuous feature, so that the model can be adapted without the restriction of measuring a target baseline for training. The main contributions of this paper are: analyzing a flame spectra database by computing Jolliffe statistics from principal component analysis to detect wavelengths not correlated with most of the measured data, which correspond to the baseline; systematically determining the optimal number of neurons in the hidden layers based on Akaike's final prediction error; estimating the baseline over the full wavelength range of the sampled spectra; and training a neural network that generalizes the relation between measured and baseline spectra. The main application of this research is computing total radiation with baseline information, allowing the state of the combustion process to be diagnosed for optimization in its early stages.
An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization
NASA Astrophysics Data System (ADS)
Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc
2002-09-01
A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure applied to the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra, with results comparable to, or perhaps better than, manual phase correction. The advantages of this algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).
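A zero-order-only sketch of entropy-based autophasing in this spirit is shown below. The synthetic Lorentzian line, the coarse grid search, and the weight on the negative-signal penalty (which breaks the 0/180 degree ambiguity of the entropy term) are all simplifying assumptions; ACME also optimizes the first-order term and uses a more careful formulation.

```python
# Toy autophasing: pick the zero-order phase that minimizes the entropy of
# the normalized derivative of the real part, plus a penalty on negative
# intensities (pure absorption mode is non-negative and "sharp").
import cmath
import math

X = [i * 0.05 for i in range(-400, 401)]        # frequency axis
IDEAL = [1 / (1 + 1j * x) for x in X]           # absorption + i*dispersion line

def objective(real_part, gamma=1e5):
    d = [abs(real_part[i + 1] - real_part[i]) for i in range(len(real_part) - 1)]
    total = sum(d) or 1e-12
    entropy = -sum((v / total) * math.log(v / total) for v in d if v > 0)
    penalty = sum(min(r, 0.0) ** 2 for r in real_part)
    return entropy + gamma * penalty

def autophase(spectrum, step=5):
    # coarse grid search over the zero-order phase, in degrees
    return min(range(0, 360, step),
               key=lambda phi: objective(
                   [(s * cmath.exp(1j * math.radians(phi))).real
                    for s in spectrum]))

dephased = [s * cmath.exp(-1j * math.radians(40)) for s in IDEAL]
print(autophase(dephased))   # recovers the 40 degree phase error
```

A practical implementation would refine the grid result with a continuous optimizer and fit the zero- and first-order terms jointly, as ACME does.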
Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition
NASA Astrophysics Data System (ADS)
Kim, Jonghwa; André, Elisabeth
This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared with audio-visual emotion channels such as facial expression or speech, physiological signals have so far received little attention in emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to find the most emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion-recognition results.
A manual and an automatic TERS based virus discrimination
NASA Astrophysics Data System (ADS)
Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen
2015-02-01
Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised with a classification accuracy of 91%.
A robust automatic phase correction method for signal dense spectra
NASA Astrophysics Data System (ADS)
Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang
2013-09-01
A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining 'coarse tuning' with 'fine tuning' is introduced to correct various spectra accurately. In the 'coarse tuning' procedure, a new robust baseline recognition method is proposed for determining the positions of the tail ends of the peaks, and the preliminary phased spectra are then obtained by minimizing an objective function based on the height difference of these tail ends. After the 'coarse tuning', the peaks in the preliminarily corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification, a custom negative-peak penalty function for the 'fine tuning' step is constructed so that points belonging to genuine negative and distorted peaks are excluded from the penalty. Finally, the finely phased spectra are obtained by minimizing this penalty function. The method proves very robust: it tolerates low signal-to-noise ratios and large baseline distortion, and is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with overcrowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
A data reduction package for multiple object spectroscopy
NASA Technical Reports Server (NTRS)
Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.
1986-01-01
Experience with fiber-optic spectrometers has demonstrated gains in observing efficiency for clusters of 30 or more objects, gains that must in turn be matched by increased data reduction capability. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to extract the one-dimensional spectra efficiently. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.
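A toy illustration of ridge-finding (the paper's actual algorithm is not reproduced here): collapse the frame along the dispersion axis to find candidate rows, then trace each spectrum column by column, re-centring on the local maximum. Function names and the threshold are assumptions.

```python
import numpy as np

def find_ridges(frame, threshold=0.5):
    """Locate spectrum ridges in a 2D frame (rows = spatial, cols = dispersion).

    Collapse along the dispersion axis, then keep local maxima above a
    fraction of the strongest peak as ridge starting rows.
    """
    profile = frame.sum(axis=1)
    return [r for r in range(1, len(profile) - 1)
            if profile[r] >= profile[r - 1] and profile[r] > profile[r + 1]
            and profile[r] > threshold * profile.max()]

def trace_ridge(frame, start_row, window=2):
    """Follow one ridge column by column, re-centring on the local maximum."""
    rows, r = [], start_row
    for c in range(frame.shape[1]):
        lo, hi = max(0, r - window), min(frame.shape[0], r + window + 1)
        r = lo + int(np.argmax(frame[lo:hi, c]))
        rows.append(r)
    return np.array(rows)
```

Real reduction code would fit sub-pixel centroids and handle crossing or curved ridges; the point here is only the collapse-then-trace structure.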
NASA Astrophysics Data System (ADS)
Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale
2015-11-01
Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for applying UMEL to the discrimination of the spectra of different BA simulants. Through this strategy, it has been possible to discriminate between these BA simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.
NASA Technical Reports Server (NTRS)
Bemra, R. S.; Rastogi, P. K.; Balsley, B. B.
1986-01-01
An analysis of frequency spectra at periods of about 5 days to 5 min from two 20-day sets of velocity measurements in the stratosphere and troposphere region obtained with the Poker Flat mesosphere-stratosphere-troposphere (MST) radar during January and June 1984 is presented. A technique based on median filtering and averaged order statistics for automatic editing, smoothing and spectral analysis of velocity time series contaminated with spurious data points or outliers is outlined. The validity of this technique and its effects on the inferred spectral index were tested through simulation. Spectra obtained with this technique are discussed. The measured spectral indices show variability with season and height, especially across the tropopause. The discussion briefly outlines the need for obtaining better climatologies of velocity spectra and for refinements of existing theories to explain their behavior.
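Median-filter despiking of a contaminated time series, the first step such a technique performs, can be sketched in a few lines of numpy. The window size and the MAD-based threshold are illustrative, not the values used in the study.

```python
import numpy as np

def despike(series, window=5, k=4.0):
    """Replace outliers with the local median.

    A point is flagged when it deviates from the running median by more
    than k times the median absolute deviation (MAD) of the whole series.
    """
    x = np.asarray(series, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode='edge')
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    mad = max(np.median(np.abs(x - np.median(x))), 1e-12)
    return np.where(np.abs(x - med) > k * mad, med, x)
```

Order statistics (median, MAD) are used here instead of mean and standard deviation precisely because spurious points would otherwise inflate the threshold that is supposed to reject them.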
Camerlingo, Carlo; Zenone, Flora; Perna, Giuseppe; Capozzi, Vito; Cirillo, Nicola; Gaeta, Giovanni Maria; Lepore, Maria
2008-06-01
A wavelet multi-component decomposition algorithm has been used for data analysis of micro-Raman spectra of blood serum samples from patients affected by pemphigus vulgaris at different stages. Pemphigus is a chronic, autoimmune, blistering disease of the skin and mucous membranes with a potentially fatal outcome. Spectra were measured by means of a Raman confocal microspectrometer apparatus using the 632.8 nm line of a He-Ne laser source. A discrete wavelet transform decomposition method has been applied to the recorded Raman spectra in order to overcome problems related to low-level signals and the presence of noise and background components due to light scattering and fluorescence. This numerical data treatment can automatically extract quantitative information from the Raman spectra and makes data comparison more reliable. Although an exhaustive investigation has not been performed in this work, the feasibility of follow-up monitoring of pemphigus vulgaris has been clearly demonstrated, with useful implications for clinical applications.
Program Package for the Analysis of High Resolution High Signal-To-Noise Stellar Spectra
NASA Astrophysics Data System (ADS)
Piskunov, N.; Ryabchikova, T.; Pakhomov, Yu.; Sitnova, T.; Alekseeva, S.; Mashonkina, L.; Nordlander, T.
2017-06-01
The program package SME (Spectroscopy Made Easy), designed to perform analysis of stellar spectra using spectral fitting techniques, has been updated with new functions (isotopic and hyperfine splitting) in VALD and with grids of NLTE calculations for the energy levels of a few chemical elements. SME automatically derives stellar atmospheric parameters: effective temperature, surface gravity, chemical abundances, radial and rotational velocities, and turbulent velocities, taking into account all the effects that define spectral line formation. The SME package uses the best available grids of stellar atmospheres, which allows spectral analysis of similar accuracy over a wide range of stellar parameters and metallicities, from dwarfs to giants of B, A, F, G, and K spectral classes.
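At its core, grid-based spectral fitting of this kind reduces to chi-square minimization of an observed spectrum against a grid of model spectra. A toy sketch follows, with a grid keyed by effective temperature only and a parabolic refinement of the minimum; SME itself fits many parameters simultaneously, so every name and the one-parameter setup here are simplifications.

```python
import numpy as np

def chi2(obs, model, err):
    """Chi-square of an observed spectrum against one model spectrum."""
    return float(np.sum(((obs - model) / err) ** 2))

def fit_teff(obs, err, grid):
    """Pick the grid temperature minimizing chi^2, then refine by fitting
    a parabola through the minimum and its two neighbours."""
    teffs = sorted(grid)
    c = np.array([chi2(obs, grid[t], err) for t in teffs])
    i = int(np.argmin(c))
    if 0 < i < len(teffs) - 1:
        t0, t1 = teffs[i - 1], teffs[i]
        denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
        if denom > 0:
            # parabolic interpolation of the chi^2 minimum between grid nodes
            return t1 + 0.5 * (t1 - t0) * (c[i - 1] - c[i + 1]) / denom
    return teffs[i]
```

The parabolic step matters in practice: it recovers parameter values between grid nodes, so the grid spacing does not limit the quoted precision.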
The Gaia FGK benchmark stars. High resolution spectral library
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.
2014-06-01
Context. An increasing number of high-resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise that need to be addressed and minimized. Aims: We provide a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that can arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98
Oßmann, Barbara E; Sarau, George; Schmitt, Sebastian W; Holtmannspötter, Heinrich; Christiansen, Silke H; Dicke, Wilhelm
2017-06-01
When analysing microplastics in food, for toxicological reasons it is important to achieve clear identification of particles down to a size of at least 1 μm. One reliable optical analytical technique allowing this is micro-Raman spectroscopy. After isolation of particles via filtration, analysis is typically performed directly on the filter surface. In order to obtain high-quality Raman spectra, the material of the membrane filters should not show any interference in terms of background and Raman signals during spectrum acquisition. To facilitate the use of automatic particle detection, membrane filters should also show specific optical properties. In this work, besides eight different, commercially available membrane filters, three newly designed metal-coated polycarbonate membrane filters were tested to fulfil these requirements. We found that an aluminium-coated polycarbonate membrane filter had ideal characteristics as a substrate for micro-Raman spectroscopy. Its spectrum shows no or minimal interference with particle spectra, depending on the laser wavelength. Furthermore, automatic particle detection can be applied when analysing the filter surface under dark-field illumination. With this new membrane filter, interference-free analysis of microplastics down to a size of 1 μm becomes possible. Thus, an important size class of these contaminants can now be visualized and spectrally identified. Graphical abstract: A newly developed aluminium-coated polycarbonate membrane filter enables automatic particle detection and generation of high-quality Raman spectra, allowing identification of small microplastics.
NASA Technical Reports Server (NTRS)
Peterson, R. C.; Title, A. M.
1975-01-01
A total reduction procedure, notable for its use of a computer-controlled microdensitometer for semi-automatically tracing curved spectra, is applied to distorted high-dispersion echelle spectra recorded by an image tube. Microdensitometer specifications are presented and the FORTRAN programs TRACEN and SPOTS are outlined. The intensity spectrum of the photographic or electrographic plate is plotted on a graphic display. The time requirements are discussed in detail.
A Method for the Automatic Detection of Insect Clutter in Doppler-Radar Returns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luke,E.; Kollias, P.; Johnson, K.
2006-06-12
The accurate detection and removal of insect clutter from millimeter wavelength cloud radar (MMCR) returns is of high importance to boundary layer cloud research (e.g., Geerts et al., 2005). When only radar Doppler moments are available, it is difficult to produce a reliable screening of insect clutter from cloud returns because their distributions overlap. Hence, screening of MMCR insect clutter has historically involved a laborious manual process of cross-referencing radar moments against measurements from other collocated instruments, such as lidar. Our study looks beyond traditional radar moments to ask whether analysis of recorded Doppler spectra can serve as the basis for reliable, automatic insect clutter screening. We focus on the MMCR operated by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program at its Southern Great Plains (SGP) facility in Oklahoma. Here, archiving of full Doppler spectra began in September 2003, and during the warmer months, a pronounced insect presence regularly introduces clutter into boundary layer returns.
Analysis and automatic identification of sleep stages using higher order spectra.
Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo
2010-12-01
Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature. It is difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages were proposed. These can be used as a visual aid for various diagnostic applications. A number of HOS-based features were extracted from these plots during the various sleep stages (Wakefulness, Rapid Eye Movement (REM), Stages 1-4 Non-REM), and they were found to be statistically significant, with p-values lower than 0.001 using an ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
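The bispectrum underlying such HOS features can be estimated directly from FFTs of signal segments: B(f1, f2) averages X(f1)·X(f2)·conj(X(f1+f2)) over segments, so it is large only where the phases at f1, f2 and f1+f2 are coupled. A minimal sketch, with segment length and window chosen for illustration only:

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Direct (FFT-based) bispectrum estimate of a 1-D signal.

    Returns the lower-frequency quadrant B[f1, f2], averaged over
    non-overlapping Hann-windowed segments of length nfft.
    """
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    f = np.arange(nfft // 2)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(nfft))
        B += X[f][:, None] * X[f][None, :] * np.conj(X[(f[:, None] + f[None, :]) % nfft])
    return B / len(segs)
```

A second-order quantity like the power spectrum would miss this: it discards phase, which is exactly what the bispectrum retains and what makes it sensitive to the nonlinearities described in the abstract.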
NASA Astrophysics Data System (ADS)
Buntine, Wray L.; Kraft, Richard; Whitaker, Kevin; Cooper, Anita E.; Powers, W. T.; Wallace, Tim L.
1993-06-01
Data obtained in the framework of an Optical Plume Anomaly Detection (OPAD) program intended to create a rocket engine health monitor based on spectrometric detections of anomalous atomic and molecular species in the exhaust plume are analyzed. The major results include techniques for handling data noise, methods for registration of spectra to wavelength, and a simple automatic process for estimating the metallic component of a spectrum.
Zhou, Xuan; Chen, Cen; Ye, Xiaolan; Song, Fenyun; Fan, Guorong; Wu, Fuhai
2016-04-01
In this study, a method coupling turbulent flow chromatography with online solid-phase extraction and high-performance liquid chromatography with tandem mass spectrometry was developed for analyzing the lignans in Magnoliae Flos. By the online pretreatment of turbulent flow chromatography solid-phase extraction, impurity removal and analyte concentration were performed automatically, and the lignans were separated rapidly and well. Seven lignans of Magnoliae Flos, including epieudesmin, magnolin, 1-irioresinol-B-dimethyl ether, epi-magnolin, fargesin, aschantin, and demethoxyaschantin, were identified by comparing their retention behavior, UV spectra, and mass spectra with those of reference substances or literature data. The developed method was validated, and the results showed that the method was not only automatic and rapid, but also accurate and reliable. The turbulent flow chromatography with online solid-phase extraction and high-performance liquid chromatography with tandem mass spectrometry method holds a high potential to become an effective method for the quality control of lignans in Magnoliae Flos and a useful tool for the analysis of other complex mixtures. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic Processing of Impact Ionization Mass Spectra Obtained by Cassini CDA
NASA Astrophysics Data System (ADS)
Villeneuve, M.
2015-12-01
Since Cassini's arrival at Saturn in 2004, the Cosmic Dust Analyzer (CDA) has recorded nearly 200,000 mass spectra of dust particles. A majority of this data has been collected in Saturn's diffuse E ring, where sodium salts embedded in water ice particles indicate that many particles are in fact frozen droplets from Enceladus' subsurface ocean that have been expelled from cracks in the icy crust. So far only a small fraction of the obtained spectra have been processed because the steps in processing the spectra require human manipulation. We developed an automatic processing pipeline for CDA mass spectra which will consistently analyze this data. The preprocessing steps are to de-noise the spectra, determine and remove the baseline, calculate the correct stretch parameter, and finally to identify elements and compounds in the spectra. With the E ring constantly evolving due to embedded active moons, this data will provide valuable information about the source of the E ring, the subsurface of Saturn's icy moon Enceladus, as well as about the dynamics of the ring itself.
Automatic quality control in clinical (1)H MRSI of brain cancer.
Pedrosa de Barros, Nuno; McKinley, Richard; Knecht, Urspeter; Wiest, Roland; Slotboom, Johannes
2016-05-01
MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of (1)H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. 
The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
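The kinds of features the abstract names (frequency-domain skewness and kurtosis, time-domain SNR) are cheap to compute; a numpy sketch follows. The noise band is a hypothetical choice, and in the paper these features feed a random forest classifier rather than being thresholded directly.

```python
import numpy as np

def quality_features(spectrum, noise_band=slice(0, 50)):
    """Compute shape features of the kind used for spectral quality rating.

    Skewness and kurtosis describe the intensity distribution; SNR compares
    the strongest point with noise estimated from a signal-free band.
    """
    s = np.asarray(spectrum, dtype=float)
    z = (s - s.mean()) / s.std()
    return {
        "skewness": float(np.mean(z ** 3)),
        "kurtosis": float(np.mean(z ** 4) - 3.0),   # excess kurtosis
        "snr": float(s.max() / s[noise_band].std()),
    }
```

A clean spectrum (narrow peaks over flat baseline) is strongly leptokurtic with high SNR, while a ruined one looks Gaussian with SNR near the noise floor, which is why these features separate the two classes well.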
A spectroscopic approach to monitor the cut processing in pulsed laser osteotomy.
Henn, Konrad; Gubaidullin, Gail G; Bongartz, Jens; Wahrburg, Jürgen; Roth, Hubert; Kunkel, Martin
2013-01-01
During laser osteotomy surgery, plasma arises at the place of ablation. The aim of this study was to explore whether a spectroscopic analysis of this plasma would allow identification of the type of tissue affected by the laser. In an experimental setup (Rofin SCx10, CO(2) Slab Laser, wavelength 10.6 μm, pulse duration 80 μs, pulse repetition rate 200 Hz, max. output in cw-mode 100 W), the plasma spectra evoked by a pulsed laser cutting 1-day postmortem pig and cow bones were recorded. Spectra were compared to the reference spectrum of bone via correlation analysis. Our measurements show a clear differentiation between the plasma spectra when cutting either bone or soft tissue. The spectral changes could be detected from one spectrum to the next, within 200 ms. Continuous surveillance of the plasma spectra allows us to differentiate whether bone or soft tissue is hit by the last laser pulse. With this information, it may be possible to stop the laser when cutting undesired soft tissue and to design an automatic control of the ablation process.
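Correlation-based tissue discrimination of this sort is straightforward to sketch: correlate each incoming plasma spectrum against the bone reference and decide on the coefficient. The 0.9 threshold and the function name are hypothetical, not values from the study.

```python
import numpy as np

def tissue_match(spectrum, reference, threshold=0.9):
    """Correlate a plasma spectrum against a bone reference spectrum.

    Returns the Pearson correlation coefficient and a bone/soft-tissue
    verdict based on a hypothetical threshold.
    """
    r = float(np.corrcoef(spectrum, reference)[0, 1])
    return r, r >= threshold
```

In a control loop this check would run once per acquired spectrum, so with the 200 ms acquisition cadence quoted above the laser could be gated within a few pulses of hitting soft tissue.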
Component spectra extraction from terahertz measurements of unknown mixtures.
Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X
2015-10-20
The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise of the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematic model and builds the component spectra automatically by recombination of the resolved peaks through correlation analysis. Meanwhile, modifications on the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. In order to validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acids mixtures were tested. In each test, not only the number of pure components could be correctly predicted but also the identified pure spectra had a good similarity with the true spectra. Moreover, the proposed method associated the molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports the solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical or chemical mechanisms of unknown complex systems by terahertz spectroscopy.
FAMA: An automatic code for stellar parameter and abundance determination
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2013-10-01
Context. The large number of spectra obtained during the current epoch of extensive spectroscopic surveys of Galactic stars requires the development of automatic procedures to derive atmospheric parameters and individual element abundances. Aims: Starting from the widely used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented, describing its approach to deriving atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependency on the initial parameters, and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38
The AMBRE Project: Stellar parameterisation of the ESO:FEROS archived spectra
NASA Astrophysics Data System (ADS)
Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.; Ordenovic, C.
2012-06-01
Context. The AMBRE Project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Côte d'Azur (OCA) that has been established in order to carry out the determination of stellar atmospheric parameters for the archived spectra of four ESO spectrographs. Aims: The analysis of the FEROS archived spectra for their stellar parameters (effective temperatures, surface gravities, global metallicities, alpha element to iron ratios and radial velocities) has been completed in the first phase of the AMBRE Project. From the complete ESO:FEROS archive dataset that was received, a total of 21 551 scientific spectra have been identified, covering the period 2005 to 2010. These spectra correspond to 6285 stars. Methods: The determination of the stellar parameters was carried out using the stellar parameterisation algorithm, MATISSE (MATrix Inversion for Spectral SynthEsis), which has been developed at OCA to be used in the analysis of large scale spectroscopic studies in galactic archaeology. An analysis pipeline has been constructed that integrates spectral normalisation, cleaning and radial velocity correction procedures in order that the FEROS spectra could be analysed automatically with MATISSE to obtain the stellar parameters. The synthetic grid against which the MATISSE analysis is carried out is currently constrained to parameters of FGKM stars only. Results: Stellar atmospheric parameters, effective temperature, surface gravity, metallicity and alpha element abundances, were determined for 6508 (30.2%) of the FEROS archived spectra (~3087 stars). Radial velocities were determined for 11 963 (56%) of the archived spectra. 2370 (11%) spectra could not be analysed within the pipeline due to very low signal-to-noise ratios or missing spectral orders. 12 673 spectra (58.8%) were analysed in the pipeline but their parameters were discarded based on quality criteria and error analysis determined within the automated process. 
The majority of these rejected spectra were found to have broad spectral features, as probed both by the direct measurement of the features and cross-correlation function breadths, indicating that they may be hot and/or fast rotating stars, which are not considered within the adopted reference synthetic spectra grid. The current configuration of the synthetic spectra grid is devoted to slow-rotating FGKM stars. Hence non-standard spectra (binaries, chemically peculiar stars etc.) that could not be identified may pollute the analysis.
PRISM software—Processing and review interface for strong-motion data
Jones, Jeanne M.; Kalkan, Erol; Stephens, Christopher D.; Ng, Peter
2017-11-28
Rapidly available and accurate ground-motion acceleration time series (seismic recordings) and derived data products are essential to quickly providing scientific and engineering analysis and advice after an earthquake. To meet this need, the U.S. Geological Survey National Strong Motion Project has developed a software package called PRISM (Processing and Review Interface for Strong-Motion data). PRISM automatically processes strong-motion acceleration records, producing compatible acceleration, velocity, and displacement time series; acceleration, velocity, and displacement response spectra; Fourier amplitude spectra; and standard earthquake-intensity measures. PRISM is intended to be used by strong-motion seismic networks, as well as by earthquake engineers and seismologists.
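At the core of such processing is double integration of the acceleration record to velocity and displacement. A minimal numpy sketch using cumulative trapezoidal integration follows; real pipelines like PRISM also apply baseline correction and filtering, which are omitted here.

```python
import numpy as np

def integrate_record(acc, dt):
    """Cumulative trapezoidal integration: acceleration -> velocity -> displacement.

    acc : acceleration samples (m/s^2), dt : sampling interval (s).
    Assumes the record starts from rest (zero initial velocity/displacement).
    """
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) * dt / 2.0)))
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * dt / 2.0)))
    return vel, disp
```

Without the baseline correction step, small offsets in the raw acceleration integrate into linear velocity drift and quadratic displacement drift, which is why compatible time-series production is harder than this sketch suggests.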
NASA Astrophysics Data System (ADS)
Vasilevsky, A. M.; Konoplev, G. A.; Stepanova, O. S.; Toropov, D. K.; Zagorsky, A. L.
2016-04-01
A novel direct spectrophotometric method for quantitative determination of the Oxiphore® drug substance (a synthetic polyhydroquinone complex) in food supplements is developed. Absorption spectra of Oxiphore® water solutions in the ultraviolet region are presented. Sample preparation procedures and mathematical methods for post-acquisition processing of the spectra are discussed. Basic characteristics of the automatic CCD-based UV spectrophotometer and the special software implementing the developed method are described. The results of trials of the developed method and software are analyzed: the error of determination of the Oxiphore® concentration in water solutions of the isolated substance and single-component food supplements did not exceed 15% (the average error was 7-10%).
The AMBRE project: Parameterisation of FGK-type stars from the ESO:HARPS archived spectra
NASA Astrophysics Data System (ADS)
De Pascale, M.; Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.
2014-10-01
Context. The AMBRE project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Côte d'Azur (OCA). It has been established to determine the stellar atmospheric parameters of the archived spectra of four ESO spectrographs. Aims: The analysis of the ESO:HARPS archived spectra for the determination of their atmospheric parameters (effective temperature, surface gravity, global metallicities, and abundance of α-elements over iron) is presented. The sample being analysed (AMBRE:HARPS) covers the period from 2003 to 2010 and comprises 126 688 scientific spectra corresponding to ~17 218 different stars. Methods: For the analysis of the AMBRE:HARPS spectral sample, the automated pipeline developed for the analysis of the AMBRE:FEROS archived spectra has been adapted to the characteristics of the HARPS spectra. Within the pipeline, the stellar parameters are determined by the MATISSE algorithm, which has been developed at OCA for the analysis of large samples of stellar spectra in the framework of galactic archaeology. In the present application, MATISSE uses the AMBRE grid of synthetic spectra, which covers FGKM-type stars for a range of gravities and metallicities. Results: We first determined the radial velocity and its associated error for the ~15% of the AMBRE:HARPS spectra for which this velocity had not been derived by the ESO:HARPS reduction pipeline. The stellar atmospheric parameters and the associated chemical index [α/Fe], with their associated errors, have then been estimated for all the spectra of the AMBRE:HARPS archived sample. Based on key quality criteria, we accepted and delivered the parameterisation of 93 116 (74% of the total sample) spectra to ESO. These spectra correspond to ~10 706 stars, each observed between one and several hundred times.
This automatic parameterisation of the AMBRE:HARPS spectra shows that the large majority of these stars are cool main-sequence dwarfs with metallicities greater than -0.5 dex (as expected, given that HARPS has been extensively used for planet searches around GK-stars).
NASA Astrophysics Data System (ADS)
Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca
2015-10-01
With the advent of modern multidetector heterodyne instruments, whose observations can generate thousands of spectra per minute, it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment, with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
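The automatic baseline detection described above can be illustrated with a minimal sketch: flag channels whose local scatter is consistent with the noise level, then fit and subtract a low-order polynomial from those channels only. This is an illustrative reconstruction, not the ORAC-DR implementation; the window size, threshold `k`, and polynomial order are assumptions.

```python
import numpy as np

def detect_baseline_channels(spectrum, window=16, k=3.0):
    """Flag channels whose local scatter is consistent with pure noise."""
    # Robust noise estimate from first differences (insensitive to slow
    # baselines and to the spectral lines themselves).
    d = np.diff(spectrum)
    sigma = 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)
    mask = np.zeros(spectrum.size, dtype=bool)
    for i in range(spectrum.size):
        lo, hi = max(0, i - window), min(spectrum.size, i + window + 1)
        # Line regions show excess local scatter; baseline regions do not.
        mask[i] = spectrum[lo:hi].std() < k * sigma
    return mask

def subtract_baseline(spectrum, mask, order=1):
    """Fit a low-order polynomial to the baseline channels and subtract it."""
    x = np.arange(spectrum.size)
    coeffs = np.polyfit(x[mask], spectrum[mask], order)
    return spectrum - np.polyval(coeffs, x)
```

On a synthetic spectrum with a sloped baseline and a single emission line, the mask excludes the line region and the fit removes the slope.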
NASA Astrophysics Data System (ADS)
Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.
2009-12-01
This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
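The GMM-based detection above can be sketched with class-conditional Gaussians. A single diagonal-covariance component per class stands in for the multi-component GMMs used in the study, and the feature vectors here are hypothetical stand-ins for the speech-spectrum features.

```python
import numpy as np

def fit_gaussian(features):
    """Fit one diagonal-covariance Gaussian per class (a 1-component GMM)."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # variance floor avoids division by zero
    return mu, var

def log_likelihood(x, mu, var):
    """Log density of a diagonal Gaussian, summed over feature dimensions."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def detect_apnoea(x, healthy_model, apnoea_model):
    """Likelihood-ratio decision: True where the apnoea model fits better."""
    return log_likelihood(x, *apnoea_model) > log_likelihood(x, *healthy_model)
```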
Hu, Kaifeng; Ellinger, James J.; Chylla, Roger A.; Markley, John L.
2011-01-01
Time-zero 2D 13C HSQC (HSQC0) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC0 spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero 1H-13C HSQC0 in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant time mode. Semi-automatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semi-automated gsHSQC0 with those obtained by the original manual phase-cycled HSQC0 approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture. PMID:22029275
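The time-zero extrapolation works because, under a constant per-increment attenuation, the peak intensity after n repetitions of the HSQC block is I_n = I_0 · f^n, so a straight-line fit of ln(I_n) against n recovers ln(I_0) as the intercept. A minimal sketch, assuming this exponential model (the FMLR peak-volume extraction is not reproduced):

```python
import numpy as np

def hsqc0_extrapolate(intensities):
    """Extrapolate HSQC1, HSQC2, ... peak intensities back to time zero.

    Assumes a constant attenuation per repetition, I_n = I_0 * f**n, so
    ln(I_n) is linear in n and exp(intercept) estimates I_0.
    """
    n = np.arange(1, len(intensities) + 1)
    slope, intercept = np.polyfit(n, np.log(intensities), 1)
    return np.exp(intercept)
```

Dividing the extrapolated I_0 of an analyte by that of an internal reference of known concentration then yields the absolute concentration, as in the two-reference scheme described above.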
Proteomic Prediction of Breast Cancer Risk: A Cohort Study
2007-03-01
(c) Data processing. Data analysis was performed using in-house software (Du P, Angeletti RH. Automatic deconvolution of isotope-resolved mass spectra using variable selection and quantized peptide mass distribution. Anal Chem, 78:3385-92, 2006; P Du, R Sudha, MB...). Reportable outcomes: so far our publications have been on the development of algorithms for signal processing: 1. Du P, Angeletti RH
Automatic differential analysis of NMR experiments in complex samples.
Margueritte, Laure; Markov, Petar; Chiron, Lionel; Starck, Jean-Philippe; Vonthron-Sénécheau, Catherine; Bourjot, Mélanie; Delsuc, Marc-André
2018-06-01
Liquid state nuclear magnetic resonance (NMR) is a powerful tool for the analysis of complex mixtures of unknown molecules. This capacity has been used in many analytical approaches: metabolomics, identification of active compounds in natural extracts, and characterization of species. Such studies require the acquisition of many diverse NMR measurements on series of samples. Although acquisition can easily be performed automatically, the number of NMR experiments involved in these studies increases very rapidly, and this data avalanche makes automatic processing and analysis indispensable. We present here a program that allows the autonomous, unsupervised processing of a large corpus of 1D, 2D, and diffusion-ordered spectroscopy experiments from a series of samples acquired under different conditions. The program provides all the signal processing steps, as well as peak-picking and bucketing of 1D and 2D spectra; the program and its components are fully available. In an experiment mimicking the search for a bioactive species in a natural extract, we use it for the automatic detection of small amounts of artemisinin added to a series of plant extracts and for the generation of the spectral fingerprint of this molecule. This program, called Plasmodesma, is a novel tool that should be useful for deciphering complex mixtures, particularly in the discovery of biologically active natural products from plant extracts, but also in drug discovery or metabolomics studies. Copyright © 2017 John Wiley & Sons, Ltd.
Mayerhöfer, Thomas G; Pahlow, Susanne; Hübner, Uwe; Popp, Jürgen
2018-06-25
A hybrid formalism combining elements of Kramers-Kronig based analyses and dispersion analysis was developed, which allows interference-based effects to be removed from the infrared spectra of layers on highly reflecting substrates. To make the application as convenient as possible, the correction procedure is fully automated and usually requires less than a minute with non-optimized software on a typical office PC. The formalism was tested with both synthetic and experimental spectra of poly(methyl methacrylate) on gold. The results confirmed the usefulness of the formalism: apparent peak ratios as well as the interference fringes in the original spectra were successfully corrected. Accordingly, the introduced formalism makes it possible to use inexpensive and robust highly reflecting substrates for routine infrared spectroscopic investigations of layers or films whose thickness is limited by the requirement that the reflectance absorbance remain below about 1. For thicker films the formalism is still useful, but requires estimates of the optical constants.
COSMOS: Carnegie Observatories System for MultiObject Spectroscopy
NASA Astrophysics Data System (ADS)
Oemler, A.; Clardy, K.; Kelson, D.; Walth, G.; Villanueva, E.
2017-05-01
COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line-search procedure fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, so that large amounts of data can be reduced with minimal intervention.
AMBULATORY BLOOD PRESSURE MONITORING: THE NEED OF 7-DAY RECORD
HALBERG, F.; KATINAS, G.; CORNÉLISSEN, G.; SCHWARTZKOPFF, O.; FIŠER, B.; SIEGELOVÁ, J.; DUŠEK, J.; JANČÍK, J.
2008-01-01
The need for systematic around-the-clock self-measurements of blood pressure (BP) and heart rate (HR), or preferably for automatic monitoring as the need arises and can be met by inexpensive tools, is illustrated in two case reports. Miniaturized unobtrusive, as yet unavailable instrumentation for the automatic measurement of BP and HR should be a high priority for both government and industry. Automatic ambulatorily functioning monitors already represent great progress, enabling us to introduce the concept of eventually continuous or, as yet, intermittent home ABPM. On BP and HR records, gliding spectra aligned with global spectra visualize the changing dynamics involved in health and disease, and can be part of an eventually automated system of therapy adjusted to the ever-present variability of BP. In the interim, with tools already available, chronomics on self- or automatic measurements can be considered, with analyses provided by the Halberg Chronobiology Center, as an alternative to “flying blind”, as an editor put it. Chronomics assessing variability has to be considered. PMID:19018289
Burnett, Andrew D; Fan, Wenhui; Upadhya, Prashanth C; Cunningham, John E; Hargreaves, Michael D; Munshi, Tasnim; Edwards, Howell G M; Linfield, Edmund H; Davies, A Giles
2009-08-01
Terahertz frequency time-domain spectroscopy has been used to analyse a wide range of samples containing cocaine hydrochloride, heroin and ecstasy, all common drugs of abuse. We investigated real-world samples seized by law enforcement agencies, together with pure drugs of abuse, and pure drugs of abuse systematically adulterated in the laboratory to emulate real-world samples. In order to investigate the feasibility of automatic spectral recognition of such illicit materials by terahertz spectroscopy, principal component analysis was employed to cluster spectra of similar compounds.
2013-01-01
In this work, we report a method to acquire and analyze hyperspectral coherent anti-Stokes Raman scattering (CARS) microscopy images of organic materials and biological samples resulting in an unbiased quantitative chemical analysis. The method employs singular value decomposition on the square root of the CARS intensity, providing an automatic determination of the components above noise, which are retained. Complex CARS susceptibility spectra, which are linear in the chemical composition, are retrieved from the CARS intensity spectra using the causality of the susceptibility by two methods, and their performance is evaluated by comparison with Raman spectra. We use non-negative matrix factorization applied to the imaginary part and the nonresonant real part of the susceptibility with an additional concentration constraint to obtain absolute susceptibility spectra of independently varying chemical components and their absolute concentration. We demonstrate the ability of the method to provide quantitative chemical analysis on known lipid mixtures. We then show the relevance of the method by imaging lipid-rich stem-cell-derived mouse adipocytes as well as differentiated embryonic stem cells with a low density of lipids. We retrieve and visualize the most significant chemical components with spectra given by water, lipid, and proteins segmenting the image into the cell surrounding, lipid droplets, cytosol, and the nucleus, and we reveal the chemical structure of the cells, with details visualized by the projection of the chemical contrast into a few relevant channels. PMID:24099603
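The first step described above, singular value decomposition on the square root of the CARS intensity with automatic retention of the components above noise, can be sketched as follows. The 3×median threshold on singular values is an assumption for illustration, not the authors' criterion.

```python
import numpy as np

def svd_denoise_sqrt(intensity, thresh=3.0):
    """Rank-truncate the square root of a (pixels x wavenumbers) CARS
    intensity matrix, keeping singular components above the noise floor."""
    a = np.sqrt(intensity)
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    # Assumed criterion: singular values well above the median are signal;
    # the bulk of small singular values represents noise.
    n_keep = int(np.sum(s > thresh * np.median(s)))
    approx = (u[:, :n_keep] * s[:n_keep]) @ vt[:n_keep]
    return approx ** 2, n_keep
```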
VizieR Online Data Catalog: Hubble Legacy Archive ACS grism data (Kuemmel+, 2011)
NASA Astrophysics Data System (ADS)
Kuemmel, M.; Rosati, P.; Fosbury, R.; Haase, J.; Hook, R. N.; Kuntschner, H.; Lombardi, M.; Micol, A.; Nilsson, K. K.; Stoehr, F.; Walsh, J. R.
2011-09-01
A public release of slitless spectra, obtained with ACS/WFC and the G800L grism, is presented. Spectra were automatically extracted in a uniform way from 153 archival fields (or "associations") distributed across the two Galactic caps, covering all observations to 2008. The ACS G800L grism provides a wavelength range of 0.55-1.00um, with a dispersion of 40Å/pixel and a resolution of ~80Å for point-like sources. The ACS G800L images and matched direct images were reduced with an automatic pipeline that handles all steps from archive retrieval, alignment and astrometric calibration, direct image combination, catalogue generation, spectral extraction and collection of metadata. The large number of extracted spectra (73,581) demanded automatic methods for quality control and an automated classification algorithm was trained on the visual inspection of several thousand spectra. The final sample of quality controlled spectra includes 47919 datasets (65% of the total number of extracted spectra) for 32149 unique objects, with a median iAB-band magnitude of 23.7, reaching 26.5 AB for the faintest objects. Each released dataset contains science-ready 1D and 2D spectra, as well as multi-band image cutouts of corresponding sources and a useful preview page summarising the direct and slitless data, astrometric and photometric parameters. This release is part of the continuing effort to enhance the content of the Hubble Legacy Archive (HLA) with highly processed data products which significantly facilitate the scientific exploitation of the Hubble data. In order to characterize the slitless spectra, emission-line flux and equivalent width sensitivity of the ACS data were compared with public ground-based spectra in the GOODS-South field. An example list of emission line galaxies with two or more identified lines is also included, covering the redshift range 0.2-4.6. Almost all redshift determinations outside of the GOODS fields are new. 
The scope of science projects possible with the ACS slitless release data is large, from studies of Galactic stars to searches for high redshift galaxies. (3 data files).
The Hubble Legacy Archive ACS grism data
NASA Astrophysics Data System (ADS)
Kümmel, M.; Rosati, P.; Fosbury, R.; Haase, J.; Hook, R. N.; Kuntschner, H.; Lombardi, M.; Micol, A.; Nilsson, K. K.; Stoehr, F.; Walsh, J. R.
2011-06-01
A public release of slitless spectra, obtained with ACS/WFC and the G800L grism, is presented. Spectra were automatically extracted in a uniform way from 153 archival fields (or "associations") distributed across the two Galactic caps, covering all observations to 2008. The ACS G800L grism provides a wavelength range of 0.55-1.00 μm, with a dispersion of 40 Å/pixel and a resolution of ~80 Å for point-like sources. The ACS G800L images and matched direct images were reduced with an automatic pipeline that handles all steps from archive retrieval, alignment and astrometric calibration, direct image combination, catalogue generation, spectral extraction and collection of metadata. The large number of extracted spectra (73,581) demanded automatic methods for quality control and an automated classification algorithm was trained on the visual inspection of several thousand spectra. The final sample of quality controlled spectra includes 47 919 datasets (65% of the total number of extracted spectra) for 32 149 unique objects, with a median iAB-band magnitude of 23.7, reaching 26.5 AB for the faintest objects. Each released dataset contains science-ready 1D and 2D spectra, as well as multi-band image cutouts of corresponding sources and a useful preview page summarising the direct and slitless data, astrometric and photometric parameters. This release is part of the continuing effort to enhance the content of the Hubble Legacy Archive (HLA) with highly processed data products which significantly facilitate the scientific exploitation of the Hubble data. In order to characterize the slitless spectra, emission-line flux and equivalent width sensitivity of the ACS data were compared with public ground-based spectra in the GOODS-South field. An example list of emission line galaxies with two or more identified lines is also included, covering the redshift range 0.2 - 4.6. Almost all redshift determinations outside of the GOODS fields are new. 
The scope of science projects possible with the ACS slitless release data is large, from studies of Galactic stars to searches for high redshift galaxies.
VACTIV: A graphical dialog based program for an automatic processing of line and band spectra
NASA Astrophysics Data System (ADS)
Zlokazov, V. B.
2013-05-01
The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical dialog based Windows XX application driven by menu, mouse and keyboard. On the one hand, it is a conversion of an existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: either an analytical function or a graphical curve can be used as the peak model; the peak search algorithm recognizes not only Gauss peaks but also peaks of irregular form, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY Program title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: no. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search and estimation of the parameters of interest. VACTIV can run on any standard modern laptop. Reasons for the new version: at the time of its creation (1999), VACTIV was seemingly the first attempt to apply the newest programming languages and styles to systems of spectrum analysis. Its goal was both to provide a convenient and efficient technique for data processing and to elaborate the formalism of spectrum analysis in terms of the classes, properties, methods and events of an object-oriented programming language. Summary of revisions: compared with ACTIV, VACTIV preserves all the mathematical algorithms but provides the user with all the benefits of an interface based on a graphical dialog. It allows quick intervention in the work of the program, in particular on-line control of the fitting process: depending on the intermediate results, and using the visual form of data representation, the user can change the conditions for the fitting and so achieve the optimum performance by selecting the optimum strategy. To find the best conditions for the fitting, one can compress the spectrum, delete blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; the blunder deletion, the peak search, the peak model forming and the calibration can be carried out not only automatically but also manually, by mouse clicks on the spectrum graph. Restrictions: to enhance the reliability and portability of the program, the majority of the most important arrays have a static allocation; all arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer's virtual memory. A spectrum has a static size of 32 K real words.
The maximum size of the least-square matrix is 314 (the maximum number of fitted parameters per one analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum interval is 154. The maximum total number of peaks in the spectrum is not restricted. Running time: The calculation time is negligibly small compared with the time for the dialog; using ini-files the program can be partly used in a semi-dialog mode.
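The role of regularization in stabilising fits of strongly overlapping peaks, mentioned above, can be illustrated in a reduced setting: with peak positions and width held fixed, the amplitudes solve a Tikhonov-regularized linear least-squares problem. VACTIV's actual fit is nonlinear; this sketch only shows how a small regularization term keeps the overlapping-peak solution stable.

```python
import numpy as np

def fit_peak_amplitudes(spectrum, centers, sigma, lam=1e-6):
    """Amplitudes of Gaussian peaks at known centers and width, solved by
    Tikhonov-regularized least squares; lam > 0 stabilises the normal
    equations when peaks overlap strongly."""
    x = np.arange(spectrum.size, dtype=float)
    design = np.exp(-0.5 * ((x[:, None] - np.asarray(centers, float)[None, :]) / sigma) ** 2)
    ata = design.T @ design + lam * np.eye(len(centers))
    return np.linalg.solve(ata, design.T @ spectrum)
```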
Clustering analysis of line indices for LAMOST spectra with AstroStat
NASA Astrophysics Data System (ADS)
Chen, Shu-Xin; Sun, Wei-Min; Yan, Qi
2018-06-01
The application of data mining in astronomical surveys, such as the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) survey, provides an effective approach to automatically analyze a large amount of complex survey data. Unsupervised clustering can help astronomers find associations and outliers in a big data set. In this paper, we employ the k-means method to perform clustering on the line indices of LAMOST spectra with the powerful software AstroStat. Implementing the line index approach for analyzing astronomical spectra is an effective way to extract spectral features from low resolution spectra, and these indices can represent the main spectral characteristics of stars. A total of 144 340 line indices for A-type stars is analyzed by calculating the intra- and inter-class distances between pairs of stars. For the intra distance, we use the definition of Mahalanobis distance to explore the degree of clustering of each class, while for outlier detection, we define a local outlier factor for each spectrum. AstroStat furnishes a set of visualization tools for illustrating the analysis results. Checking the spectra detected as outliers, we find that most of them are problematic data and only a few correspond to rare astronomical objects. We show two examples of these outliers, a spectrum with an abnormal continuum and a spectrum with emission lines. Our work demonstrates that line index clustering is a good method for examining data quality and identifying rare objects.
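The intra-distance measure used above, the Mahalanobis distance of a spectrum's line-index vector from its class, can be sketched directly. The covariance here is estimated from the class members; the local-outlier-factor step is omitted.

```python
import numpy as np

def mahalanobis(cluster, sample):
    """Mahalanobis distance of each row of `sample` from the cluster,
    using the cluster's own mean and covariance."""
    mu = cluster.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(cluster, rowvar=False))
    diff = sample - mu
    # Quadratic form diff^T C^{-1} diff for each row.
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
```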
Neumann, Steffen; Schmitt-Kopplin, Philippe
2017-01-01
Lipid identification is a major bottleneck in high-throughput lipidomics studies. However, tools for the analysis of lipid tandem MS spectra are rather limited. While the comparison against spectra in reference libraries is one of the preferred methods, these libraries are far from being complete. In order to improve identification rates, the in silico fragmentation tool MetFrag was combined with Lipid Maps and lipid-class specific classifiers which calculate probabilities for lipid class assignments. The resulting LipidFrag workflow was trained and evaluated on different commercially available lipid standard materials, measured with data dependent UPLC-Q-ToF-MS/MS acquisition. The automatic analysis was compared against manual MS/MS spectra interpretation. With the lipid class specific models, identification of the true positives was improved especially for cases where candidate lipids from different lipid classes had similar MetFrag scores by removing up to 56% of false positive results. This LipidFrag approach was then applied to MS/MS spectra of lipid extracts of the nematode Caenorhabditis elegans. Fragments explained by LipidFrag match known fragmentation pathways, e.g., neutral losses of lipid headgroups and fatty acid side chain fragments. Based on prediction models trained on standard lipid materials, high probabilities for correct annotations were achieved, which makes LipidFrag a good choice for automated lipid data analysis and reliability testing of lipid identifications. PMID:28278196
NASA Technical Reports Server (NTRS)
Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.
1971-01-01
This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
Ryder, Alan G
2002-03-01
Eighty-five solid samples consisting of illegal narcotics diluted with several different materials were analyzed by near-infrared (785 nm excitation) Raman spectroscopy. Principal Component Analysis (PCA) was employed to classify the samples according to narcotic type. The best sample discrimination was obtained by using the first derivative of the Raman spectra. Furthermore, restricting the spectral variables for PCA to 2 or 3% of the original spectral data according to the most intense peaks in the Raman spectrum of the pure narcotic resulted in a rapid discrimination method for classifying samples according to narcotic type. This method allows for the easy discrimination between cocaine, heroin, and MDMA mixtures even when the Raman spectra are complex or very similar. This approach of restricting the spectral variables also decreases the computational time by a factor of 30 (compared to the complete spectrum), making the methodology attractive for rapid automatic classification and identification of suspect materials.
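The processing chain above, first derivative of the spectra followed by PCA, can be sketched as below. Differentiation removes constant offsets (such as fluorescence background) so that the principal components capture band shapes rather than baseline levels; the two-class example in the test is synthetic and hypothetical.

```python
import numpy as np

def first_derivative(spectra):
    """Finite-difference first derivative along the wavenumber axis;
    removes constant offsets such as fluorescence background."""
    return np.diff(spectra, axis=1)

def pca_scores(data, n_components=3):
    """PCA scores via SVD of the mean-centred (samples x variables) matrix."""
    centred = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    return u[:, :n_components] * s[:n_components]
```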
Self-organizing maps: a versatile tool for the automatic analysis of untargeted imaging datasets.
Franceschi, Pietro; Wehrens, Ron
2014-04-01
MS-based imaging approaches allow for location-specific identification of chemical components in biological samples, opening up possibilities of much more detailed understanding of biological processes and mechanisms. Data analysis, however, is challenging, mainly because of the sheer size of such datasets. This article presents a novel approach based on self-organizing maps, extending previous work in order to be able to handle the large number of variables present in high-resolution mass spectra. The key idea is to generate prototype images, representing spatial distributions of ions, rather than prototypical mass spectra. This allows for a two-stage approach, first generating typical spatial distributions and associated m/z bins, and later analyzing the interesting bins in more detail using accurate masses. The possibilities and advantages of the new approach are illustrated on an in-house dataset of apple slices. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic Recognition of Breathing Route During Sleep Using Snoring Sounds
NASA Astrophysics Data System (ADS)
Mikami, Tsuyoshi; Kojima, Yohichiro
This letter classifies snoring sounds into three breathing routes (oral, nasal, and oronasal) using discriminant analysis of the power spectra and the k-nearest neighbor method. Recognizing the breathing route during snoring is important because oral snoring is a typical symptom of sleep apnea, yet we cannot observe our own breathing and snoring condition during sleep. As a result, a classification rate of about 98.8% is obtained using a leave-one-out test for performance evaluation.
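The k-nearest-neighbour step can be sketched as a majority vote among the closest training episodes in feature space. The three class labels stand for the oral, nasal and oronasal routes, and the feature vectors are hypothetical stand-ins for the power-spectrum features used in the letter.

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training samples to `query`."""
    dist = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```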
Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents an automatic visibility retrieval of a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern located in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected from thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal differences to the surroundings, are used to derive the maximum visibility distance that is detectable in the image. To allow a stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12 working in the NIR spectra), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from European Center for Medium Weather Forecast (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrieval from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and defined target size.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade model predictability in the multivariate analysis of near-infrared (NIR) spectra. Such interference can usually be represented by an additive and a multiplicative factor, and correction parameters for eliminating it need to be estimated from the spectra. However, the spectra often mix physical light-scattering effects with chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range, using the previously obtained parameters, for the calibration set and the test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and performance comparable with other state-of-the-art methods.
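Once an IDR is chosen, the correction itself resembles multiplicative scatter correction restricted to that region: each spectrum is regressed against a reference over the IDR channels to estimate the additive and multiplicative factors, which are then applied across the full range. A sketch under those assumptions (the paper's automatic IDR search is not reproduced, and the mean spectrum serves as the reference):

```python
import numpy as np

def correct_with_region(spectra, idr):
    """Estimate additive/multiplicative factors on the IDR channels against
    the mean spectrum, then correct each spectrum over the full range."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        # Interference model on the IDR: s[idr] ~ mult * ref[idr] + add
        mult, add = np.polyfit(ref[idr], s[idr], 1)
        corrected[i] = (s - add) / mult
    return corrected
```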
Project VeSElkA: abundance analysis of chemical species in HD 41076 and HD 148330
NASA Astrophysics Data System (ADS)
Khalack, V.; Gallant, G.; Thibeault, C.
2017-10-01
A new semi-automatic approach is employed to carry out the abundance analysis of high-resolution spectra of HD 41076 and HD 148330 obtained recently with the spectropolarimeter ESPaDOnS (Echelle SpectroPolarimetric Device for the Observation of Stars) at the Canada-France-Hawaii Telescope. This approach allows one to prepare in a semi-automatic mode the input data for the modified zeeman2 code and to analyse several hundred line profiles in sequence during a single run. It also provides more information on the abundance distribution of each chemical element in the deeper atmospheric layers. Our analysis of the Balmer profiles observed in the spectra of HD 41076 and HD 148330 has yielded estimates of their effective temperature, gravity, metallicity and radial velocity. The respective stellar atmosphere models have been calculated with the code phoenix and used to carry out the abundance analysis with the modified zeeman2 code. The analysis shows a deficit of C, N, F, Mg, Ca, Ti, V, Cu, Y, Mo, Sm and Gd, and an overabundance of Cr, Mn, Fe, Co, Ni, Sr, Zr, Ba, Ce, Nd and Dy in the stellar atmosphere of HD 41076. In the atmosphere of HD 148330, C, N and Mo appear to be underabundant, while Ne, Na, Al, Si, P, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, Sr, Y, Zr, Ba, Ce, Pr, Nd, Sm, Eu, Gd and Dy are overabundant. We have also found signatures of vertical abundance stratification of Fe, Ti, Cr and Mn in HD 41076, and of Fe, Ti, V, Cr, Mn, Y, Zr, Ce, Nd, Sm and Gd in HD 148330.
Automation of peak-tracking analysis of stepwise perturbed NMR spectra.
Banelli, Tommaso; Vuano, Marco; Fogolari, Federico; Fusiello, Andrea; Esposito, Gennaro; Corazza, Alessandra
2017-02-01
We describe a new algorithmic approach able to automatically pick and track the NMR resonances of a large number of 2D NMR spectra acquired during the stepwise variation of a physical parameter. The method is named Trace in Track (TINT), referring to the idea that a Gaussian decomposition traces peaks within the tracks recognised through 3D mathematical morphology. It is capable of determining the evolution of the chemical shift, intensity and linewidth of each tracked peak. The performance obtained in terms of track reconstruction and correct assignment on realistic synthetic spectra remained above 90% when a noise level similar to that of experimental data was considered. TINT was applied successfully to several protein systems during a temperature ramp in isotope-exchange experiments. A comparison with a state-of-the-art algorithm showed promising results for large numbers of spectra and low signal-to-noise ratios, provided the perturbation is sufficiently gradual. TINT can be applied to different kinds of high-throughput chemical shift mapping experiments with quasi-continuous variations, in which quantitative automated recognition is crucial.
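The picking stage alone, without the Gaussian decomposition or the 3D tracking that make up TINT proper, might look like this minimal morphological sketch:

```python
import numpy as np
from scipy import ndimage

def pick_peaks(spectrum2d, threshold):
    """Pick local maxima of a 2D spectrum with a morphological maximum
    filter; a stand-in for TINT's picking stage only (the threshold and
    3x3 neighbourhood are illustrative choices)."""
    is_max = ndimage.maximum_filter(spectrum2d, size=3) == spectrum2d
    return np.argwhere(is_max & (spectrum2d > threshold))
```

In the actual method, the picked peaks would then be linked across the stack of stepwise-perturbed spectra.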
Analysis of helium-ion scattering with a desktop computer
NASA Astrophysics Data System (ADS)
Butler, J. W.
1986-04-01
This paper describes a program, written in an enhanced BASIC language for a desktop computer, that simulates the energy spectra of high-energy helium ions scattered into two concurrent detectors (backward and glancing). The program is designed for 512-channel spectra from samples containing up to 8 elements and 55 user-defined layers. It is intended to meet the needs of analyses in the materials sciences, such as metallurgy, where more than a few elements may be present, where several elements may be near each other in the periodic table, and where relatively deep structure may be important. These conditions preclude completely automatic procedures for obtaining the sample composition directly from the scattered-ion spectrum. Therefore, efficient methods are needed for entering and editing large amounts of composition data, with many iterations and with much feedback of information from the computer to the user. The internal video screen is used exclusively for verbal and numeric communication between user and computer. The composition matrix is edited on screen with a two-dimensional forms-fill-in text editor and with many automatic procedures, such as doubling the number of layers with appropriate interpolations and extrapolations. The control center of the program is a bank of 10 keys that initiate on-event branching of program flow. The experimental and calculated spectra, including those of individual elements if desired, are displayed on an external color monitor, with an optional inset plot of the depth concentration profiles of the elements in the sample.
NASA Astrophysics Data System (ADS)
Husser, Tim-Oliver; Kamann, Sebastian; Dreizler, Stefan; Wendt, Martin; Wulff, Nina; Bacon, Roland; Wisotzki, Lutz; Brinchmann, Jarle; Weilbacher, Peter M.; Roth, Martin M.; Monreal-Ibero, Ana
2016-04-01
Aims: We demonstrate the high multiplex advantage of crowded-field 3D spectroscopy with the new integral field spectrograph MUSE by means of a spectroscopic analysis of more than 12 000 individual stars in the globular cluster NGC 6397. Methods: The stars are deblended with a point spread function fitting technique, using a photometric reference catalogue from HST as a prior, including relative positions and brightnesses. This catalogue is also used for a first analysis of the extracted spectra, followed by an automatic in-depth analysis via a full-spectrum fitting method based on a large grid of PHOENIX spectra. Results: We analysed the largest sample so far available for a single globular cluster: 18 932 spectra from 12 307 stars in NGC 6397. We derived a mean radial velocity of vrad = 17.84 ± 0.07 km s-1 and a mean metallicity of [Fe/H] = -2.120 ± 0.002, with the latter seemingly varying with temperature for stars on the red giant branch (RGB). We determine Teff and [Fe/H] from the spectra, and log g from HST photometry. This is the first very comprehensive Hertzsprung-Russell diagram (HRD) for a globular cluster based on the analysis of several thousand stellar spectra, ranging from the main sequence to the tip of the RGB. Furthermore, two interesting objects were identified; one is a post-AGB star and the other is a possible millisecond-pulsar companion. Data products are available at http://muse-vlt.eu/science. Based on observations obtained at the Very Large Telescope (VLT) of the European Southern Observatory, Paranal, Chile (ESO Programme ID 60.A-9100(C)).
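Full-spectrum fitting against a template grid can be illustrated in miniature as a chi-square search: scale each template to the observation by least squares and keep the best grid point. The templates and parameters below are placeholders, not the actual PHOENIX-based pipeline.

```python
import numpy as np

def best_fit_params(obs, templates, params):
    """Toy full-spectrum fit: least-squares amplitude per template,
    then the grid point with the lowest chi-square wins."""
    chi2 = []
    for t in templates:
        scale = np.dot(obs, t) / np.dot(t, t)   # best-fit amplitude
        chi2.append(np.sum((obs - scale * t) ** 2))
    return params[int(np.argmin(chi2))]
```

A production analysis would additionally interpolate between grid points and weight by the noise spectrum.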
Automatic classification of spectra from the Infrared Astronomical Satellite (IRAS)
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Self, Matthew; Taylor, William; Goebel, John; Volk, Kevin; Walker, Helen
1989-01-01
A new classification of infrared spectra collected by the Infrared Astronomical Satellite (IRAS) is presented. The spectral classes were discovered automatically by a program called AutoClass 2, which discovers (induces) classes from a database using a Bayesian probability approach. These classes can give insight into the patterns that occur in the particular domain, in this case infrared astronomical spectroscopy. The classified spectra are the entire Low Resolution Spectra (LRS) Atlas of 5,425 sources. There are seventy-seven classes in this classification, and these in turn were meta-classified to produce nine meta-classes. The classification is presented as spectral plots, IRAS color-color plots, galactic distribution plots and class commentaries. Cross-reference tables, listing the sources by IRAS name and by AutoClass class, are also given. The classes include some well-known ones, such as the black-body class and the silicate emission classes, but many other classes were unsuspected, while others show important subtle differences within the well-known classes.
Automated systems for the analysis of meteor spectra: The SMART Project
NASA Astrophysics Data System (ADS)
Madiedo, José M.
2017-09-01
This work describes a meteor spectroscopy survey called SMART (Spectroscopy of Meteoroids in the Atmosphere by means of Robotic Technologies), which has been running since 2006. In total, 55 spectrographs have been deployed at 10 different locations in Spain with the aim of obtaining information about the chemical nature of meteoroids ablating in the atmosphere. The main improvements in the hardware and software developed in the framework of this project are described, and some results obtained with these automatic devices are also discussed.
Open-Source Programming for Automated Generation of Graphene Raman Spectral Maps
NASA Astrophysics Data System (ADS)
Vendola, P.; Blades, M.; Pierre, W.; Jedlicka, S.; Rotkin, S. V.
Raman microscopy is a useful tool for studying the structural characteristics of graphene deposited onto substrates. However, extracting useful information from the Raman spectra requires data processing and 2D map generation. An existing home-built confocal Raman microscope was optimized for graphene samples and programmed to automatically generate Raman spectral maps across a specified area. In particular, an open source data collection scheme was generated to allow the efficient collection and analysis of the Raman spectral data for future use. NSF ECCS-1509786.
NASA Astrophysics Data System (ADS)
Weng, Shizhuang; Dong, Ronglu; Zhu, Zede; Zhang, Dongyan; Zhao, Jinling; Huang, Linsheng; Liang, Dong
2018-01-01
Conventional surface-enhanced Raman spectroscopy (SERS) for the fast detection of drugs in urine on a portable Raman spectrometer remains challenging because of low sensitivity, unreliable Raman signals, and spectral processing that requires manual intervention. Here, we develop a novel method for detecting drugs in urine using chemometric methods and dynamic SERS (D-SERS) with mPEG-SH-coated gold nanorods (GNRs). D-SERS combined with the uniform GNRs yields a large enhancement, and the signal is also highly reproducible. On this basis, we obtained the spectra of urine, urine with methamphetamine (MAMP), and urine with 3,4-methylenedioxymethamphetamine (MDMA) using D-SERS. Chemometric methods were then introduced for the intelligent and automatic analysis of the spectra. First, the spectra at the critical state were selected using K-means. Then, the spectra were processed by random forest (RF) with feature selection and by principal component analysis (PCA) to develop the recognition model, and the identification accuracies of the models were 100%, 98.7% and 96.7%, respectively. To further validate the practical performance, urine samples from drug abusers containing 0.4, 3 and 30 ppm MAMP were measured using D-SERS and identified by the classification model. The recognition accuracy of > 92.0% can meet the demands of practical application. Additionally, the parameter optimization of the RF classification model is simple. Compared with standard laboratory methods, detecting urine spectra with D-SERS needs only 2 min and a 2 μL sample volume, and the identification of the spectra by the chemometric methods can be finished in seconds. These results verify that the proposed approach provides accurate, convenient and rapid detection of drugs in urine.
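The recognition step can be sketched with scikit-learn as a PCA-plus-random-forest pipeline. The synthetic "spectra", band positions, and model settings below are all illustrative assumptions; the K-means spectrum selection and RF feature selection of the paper are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
shift = np.linspace(0, 1, 60)           # normalised Raman-shift axis

def fake_spectrum(band_center):
    # one Raman band plus noise; real inputs would be D-SERS spectra
    return np.exp(-(shift - band_center)**2 / 0.002) \
        + 0.05 * rng.standard_normal(shift.size)

X = np.array([fake_spectrum(0.3) for _ in range(20)]
             + [fake_spectrum(0.7) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)       # 0 = blank urine, 1 = with drug

model = make_pipeline(PCA(n_components=5),
                      RandomForestClassifier(random_state=0))
model.fit(X, y)
```

The pipeline object keeps the PCA projection and the forest consistent between training and prediction, which is the property a deployed recognition model needs.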
Allmer, Jens; Kuhlgert, Sebastian; Hippler, Michael
2008-07-07
The amount of information stemming from proteomics experiments involving (multi-dimensional) separation techniques, mass spectrometric analysis, and computational analysis is ever-increasing. Data from such an experimental workflow need to be captured, related and analyzed. Biological experiments within this scope produce heterogeneous data, ranging from images of one- or two-dimensional protein maps and spectra recorded by tandem mass spectrometry to text-based identifications made by algorithms that analyze these spectra. Additionally, peptide and corresponding protein information needs to be displayed. In order to handle the large amount of data from computational processing of mass spectrometric experiments, automatic import scripts are available and the necessity for manual input to the database has been minimized. Information is in a generic format that abstracts from the specific software tools typically used in such an experimental workflow. The software is therefore capable of storing and cross-analysing results from many algorithms. A novel feature and a focus of this database is to facilitate protein identification by using peptides identified from mass spectrometry and linking this information directly to the respective protein maps. Additionally, our application employs spectral counting for quantitative presentation of the data. All information can be linked to hot spots on images to place the results into an experimental context. A summary of identified proteins, containing all relevant information per hot spot, is automatically generated, usually upon either a change in the underlying protein models or newly imported identifications. The supporting information for this report can be accessed in multiple ways using the user interface provided by the application. We present a proteomics database which aims to greatly reduce the evaluation time of results from mass spectrometric experiments and to enhance result quality by allowing consistent data handling.
Import functionality, automatic protein detection, and summary creation act together to facilitate data analysis. In addition, supporting information for these findings is readily accessible via the graphical user interface provided. The database schema and the implementation, which can easily be installed on virtually any server, can be downloaded in the form of a compressed file from our project webpage.
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that automatically recognizes and classifies malignant lymphomas and leukemia is proposed in this paper. The algorithm uses the morphological watershed transform to obtain cell boundaries from cell images and isolate the cells from the surrounding background. The cell areas are extracted from the images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are used to generate class feature vectors for the different cell types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors by computing Euclidean distances, and the cell in question is assigned to the existing cell class at the least Euclidean distance.
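A stripped-down version of the matching scheme, with a crude rotation-based stand-in for the Radon transform and without the HOS step, might look like the following; the angles, image sizes, and class labels are illustrative.

```python
import numpy as np
from scipy import ndimage

def radon_features(image, angles=(0, 45, 90, 135)):
    """Crude Radon transform: project the rotated image onto one axis.
    The paper additionally feeds such projections into higher-order
    spectra (HOS); that step is omitted here."""
    return np.concatenate([
        ndimage.rotate(image.astype(float), a, reshape=False, order=1).sum(axis=0)
        for a in angles
    ])

def classify_cell(image, class_features, labels):
    # minimum-Euclidean-distance match against the known class features
    f = radon_features(image)
    dists = [np.linalg.norm(f - cf) for cf in class_features]
    return labels[int(np.argmin(dists))]
```

The minimum-distance rule is exactly the "least Euclidean distance sense" of the abstract; only the feature extraction is simplified.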
Automatic classification of fluorescence and optical diffusion spectroscopy data in neuro-oncology
NASA Astrophysics Data System (ADS)
Savelieva, T. A.; Loshchenov, V. B.; Goryajnov, S. A.; Potapov, A. A.
2018-04-01
The spectroscopic analysis of biological tissue is complicated by the overlap of the absorption spectra of biological molecules, by multiple scattering, and by the in vivo measurement geometry; this motivates the present work. In neuro-oncology, the problem of delineating tumor boundaries is especially acute and requires new methods of intraoperative diagnosis. Optical spectroscopy allows various diagnostically significant parameters to be detected non-invasively. 5-ALA-induced protoporphyrin IX is frequently used as a fluorescent tumor marker in neuro-oncology. At the same time, the concentration and oxygenation level of haemoglobin and significant changes of light scattering in tumor tissue have high diagnostic value. This paper presents an original method for the simultaneous registration of backward diffuse reflectance and fluorescence spectra, which allows all the parameters listed above to be determined simultaneously. Clinical studies involving 47 patients with intracranial glial tumors of Grades II-IV were carried out at the N.N. Burdenko National Medical Research Center of Neurosurgery. The spectral dependences were recorded with the LESA-01-BIOSPEC spectroscopic system and a specially developed W-shaped diagnostic fiber-optic probe. An original algorithm for combined spectroscopic signal processing was developed. Compared with the methods currently used in neurosurgical practice, the resulting software and hardware increased the sensitivity of intraoperative demarcation of intracranial tumors from 78% to 96% and the specificity from 60% to 82%. An analysis of different automatic classification techniques shows that the most appropriate in our case is the k-nearest-neighbors algorithm with a cubic metric.
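The final classification step can be sketched with scikit-learn, assuming the "cubic metric" refers to the Minkowski distance with p = 3; the two-dimensional feature values (standing in for, e.g., fluorescence and scattering parameters) and the labels are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# synthetic feature vectors: (fluorescence index, scattering index)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],        # normal tissue
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])  # tumor tissue
y = np.array([0, 0, 0, 1, 1, 1])

# k-NN with a Minkowski p=3 ("cubic") metric
knn = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=3)
knn.fit(X, y)
```

In practice the features would come from the combined diffuse-reflectance/fluorescence fit, and the class labels from histology.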
Machine Learning Method for Pattern Recognition in Volcano Seismic Spectra
NASA Astrophysics Data System (ADS)
Radic, V.; Unglert, K.; Jellinek, M.
2016-12-01
Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools that facilitate automated and rapid processing. Techniques such as Self-Organizing Maps (SOM), Principal Component Analysis (PCA) and clustering methods can help to quickly and automatically identify important patterns related to impending eruptions. In this study we develop and evaluate an algorithm on a set of synthetic volcano seismic spectra as well as observed spectra from Kīlauea Volcano, Hawai`i. Our goal is to retrieve a set of known spectral patterns that are associated with dominant phases of volcanic tremor before, during, and after periods of volcanic unrest. The algorithm is based on training a SOM on the spectra and then identifying local maxima and minima on the SOM 'topography'. The topography is derived from the first two PCA modes, so that the maxima represent the SOM patterns that carry most of the variance in the spectra. Patterns identified in this way reproduce the known set of spectra. Our results show that, regardless of the level of white noise in the spectra, the algorithm accurately reproduces the characteristic spectral patterns and their occurrence in time. The ability to rapidly classify spectra of volcano seismic data without prior knowledge of the character of the seismicity at a given volcanic system holds great potential for real-time or near-real-time applications, and thus ultimately for eruption forecasting.
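A minimal SOM of the kind trained here can be written in a few lines of NumPy. The grid size, learning-rate schedule, and neighbourhood width are illustrative choices, and the subsequent PCA-based "topography" step is omitted.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=100, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: grid nodes hold prototype spectra
    that are pulled toward the data inside a shrinking Gaussian
    neighbourhood. A sketch only, not the study's implementation."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = 0.1 * rng.standard_normal((h, w, data.shape[1]))
    ii, jj = np.mgrid[0:h, 0:w]
    for t in range(epochs):
        frac = 1.0 - t / epochs
        lr, sigma = lr0 * frac, sigma0 * frac + 0.3
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(nodes - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            g = np.exp(-((ii - bi)**2 + (jj - bj)**2) / (2 * sigma**2))
            nodes += lr * g[..., None] * (x - nodes)
    return nodes
```

After training, the node prototypes approximate the dominant spectral patterns in the data, which is what the study's maxima-on-the-SOM-topography step then extracts.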
Speeding up the screening of steroids in urine: development of a user-friendly library.
Galesio, M; López-Fdez, H; Reboiro-Jato, M; Gómez-Meire, Silvana; Glez-Peña, D; Fdez-Riverola, F; Lodeiro, Carlos; Diniz, M E; Capelo, J L
2013-12-11
This work presents a novel database search engine, MLibrary, designed to assist the user in the detection and identification of androgenic anabolic steroids (AAS) and their metabolites by matrix-assisted laser desorption/ionization (MALDI) and mass spectrometry-based strategies. The AAS are detected by (i) searching the mass spectrometric (MS) spectra against the library to identify possible positives and (ii) comparing the tandem mass spectrometric (MS/MS) spectra produced after fragmentation of the possible positives with a complete set of reference spectra previously assigned to the software. Urinary screening for anabolic agents plays a major role in anti-doping laboratories, as these agents represent the most abused drug class in sports. With the help of the MLibrary software application, the use of MALDI techniques for doping control is simplified and the time needed to evaluate and interpret the results is reduced. The search engine takes as input several MALDI-TOF-MS and MALDI-TOF-MS/MS spectra. It aids the researcher by automatically identifying possible positives in a single MS analysis and then confirming their presence in tandem MS analysis by comparing the experimental tandem mass spectrometric data with the database. Furthermore, the search engine can potentially be expanded to compounds other than AAS. The applicability of the MLibrary tool is shown through the analysis of spiked urine samples. Copyright © 2013 Elsevier Inc. All rights reserved.
The AMBRE Project: Stellar parameterisation of the ESO:UVES archived spectra
NASA Astrophysics Data System (ADS)
Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.
2016-06-01
Context. The AMBRE Project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Côte d'Azur (OCA) established to determine the stellar atmospheric parameters for the archived spectra of four ESO spectrographs. Aims: The analysis of the UVES archived spectra for their stellar parameters was completed in the third phase of the AMBRE Project. From the complete ESO:UVES archive dataset received, covering the period 2000 to 2010, 51 921 spectra for the six standard setups were analysed. These correspond to approximately 8014 distinct targets (comprising stellar and non-stellar objects) by radial coordinate search. Methods: The AMBRE analysis pipeline integrates spectral normalisation, cleaning and radial-velocity correction procedures so that the UVES spectra can be analysed automatically with the stellar parameterisation algorithm MATISSE to obtain the stellar atmospheric parameters. The synthetic grid against which the MATISSE analysis is carried out is currently constrained to the parameters of FGKM stars only. Results: Stellar atmospheric parameters are reported for 12 403 of the 51 921 UVES archived spectra analysed in AMBRE:UVES. This equates to ~23.9% of the sample and ~3708 stars. Effective temperature, surface gravity, metallicity, and alpha-element-to-iron ratio abundances are provided for 10 212 spectra (~19.7%), while at least the effective temperature is provided for the remaining 2191 spectra. Radial velocities are reported for 36 881 (~71.0%) of the analysed archive spectra. While parameters were determined for 32 306 (62.2%) spectra, they were not considered reliable (and thus not reported to ESO) for reasons such as very low S/N, poor radial-velocity determination, spectral features too broad for analysis, and technical issues in the reduction.
Similarly, the parameters of a further 7212 spectra (13.9%) were not reported to ESO based on quality criteria and error analyses carried out within the automated parameterisation process. These tests lead us to expect that multi-component stellar systems will return large errors in radial velocity and in the fit to the synthetic spectra, and will therefore not have parameters reported to ESO. Typical external errors of σTeff ~ 110 K, σlog g ~ 0.18 dex, σ[M/H] ~ 0.13 dex, and σ[α/Fe] ~ 0.05 dex, with some variation between giants and dwarfs and between setups, are reported. Conclusions: UVES is used to observe an extensive collection of stellar and non-stellar objects, all of which have been included in the archived dataset provided to OCA by ESO. The AMBRE analysis extracts those objects that lie within the FGKM parameter space of the AMBRE slow-rotating synthetic spectra grid. Thus, by homogeneous blind analysis, AMBRE has successfully extracted and parameterised the targeted FGK stars (23.9% of the analysed sample) from within the ESO:UVES archive.
[A wavelet-transform-based method for the automatic detection of late-type stars].
Liu, Zhong-tian; Zhao, Rui-zhen; Zhao, Yong-heng; Wu, Fu-chao
2005-07-01
The LAMOST project, the world's largest sky-survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for the automatic detection of late-type stars have been reported in the literature up to now. The present work explores possible ways to deal with this issue. Here, by "late-type stars" we mean stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transform coefficients on the 5th scale consistently shows a unimodal distribution, with the energy largely concentrated in a small neighborhood around the unique peak. For the spectra of other celestial bodies, by contrast, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based method for the automatic detection of late-type stars. Extensive experiments show the proposed method to be practical and robust.
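The energy-concentration criterion can be caricatured as follows, with a wide moving average standing in for the 5th-scale wavelet coefficients; the window length and bandwidth are assumed values, not those of the paper.

```python
import numpy as np

def band_energy_concentration(flux, window=32, halfwidth=5):
    """Fraction of Fourier power near the dominant frequency of a
    coarse-scale version of the flux. Values near 1 correspond to the
    unimodal, concentrated spectrum the authors report for late-type
    stars; a moving average is only a rough stand-in for the wavelet."""
    kernel = np.ones(window) / window
    coarse = np.convolve(flux - flux.mean(), kernel, mode='same')
    power = np.abs(np.fft.rfft(coarse)) ** 2
    peak = int(np.argmax(power))
    lo = max(0, peak - halfwidth)
    return power[lo:peak + halfwidth + 1].sum() / power.sum()
```

A detection rule would then threshold this ratio: broad molecular bands give a concentrated low-frequency spectrum, while other objects spread their coarse-scale energy across many frequencies.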
VizieR Online Data Catalog: AMBRE project. FEROS archived spectra (Worley+, 2012)
NASA Astrophysics Data System (ADS)
Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.; Ordenovic, C.
2017-11-01
This first release concerns the FEROS data collected from October 2005 to December 2009. The spectra were reduced by ESO with the corresponding automatic pipeline and then sent to the Observatoire de la Côte d'Azur (OCA, Nice) for ingestion into a dedicated pipeline (see paper). All FEROS spectra cover the domain 350 nm-920 nm at a resolution of about 48 000. Before their ingestion into MATISSE, these spectra were convolved to a lower resolution (Δλ=0.33 Å), sliced and resampled (total number of pixels = 11 890). (1 data file).
Quality of clinical brain tumor MR spectra judged by humans and machine learning tools.
Kyathanahally, Sreenath P; Mocioiu, Victor; Pedrosa de Barros, Nuno; Slotboom, Johannes; Wright, Alan J; Julià-Sapé, Margarida; Arús, Carles; Kreis, Roland
2018-05-01
To investigate and compare human judgment and machine-learning tools for the quality assessment of clinical MR spectra of brain tumors. A very large set of 2574 single-voxel spectra with short and long echo times from the eTUMOUR and INTERPRET databases was used for this analysis. Original human quality ratings from these studies, as well as new human guidelines, were used to train different machine-learning algorithms for automatic quality control (AQC) based on various feature-extraction methods and classification tools. The performance was compared with the variance in human judgment. The AQC built using the RUSBoost classifier, which combats imbalanced training data, performed best. When furnished with a large range of spectral and derived features, with the most crucial ones selected by the TreeBagger algorithm, it showed better specificity (98%) in judging spectra from an independent test set than previously published methods. Optimal performance was reached with a virtual three-class ranking system. Our results suggest that the feature space should be relatively large for MR tumor spectra and that three-class labels may be beneficial for AQC. The best AQC algorithm rejected spectra with a performance comparable to that of a panel of human expert spectroscopists. Magn Reson Med 79:2500-2510, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
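The core RUSBoost idea, undersampling the majority class before boosting, can be sketched with scikit-learn's AdaBoost as the booster. This is a stand-in for illustration (imblearn's RUSBoostClassifier would be the closer match), not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def rusboost_like(X, y, seed=0):
    """Random undersampling of the majority class followed by boosting,
    a simple stand-in for RUSBoost on imbalanced quality labels."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n, replace=False)
        for c in classes
    ])
    return AdaBoostClassifier(random_state=seed).fit(X[keep], y[keep])
```

Balancing the classes before boosting is what keeps the ensemble from simply voting "good quality" for everything when bad spectra are rare.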
ORBS: A reduction software for SITELLE and SpiOMM data
NASA Astrophysics Data System (ADS)
Martin, Thomas
2014-09-01
ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data-reduction software package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube that samples a 12-arcminute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed for use in a suite of software packages for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).
Automatic poisson peak harvesting for high throughput protein identification.
Breen, E J; Hopwood, F G; Williams, K L; Wilkins, M R
2000-06-01
High-throughput identification of proteins by peptide mass fingerprinting requires an efficient means of picking peaks from mass spectra. Here, we report the development of a peak harvester to automatically pick monoisotopic peaks from spectra generated on matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) mass spectrometers. The peak harvester uses advanced mathematical morphology and watershed algorithms to first process spectra to stick representations. Subsequently, Poisson modelling is applied to determine which peak in an isotopically resolved group represents the monoisotopic mass of a peptide. We illustrate the features of the peak harvester with mass spectra of standard peptides, digests of gel-separated bovine serum albumin, and Escherichia coli proteins prepared by two-dimensional polyacrylamide gel electrophoresis. In all cases, the peak harvester picked the same monoisotopic peaks as an experienced human operator, and also proved effective in identifying monoisotopic masses where the isotopic distributions of peptides overlapped. The peak harvester can be operated in an interactive mode, or can be completely automated and linked to peptide mass fingerprinting protein-identification tools to achieve high-throughput automated protein identification.
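The Poisson step can be illustrated as follows. The rate constant linking peptide mass to the Poisson mean is an assumed, averagine-style value, not the paper's fitted model, and real data would require intensity noise handling.

```python
import numpy as np
from scipy import stats

LAMBDA_PER_DA = 0.0005   # assumed isotope rate per dalton (averagine-like)

def poisson_error(masses, intensities):
    """Squared error between a normalised isotope envelope and a Poisson
    model whose mean grows with the mass of the first peak."""
    obs = np.asarray(intensities, float)
    obs = obs / obs.sum()
    lam = LAMBDA_PER_DA * masses[0]
    model = stats.poisson.pmf(np.arange(obs.size), lam)
    return float(np.sum((obs - model) ** 2))

def monoisotopic_mass(masses, intensities):
    # try each peak as the cluster start; the best Poisson fit wins
    errors = [poisson_error(masses[i:], intensities[i:])
              for i in range(len(masses) - 2)]
    return masses[int(np.argmin(errors))]
```

The sliding start is what resolves ambiguous clusters: only when the true monoisotopic peak leads the group does the envelope decay like the Poisson model predicts.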
Dyrlund, Thomas F; Poulsen, Ebbe T; Scavenius, Carsten; Sanggaard, Kristian W; Enghild, Jan J
2012-09-01
Data processing and analysis of proteomics data are challenging and time consuming. In this paper, we present MS Data Miner (MDM) (http://sourceforge.net/p/msdataminer), a freely available web-based software solution aimed at minimizing the time required for the analysis, validation, comparison, and presentation of data files generated by MS software, including Mascot (Matrix Science), Mascot Distiller (Matrix Science), and ProteinPilot (AB Sciex). The program was developed to significantly decrease the time required to process large proteomic data sets for publication. This open-source system includes a spectra validation system and an automatic screenshot generation tool for Mascot-assigned spectra. In addition, a Gene Ontology term analysis function and a tool for generating comparative Excel data reports are included. We illustrate the benefits of MDM in a proteomics study comprising more than 200 LC-MS/MS analyses recorded on an AB Sciex TripleTOF 5600, identifying more than 3000 unique proteins and 3.5 million peptides. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
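The least-squares procedure and the library-only error coefficients described in points 1-2 can be sketched directly in NumPy. The Gaussian "extinction spectra" below are placeholders for real nucleoside libraries, and the error-coefficient expression (square roots of the diagonal of (AᵀA)⁻¹) is the standard least-squares form consistent with the abstract's statement that the coefficients depend only on the library.

```python
import numpy as np

def fit_mixture(library, extinction):
    """Least-squares multicomponent analysis: columns of `library` are
    pure-component extinction spectra sampled at the measurement
    wavelengths, `extinction` is the mixture curve. Returns estimated
    concentrations and the library-only error coefficients."""
    c, *_ = np.linalg.lstsq(library, extinction, rcond=None)
    error_coeffs = np.sqrt(np.diag(np.linalg.inv(library.T @ library)))
    return c, error_coeffs
```

The error coefficients scale the extinction-measurement error into a concentration error, so comparing them across candidate libraries identifies the conditions of maximum accuracy, as in point 3.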
NASA Astrophysics Data System (ADS)
Kobayashi, Kiyoshi; Suzuki, Tohru S.
2018-03-01
A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by a simple artificial-intelligence function applied to the output of the Levenberg-Marquardt module. Through iterated model modifications, the program finds an adequate equivalent-circuit model without any user input.
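The complex least-squares step can be sketched as below, with SciPy's Levenberg-Marquardt driver standing in for the paper's module; the R0 + (R1 ∥ C1) circuit and all parameter values are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import least_squares

# Impedance of a hypothetical R0 + (R1 || C1) circuit -- built from the
# partial-impedance elements the algorithm composes.
def z_model(p, w):
    r0, r1, c1 = p
    return r0 + r1 / (1.0 + 1j * w * r1 * c1)

w = np.logspace(-1, 5, 60)                    # angular frequencies (rad/s)
z_meas = z_model([10.0, 100.0, 1e-5], w)      # synthetic "measurement"

# Complex least squares: stack real and imaginary residuals so a real-valued
# Levenberg-Marquardt solver can refine the parameters.
def residuals(p):
    d = z_model(p, w) - z_meas
    return np.concatenate([d.real, d.imag])

fit = least_squares(residuals, x0=[5.0, 80.0, 5e-6], method="lm")
```

In the paper's algorithm, the fit quality reported by this step is what the artificial-intelligence function inspects before deciding whether to modify the candidate circuit.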
BinMag: Widget for comparing stellar observed with theoretical spectra
NASA Astrophysics Data System (ADS)
Kochukhov, O.
2018-05-01
BinMag examines theoretical stellar spectra computed with the Synth/SynthMag/Synmast/Synth3/SME spectrum synthesis codes and compares them to observations. An IDL widget program, BinMag applies radial velocity shifts and broadening to the theoretical spectra to account for the effects of stellar rotation, radial-tangential macroturbulence, and instrumental smearing. The code can also simulate spectra of spectroscopic binary stars by appropriate coaddition of two synthetic spectra. Additionally, BinMag can be used to measure equivalent widths, fit line profile shapes with analytical functions, and automatically determine radial velocity and broadening parameters. BinMag interfaces with the Synth3 (ascl:1212.010) and SME (ascl:1202.013) codes, allowing the user to determine chemical abundances and stellar atmospheric parameters from the observed spectra.
Pulsed Neutron Elemental On-Line Material Analyzer
Vourvopoulos, George
2002-08-20
An on-line material analyzer which utilizes pulsed neutron generation to determine the composition of material flowing through the apparatus. The on-line elemental material analyzer is based on a pulsed neutron generator. The elements in the material interact with the fast and thermal neutrons produced by the pulsed generator. Spectra of gamma rays produced by fast-neutron interactions with elements of the material are analyzed and stored separately from spectra produced by thermal-neutron reactions. Measurements of neutron activation take place separately from the above reactions and at a distance from the neutron generator. A primary passageway allows the material to flow through at a constant rate and provides data corresponding to fast and thermal neutron reactions. A secondary passageway meters the material to allow for neutron activation analysis. The apparatus also has the capability to determine the density of the flowing material. Finally, the apparatus continually employs a neutron detector to normalize the yield of the gamma-ray detectors, thereby automatically calibrating and adjusting the spectral data for fluctuations in neutron generation.
Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon
2013-05-15
Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of mass spectra can have a significant negative influence on further post-processing, such as quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and from the quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure cleans up the deconvoluted mass spectra by retaining the intensity counts from m/z channels that originate from a specific compound (for example, the molecular ion, adducts, 13C isotopes, and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It shows clear advantages over approaches such as peak picking and de-isotoping in the sense that all information is retained while non-informative data are removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.
2017-06-01
Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance for obtaining reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively and simultaneously optimizes the relative frequencies and phases between the mean ON and OFF 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least-squares minimization (L2 norm), at all investigated SNR levels. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization, or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold-standard reference (d). Automatically corrected data from both method (b) and method (c) showed distinct improvements in spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF and ON. Method (c) yielded a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF spectrum. In conclusion, difference optimization performs robustly with no restrictions on the input data range or user intervention and represents a complementary tool to optimize the final DIFF spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging.
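A toy version of the L1 difference optimization might look as follows; the synthetic free-induction decay and the preset frequency/phase errors are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

n = 512
t = np.arange(n)
fid = np.exp(-t / 60.0) * np.cos(0.2 * t)   # toy free-induction decay
off = np.fft.fft(fid)                       # reference OFF spectrum

# Corrupt the ON scan with preset errors (cycles/sample and radians).
df_true, phi_true = 0.002, 0.3
on_fid = fid * np.exp(1j * (2 * np.pi * df_true * t + phi_true))

# Difference optimization: remove a trial (df, phi) from the ON scan and
# minimize the summed magnitude of the difference spectrum (L1 norm).
def cost(p):
    corrected = on_fid * np.exp(-1j * (2 * np.pi * p[0] * t + p[1]))
    return np.sum(np.abs(np.fft.fft(corrected) - off))

res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
```

At the optimum the difference spectrum vanishes, so the recovered parameters should approach the preset errors.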
Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors
NASA Astrophysics Data System (ADS)
Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.
2014-07-01
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve their reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both absorption-line and emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches onto a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum-likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and a lowering of redshift uncertainties, with a median velocity uncertainty of 33 km s⁻¹.
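The core of such a cross-correlation redshift estimate, resampling onto a log-wavelength grid so that a redshift becomes a pure shift, can be sketched as follows; the two-line template is a made-up stand-in, not a GAMA template.

```python
import numpy as np

# Common log-wavelength grid (Angstroms); a redshift z shifts every
# feature by the same amount ln(1 + z) on this axis.
n = 2048
loglam = np.linspace(np.log(4000.0), np.log(9000.0), n)
dlog = loglam[1] - loglam[0]

def line(center_angstrom, width=0.002):
    return np.exp(-0.5 * ((loglam - np.log(center_angstrom)) / width) ** 2)

template = line(4861.0) + 0.7 * line(6563.0)   # toy Hbeta + Halpha template
z_true = 0.05
observed = line(4861.0 * (1 + z_true)) + 0.7 * line(6563.0 * (1 + z_true))

# Cross-correlate mean-subtracted spectra; the lag of the peak gives ln(1+z).
xc = np.correlate(observed - observed.mean(), template - template.mean(),
                  mode="full")
lag = np.argmax(xc) - (n - 1)
z_est = np.exp(lag * dlog) - 1.0
```

A production code would additionally fit the correlation peak at sub-pixel resolution and convert the peak height into a confidence figure of merit, as the abstract describes.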
NASA Astrophysics Data System (ADS)
Zollo, Aldo
2016-04-01
RISS S.r.l. is a spin-off company recently founded by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software that offers an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, after which the software performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates.
First, the local magnitude (Ml) is computed using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra. The duration magnitude (Md) is rapidly computed, based on a simple and automatic measurement of the seismic wave coda duration. Starting from the magnitude estimates, other relevant pieces of information are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps are produced on a Google map for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), together with a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the software automatically discriminates between local earthquakes occurring within the network and regional/teleseismic events occurring outside the network. Finally, for the largest events, if a sufficient number of P-wave polarity readings are available, the focal mechanism is also computed. For each event, all of the available pieces of information are stored in a local database and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with their associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded to the database, and from the web interface of the Bulletin the traces can be downloaded for more specific analysis.
This innovative software represents a smart solution, with a friendly and interactive interface, for high-level analysis of seismic data, and it may be a relevant tool not only for seismologists but also for non-expert external users interested in seismological data. The software is a valid tool for the automatic analysis of background seismicity at different time scales and can be relevant for the monitoring of both natural and induced seismicity.
Automatic alignment of individual peaks in large high-resolution spectral data sets
NASA Astrophysics Data System (ADS)
Stoyanova, Radka; Nicholls, Andrew W.; Nicholson, Jeremy K.; Lindon, John C.; Brown, Truman R.
2004-10-01
Pattern recognition techniques are effective tools for reducing the information contained in large spectral data sets to a much smaller number of significant features which can then be used to make interpretations about the chemical or biochemical system under study. Often the effectiveness of such approaches is impeded by experimental and instrument-induced variations in the position, phase, and line width of the spectral peaks. Although characterizing the cause and magnitude of these fluctuations can be important in its own right (pH-induced NMR chemical shift changes, for example), in general they obscure the process of pattern discovery. One major area of application is the use of large databases of 1H NMR spectra of biofluids such as urine for investigating perturbations in metabolic profiles caused by drugs or disease, a process now termed metabonomics. Frequency shifts of individual peaks are the dominant source of such unwanted variations in this type of data. In this paper, an automatic procedure for aligning the individual peaks in the data set is described and evaluated. The proposed method will be vital for the efficient and automatic analysis of large metabonomic data sets and should also be applicable to other types of data.
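A minimal sketch of aligning an individual peak by an integer shift, a simplified stand-in for the paper's procedure, with an invented Gaussian peak and window:

```python
import numpy as np

# Align one peak region of a spectrum to a reference by testing small
# integer shifts of the segment and keeping the best-correlated one.
def align_segment(ref, spec, lo, hi, max_shift=10):
    seg_ref = ref[lo:hi] - ref[lo:hi].mean()
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        seg = spec[lo + s:hi + s]
        c = np.dot(seg_ref, seg - seg.mean())
        if c > best_corr:
            best_corr, best_shift = c, s
    aligned = spec.copy()
    aligned[lo:hi] = spec[lo + best_shift:hi + best_shift]
    return aligned, best_shift

x = np.linspace(0.0, 1.0, 400)
ref = np.exp(-0.5 * ((x - 0.50) / 0.01) ** 2)       # reference peak
drifted = np.exp(-0.5 * ((x - 0.51) / 0.01) ** 2)   # peak drifted ~4 bins
aligned, shift = align_segment(ref, drifted, 150, 250)
```

Because each peak gets its own window and shift, this per-peak alignment handles the independent, pH-driven shifts of individual resonances that a single global shift of the whole spectrum cannot correct.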
NASA Astrophysics Data System (ADS)
Nestares, Oscar; Miravet, Carlos; Santamaria, Javier; Fonolla Navarro, Rafael
1999-05-01
Automatic object segmentation in highly noisy image sequences, composed of a translating object over a background having a different motion, is achieved through joint motion-texture analysis. Local motion and/or texture is characterized by the energy of the local spatio-temporal spectrum, as different textures undergoing different translational motions display distinctive features in their 3D (x,y,t) spectra. Measurements of local spectrum energy are obtained using a bank of directional 3rd-order Gaussian derivative filters in a multiresolution pyramid in space-time (10 directions, 3 resolution levels). These 30 energy measurements form a feature vector describing texture-motion for every pixel in the sequence. To improve discrimination capability and reduce computational cost, we automatically select the 4 features (channels) that best discriminate object from background, under the assumptions that the object is smaller than the background and has a different velocity or texture. In this way we reject features that are irrelevant or dominated by noise and could yield incorrect segmentation results. This method has been successfully applied to sequences with extremely low visibility and to objects that are invisible to the eye in the absence of motion.
Rapid detection of bacterial contamination in cell or tissue cultures based on Raman spectroscopy
NASA Astrophysics Data System (ADS)
Bolwien, Carsten; Sulz, Gerd; Becker, Sebastian; Thielecke, Hagen; Mertsching, Heike; Koch, Steffen
2008-02-01
Monitoring the sterility of cell or tissue cultures is an essential task, particularly in the fields of regenerative medicine and tissue engineering when implanting cells into the human body. We present a system based on a commercially available microscope equipped with a microfluidic cell that prepares the particles found in the solution for analysis, a Raman-spectrometer attachment optimized for non-destructive, rapid recording of Raman spectra, and a data acquisition and analysis tool for identification of the particles. In contrast to conventional sterility testing, in which samples are incubated over weeks, our system is able to analyze milliliters of supernatant or cell suspension within hours by filtering relevant particles and placing them on a Raman-friendly substrate in the microfluidic cell. Critical particles are first identified via microscopic imaging and subsequent image analysis; micro-Raman analysis of those particles is then carried out with an excitation wavelength of 785 nm. The potential of this setup is demonstrated by results from artificial contamination of samples with a pool of bacteria, fungi, and spores: single-channel spectra of the critical particles are automatically baseline-corrected without using background data and classified via hierarchical cluster analysis, showing great promise for accurate and rapid detection and identification of contaminants.
An automatic molecular beam microwave Fourier transform spectrometer
NASA Astrophysics Data System (ADS)
Andresen, U.; Dreizler, H.; Grabow, J.-U.; Stahl, W.
1990-12-01
The general setup of an automatic MB-MWFT spectrometer for use in the 4-18 GHz range and its software details are discussed. The experimental control and data handling are performed on a personal computer using an interactive program. The parameters of the MW source and the resonator are controlled via IEEE bus and several serial interface ports. The tuning and measuring processes are automated and the efficiency is increased if unknown spectra are to be scanned. As an example, the spectrum of carbonyl sulfide has been measured automatically. The spectrometer is superior to all other kinds of rotational spectroscopic methods in both speed and unambiguity.
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new, upgraded version of ARES. The new version includes a series of interesting new features, such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is fully compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra - ARES webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
NASA Astrophysics Data System (ADS)
Unglert, K.; Radić, V.; Jellinek, A. M.
2016-06-01
Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as self-organizing maps (SOM) and principal component analysis (PCA) can help to quickly and automatically identify important patterns related to impending eruptions. For the first time, we evaluate the performance of SOM and PCA on synthetic volcano seismic spectra constructed from observations during two well-studied eruptions at Kīlauea Volcano, Hawai'i, that include features observed in many volcanic settings. In particular, our objective is to test which of the techniques can best retrieve a set of three spectral patterns that we used to compose a synthetic spectrogram. We find that, without a priori knowledge of the given set of patterns, neither SOM nor PCA can directly recover the spectra. We thus test hierarchical clustering, a commonly used method, to investigate whether clustering in the space of the principal components and on the SOM, respectively, can retrieve the known patterns. Our clustering method applied to the SOM fails to detect the correct number and shape of the known input spectra. In contrast, clustering of the data reconstructed by the first three PCA modes reproduces these patterns and their occurrence in time more consistently. This result suggests that PCA in combination with hierarchical clustering is a powerful practical tool for automated identification of characteristic patterns in volcano seismic spectra. Our results indicate that, in contrast to PCA, common clustering algorithms may not be ideal for grouping patterns on the SOM and that it is crucial to evaluate the performance of these tools on a control dataset prior to their application to real data.
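The PCA-plus-hierarchical-clustering pipeline can be sketched on a synthetic set of three spectral patterns; the pattern shapes and noise level below are our invention, not the paper's test spectrogram.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Three synthetic spectral patterns plus noise, loosely mimicking the
# controlled test described in the abstract (all values illustrative).
rng = np.random.default_rng(0)
f = np.linspace(0.0, 10.0, 200)
patterns = np.vstack([np.exp(-0.5 * ((f - c) / 0.3) ** 2) for c in (2, 5, 8)])
true_labels = rng.integers(0, 3, 300)
X = patterns[true_labels] + 0.05 * rng.standard_normal((300, 200))

# PCA via SVD: keep the first three modes and reconstruct the data,
# discarding the noise carried by the remaining modes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_rec = (U[:, :3] * S[:3]) @ Vt[:3] + X.mean(axis=0)

# Hierarchical (Ward) clustering in the reconstructed space.
Z = linkage(X_rec, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

With well-separated patterns, the three recovered clusters coincide with the three known input spectra, which is the behavior the abstract reports for PCA reconstruction followed by clustering.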
NASA Astrophysics Data System (ADS)
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high-throughput imaging systems can be useful for estimating mesozooplankton community size and taxonomic descriptors that can form the base of consistent large-scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol has been applied to a high-frequency time series (on average every second day between April 2003 and April 2004) of zooplankton obtained at a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred eighty-four net-collected mesozooplankton samples were analysed with a Zooscan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification, with more than 10,000 objects, enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). In addition, total copepod size spectra underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high-throughput imaging systems is of great interest for extracting relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics.
Innovative zooplankton analyses are therefore proposed and open the way for further development of zooplankton community indicators of changes.
Pattern recognition and classification of vibrational spectra by artificial neural networks
NASA Astrophysics Data System (ADS)
Yang, Husheng
1999-10-01
A drawback of current open-path Fourier transform infrared (OP/FT-IR) systems is that they need a human expert to determine those compounds that may be quantified from a given spectrum. In this study, three types of artificial neural networks were used to alleviate this problem. Firstly, multi-layer feed-forward neural networks were used to automatically recognize compounds in an OP/FT-IR spectrum. Each neural network was trained to recognize one compound in the presence of up to ten interferents in an OP/FT-IR spectrum. The networks were successfully used to recognize five alcohols and two chlorinated compounds in field-measured controlled-release OP/FT-IR spectra of mixtures of these compounds. It has also been demonstrated that a neural network could correctly identify a spectrum in the presence of an interferent that was not included in the training set and could also reject interferents it has not seen before. Secondly, the possibility of using one- and two- dimensional Kohonen self-organizing maps (SOMs) to recognize similarities in low-resolution vapor-phase infrared spectra without any additional information has been investigated. Both full-range reference spectra and open-path window reference spectra were used to train the networks and the trained networks were then used to classify the reference spectra into several groups. The results showed that the SOMs obtained from the two different training sets were quite different, and it is more appropriate to use the second SOM in OP/FT-IR spectrometry. Thirdly, vapor-phase FT-IR reference spectra of five alcohols along with four baseline spectra were encoded as prototype vectors for a Hopfield network. Inclusion of the baseline spectra allowed the network to classify spectra as unknowns, when the reference spectra of these compounds were not stored as prototype vectors in the network. The network could identify each of the 5 alcohols correctly even in the presence of noise and interfering compounds. 
Finally, one- and two-dimensional Kohonen SOMs were also successfully used for the unsupervised differentiation of the Fourier transform Raman spectra of hardwoods from softwoods. A semi-quantitative method that is based on the Euclidean distances of the weight matrix has been developed to assist the automatic clustering of the neurons in a two-dimensional SOM.
Conditional adaptive Bayesian spectral analysis of nonstationary biomedical time series.
Bruce, Scott A; Hall, Martica H; Buysse, Daniel J; Krafty, Robert T
2018-03-01
Many studies of biomedical time series signals aim to measure the association between frequency-domain properties of time series and clinical and behavioral covariates. However, the time-varying dynamics of these associations are largely ignored due to a lack of methods that can assess the changing nature of the relationship through time. This article introduces a method for the simultaneous and automatic analysis of the association between the time-varying power spectrum and covariates, which we refer to as conditional adaptive Bayesian spectrum analysis (CABS). The procedure adaptively partitions the grid of time and covariate values into an unknown number of approximately stationary blocks and nonparametrically estimates local spectra within blocks through penalized splines. CABS is formulated in a fully Bayesian framework, in which the number and locations of partition points are random, and fit using reversible jump Markov chain Monte Carlo techniques. Estimation and inference averaged over the distribution of partitions allows for the accurate analysis of spectra with both smooth and abrupt changes. The proposed methodology is used to analyze the association between the time-varying spectrum of heart rate variability and self-reported sleep quality in a study of older adults serving as the primary caregiver for their ill spouse. © 2017, The International Biometric Society.
Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.
2017-01-01
Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115
Measurement of the edge plasma rotation on J-TEXT tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Z. F.; Luo, J.; Wang, Z. J.
2013-07-15
A multi-channel high-resolution spectrometer was developed for the measurement of edge plasma rotation on the J-TEXT tokamak. With the design of two opposite viewing directions, the poloidal and toroidal rotations can be measured simultaneously, with a velocity accuracy of up to 1 km/s. The photon flux was enhanced by utilizing combined optical fibers; with this design, the time resolution reaches 3 ms. Assistant software, "Spectra Assist," was developed to carry out spectrometer control and data analysis automatically. A multi-channel monochromatic analyzer is designed to obtain the location of chosen ions simultaneously through inversion analysis. Some preliminary experimental results on the influence of plasma density, different magnetohydrodynamic behaviors, and the application of a biased electrode are presented.
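The velocity measurement itself rests on the Doppler shift of an impurity emission line; a minimal sketch (the line and shift values below are hypothetical, not J-TEXT data):

```python
C_KM_S = 2.998e5  # speed of light in km/s

def rotation_velocity(lambda_obs, lambda_rest):
    """Line-of-sight velocity from the Doppler shift of a spectral line."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Example: a line at rest wavelength 464.742 nm observed shifted by 1.5 pm.
v = rotation_velocity(464.7435, 464.742)
print(v)  # km/s, of order the 1 km/s accuracy quoted above
```

With two opposite viewing directions, the instrumental wavelength calibration cancels: halving the difference of the two oppositely viewed shifts gives the rotation component along the sightline.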
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
An algorithm was developed for the continuous and automatic computation of Doppler noise concurrently at four sample rate intervals, evenly spanning three orders of magnitude. Average temporal Doppler phase fluctuation spectra will be routinely available in the DSN tracking system Mark III-77 and require little additional processing. The basic (noise) data will be extracted from the archival tracking data file (ATDF) of the tracking data management system.
NASA Astrophysics Data System (ADS)
Škoda, Petr; Palička, Andrej; Koza, Jakub; Shakurova, Ksenia
2017-06-01
The current archives of the LAMOST multi-object spectrograph contain millions of fully reduced spectra, from which automatic pipelines have produced catalogues of many parameters of individual objects, including their approximate spectral classification. This is, however, mostly based on the global shape of the whole spectrum and on integral properties of spectra in given bandpasses, namely the presence and equivalent width of prominent spectral lines, while for the identification of some interesting object types (e.g. Be stars or quasars) the detailed shape of only a few lines is crucial. Here machine learning brings a new methodology capable of improving the reliability of classification of such objects even in boundary cases. We present results of Spark-based semi-supervised machine learning of LAMOST spectra attempting to automatically identify the single- and double-peak emission of the Hα line typical of Be and B[e] stars. The labelled sample was obtained from the archive of the 2 m Perek telescope at Ondřejov observatory. A simple physical model of spectrograph resolution was used for domain adaptation to the LAMOST training domain. The resulting list of candidates contains dozens of Be stars (some likely as yet unknown), but also a number of interesting objects resembling spectra of quasars and even blazars, as well as many instrumental artefacts. The verification of the nature of interesting candidates benefited considerably from cross-matching and visualisation in the Virtual Observatory environment.
EEG data reduction by means of autoregressive representation and discriminant analysis procedures.
Blinowska, K J; Czerwosz, L T; Drabik, W; Franaszczuk, P J; Ekiert, H
1981-06-01
A program for automatic evaluation of EEG spectra, providing considerable reduction of data, was devised. Artefacts were eliminated in two steps: first, the longer-duration eye movement artefacts were removed by a fast and simple 'moving integral' method; then occasional spikes were identified by means of a detection function defined in the formalism of the autoregressive (AR) model. The evaluation of power spectra was performed by means of an FFT and an autoregressive representation, which made possible the comparison of both methods. The spectra obtained by means of the AR model had much smaller statistical fluctuations and better resolution, enabling us to follow the time changes of the EEG pattern. Another advantage of the autoregressive approach was the parametric description of the signal. This last property appeared to be essential in distinguishing the changes in the EEG pattern. In a drug study, the application of the coefficients of the AR model as input parameters in the discriminant analysis, instead of arbitrarily chosen frequency bands, brought a significant improvement in distinguishing the effects of the medication. The favourable properties of the AR model are connected with the fact that the above approach fulfils the maximum entropy principle. This means that the method describes the available information in a maximally consistent way and is free from additional assumptions, which is not the case for the FFT estimate.
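A minimal sketch of the AR spectral estimate described above, using the Yule-Walker (autocorrelation) equations on a synthetic EEG-like signal; the model order and all signal parameters are illustrative:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients and innovation variance via Yule-Walker."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    sigma2 = r[0] - np.dot(a, r[1:order + 1])
    return a, sigma2

def ar_psd(a, sigma2, freqs, fs):
    """AR-model power spectral density on the given frequency grid."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs / fs, k)) @ a) ** 2
    return sigma2 / (fs * denom)

fs = 128.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))  # 10 Hz "alpha"

a, s2 = yule_walker(eeg_like, order=8)
freqs = np.linspace(0.5, 30, 300)
psd = ar_psd(a, s2, freqs, fs)
print(freqs[np.argmax(psd)])  # spectral peak near 10 Hz
```

The handful of AR coefficients in `a` is exactly the kind of parametric description the abstract refers to: they summarize the spectrum compactly and can feed a discriminant analysis directly, instead of band powers in arbitrarily chosen frequency bands.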
Rapid analysis of ecstasy and related phenethylamines in seized tablets by Raman spectroscopy.
Bell, S E; Burns, D T; Dennis, A C; Speers, J S
2000-03-01
Raman spectroscopy with far-red excitation has been used to study seized, tableted samples of MDMA (N-methyl-3,4-methylenedioxyamphetamine) and related compounds (MDA, MDEA, MBDB, 2C-B and amphetamine sulfate), as well as pure standards of these drugs. We have found that by using far-red (785 nm) excitation the level of fluorescence background even in untreated seized samples is sufficiently low that there is little difficulty in obtaining good quality data with moderate 2 min data accumulation times. The spectra can be used to distinguish between even chemically-similar substances, such as the geometrical isomers MDEA and MBDB, and between different polymorphic/hydrated forms of the same drug. Moreover, these differences can be found even in directly recorded spectra of seized samples which have been bulked with other materials, giving a rapid and non-destructive method for drug identification. The spectra can be processed to give unambiguous identification of both drug and excipients (even when more than one compound has been used as the bulking agent) and the relative intensities of drug and excipient bands can be used for quantitative or at least semi-quantitative analysis. Finally, the simple nature of the measurements lends itself to automatic sample handling so that sample throughputs of 20 samples per hour can be achieved with no real difficulty.
Gabadinho, José; Beteva, Antonia; Guijarro, Matias; Rey-Bakaikoa, Vicente; Spruce, Darren; Bowler, Matthew W.; Brockhauser, Sandor; Flot, David; Gordon, Elspeth J.; Hall, David R.; Lavault, Bernard; McCarthy, Andrew A.; McCarthy, Joanne; Mitchell, Edward; Monaco, Stéphanie; Mueller-Dieckmann, Christoph; Nurizzo, Didier; Ravelli, Raimond B. G.; Thibault, Xavier; Walsh, Martin A.; Leonard, Gordon A.; McSweeney, Sean M.
2010-01-01
The design and features of a beamline control software system for macromolecular crystallography (MX) experiments developed at the European Synchrotron Radiation Facility (ESRF) are described. This system, MxCuBE, allows users to easily and simply interact with beamline hardware components and provides automated routines for common tasks in the operation of a synchrotron beamline dedicated to experiments in MX. Additional functionality is provided through intuitive interfaces that enable the assessment of the diffraction characteristics of samples, experiment planning, automatic data collection and the on-line collection and analysis of X-ray emission spectra. The software can be run in a tandem client-server mode that allows for remote control, and relevant experimental parameters and results are automatically logged in a relational database, ISPyB. MxCuBE is modular, flexible and extensible and is currently deployed on eight macromolecular crystallography beamlines at the ESRF. Additionally, the software is installed at MAX-lab beamline I911-3 and at BESSY beamline BL14.1. PMID:20724792
Williams, D. Keith; Muddiman, David C.
2008-01-01
Fourier transform ion cyclotron resonance mass spectrometry has the ability to achieve unprecedented mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA depends significantly on the total number of ions trapped in the ICR cell for a particular measurement. Even with the use of automatic gain control (AGC), the total ion population is not constant between spectra. Multiple linear regression calibration in conjunction with AGC is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. This approach extends the dynamic range of the instrument while allowing mean MMA values to remain less than 1 ppm. In addition, multiple linear regression calibration is used to account for both differences in total ion population in the ICR cell and the relative ion abundance of a given species, which affords mean MMA values at the parts-per-billion level. PMID:17539605
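The idea of regressing observed m/z on both frequency terms and an ion-population term can be sketched as follows. The functional form and all numbers are simplified stand-ins, not the calibration law used in the paper:

```python
import numpy as np

# Synthetic calibrant data generated from an assumed relation
#   m/z = A/f + B*G/f**2,
# where f is cyclotron frequency and G the trapped ion population
# (a space-charge-like term). A_true, B_true are hypothetical.
A_true, B_true = 1.0e8, 2.0e3
rng = np.random.default_rng(2)
f = rng.uniform(1e5, 5e5, 30)      # cyclotron frequencies (Hz)
G = rng.uniform(1e6, 5e6, 30)      # total trapped ion population per spectrum
mz = A_true / f + B_true * G / f**2

# Multiple linear regression: design matrix with 1/f and G/f**2 regressors.
X = np.column_stack([1.0 / f, G / f**2])
coef, *_ = np.linalg.lstsq(X, mz, rcond=None)
print(coef)  # recovers [A_true, B_true] on this noise-free data
```

With the population term in the design matrix, spectra acquired at different total ion counts share one calibration instead of each needing its own, which is what lets the dynamic range grow without sacrificing MMA.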
NASA Technical Reports Server (NTRS)
Vincent, R. K.
1974-01-01
Four independent investigations are reported; in general these are concerned with improving and utilizing the correlation between the physical properties of natural materials as evidenced in laboratory spectra and spectral data collected by multispectral scanners. In one investigation, two theoretical models were devised that permit the calculation of spectral emittance spectra for rock and mineral surfaces of various particle sizes. The simpler of the two models can be used to qualitatively predict the effect of texture on the spectral emittance of rocks and minerals; it is also potentially useful as an aid in predicting the identification of natural atmospheric aerosol constituents. The second investigation determined, via an infrared ratio imaging technique, the best pair of infrared filters for silicate rock-type discrimination. In a third investigation, laboratory spectra of natural materials were compressed into 11-digit ratio codes for use in feature selection, in searches for false alarm candidates, and eventually for use as training sets in completely automatic data processors. In the fourth investigation, general outlines of a ratio preprocessor and an automatic recognition map processor are developed for on-board data processing in the space shuttle era.
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
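The time-zero extrapolation at the heart of HSQC(0) is a log-linear fit over the repetition number; a sketch with synthetic peak volumes (an assumed 15% signal loss per repetition, purely illustrative):

```python
import numpy as np

def hsqc0_volume(reps, volumes):
    """Extrapolate peak volumes to zero repetitions (time zero) via a
    log-linear fit: V_n = V_0 * A**n  =>  ln V_n = ln V_0 + n ln A."""
    slope, intercept = np.polyfit(reps, np.log(volumes), 1)
    return np.exp(intercept)

# Synthetic peak volumes from HSQC1..HSQC3 with 15% loss per repetition.
v0_true = 100.0
reps = np.array([1, 2, 3])
volumes = v0_true * 0.85 ** reps
print(hsqc0_volume(reps, volumes))  # recovers 100.0
```

Because the extrapolated volume is free of the per-repetition attenuation, it is directly proportional to concentration, which is what makes the two-point internal referencing (DSS low, MES high) described above work.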
Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction
Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang
2016-01-01
Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only overcomes the influence of large peaks but also solves the problem of low correction accuracy when the peak number is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, and wavenumber. PMID:26037638
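For orientation, a simplified iterative polynomial baseline corrector (ModPoly-style clipping, not Goldindec's cost function) can be written as follows, on a synthetic spectrum with a quadratic fluorescence background:

```python
import numpy as np

def iterative_polyfit_baseline(y, degree=4, iterations=50):
    """Simplified iterative polynomial baseline estimate: points above the
    current fit are clipped down each pass, so sharp peaks stop pulling
    the polynomial upward and it settles onto the background."""
    x = np.linspace(-1, 1, len(y))
    work = y.copy()
    for _ in range(iterations):
        fit = np.polyval(np.polyfit(x, work, degree), x)
        work = np.minimum(work, fit)
    return fit

x = np.linspace(0, 10, 500)
baseline = 5 + 0.5 * x + 0.05 * x**2                      # broad background
peaks = 10 * np.exp(-((x - 3) / 0.1) ** 2) \
      + 6 * np.exp(-((x - 7) / 0.1) ** 2)                 # Raman bands
raw = baseline + peaks
est = iterative_polyfit_baseline(raw)
corrected = raw - est
print(np.abs(est - baseline).mean())  # small residual vs the true background
```

Goldindec's contribution is precisely in replacing the naive clipping above with a cost function and automatically chosen parameters that stay accurate when peaks are large or numerous.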
Kessler, Nikolas; Walter, Frederik; Persicke, Marcus; Albaum, Stefan P; Kalinowski, Jörn; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W
2014-01-01
Adduct formation, fragmentation events and matrix effects impose special challenges to the identification and quantitation of metabolites in LC-ESI-MS datasets. An important step in compound identification is the deconvolution of mass signals. During this processing step, peaks representing adducts, fragments, and isotopologues of the same analyte are allocated to a distinct group, in order to separate peaks from coeluting compounds. From these peak groups, neutral masses and pseudo spectra are derived and used for metabolite identification via mass decomposition and database matching. Quantitation of metabolites is hampered by matrix effects and nonlinear responses in LC-ESI-MS measurements. A common approach to correct for these effects is the addition of a U-13C-labeled internal standard and the calculation of mass isotopomer ratios for each metabolite. Here we present a new web-platform for the analysis of LC-ESI-MS experiments. ALLocator covers the workflow from raw data processing to metabolite identification and mass isotopomer ratio analysis. The integrated processing pipeline for spectra deconvolution "ALLocatorSD" generates pseudo spectra and automatically identifies peaks emerging from the U-13C-labeled internal standard. Information from the latter improves mass decomposition and annotation of neutral losses. ALLocator provides an interactive and dynamic interface to explore and enhance the results in depth. Pseudo spectra of identified metabolites can be stored in user- and method-specific reference lists that can be applied on succeeding datasets. The potential of the software is exemplified in an experiment, in which abundance fold-changes of metabolites of the l-arginine biosynthesis in C. glutamicum type strain ATCC 13032 and l-arginine producing strain ATCC 21831 are compared. Furthermore, the capability for detection and annotation of uncommon large neutral losses is shown by the identification of (γ-)glutamyl dipeptides in the same strains. 
ALLocator is available online at: https://allocator.cebitec.uni-bielefeld.de. A login is required but is freely available.
Early-type galaxies: Automated reduction and analysis of ROSAT PSPC data
NASA Technical Reports Server (NTRS)
Mackie, G.; Fabbiano, G.; Harnden, F. R., Jr.; Kim, D.-W.; Maggio, A.; Micela, G.; Sciortino, S.; Ciliegi, P.
1996-01-01
Preliminary results for early-type galaxies that will be part of a galaxy catalog to be derived from the complete ROSAT database are presented. The stored data were reduced and analyzed by an automatic pipeline. This pipeline is based on a command-language script. The important features of the pipeline include new data time-screening to maximize the signal-to-noise ratio of faint point-like sources, source detection via a wavelet algorithm, and the identification of sources with objects from existing catalogs. The pipeline outputs include reduced images, contour maps, surface brightness profiles, spectra, and color and hardness ratios.
NASA Astrophysics Data System (ADS)
Russell, John L.; Campbell, John L.; Boyd, Nicholas I.; Dias, Johnny F.
2018-02-01
The newly developed GUMAP software creates element maps from OMDAQ list mode files, displays these maps individually or collectively, and facilitates on-screen definitions of specified regions from which a PIXE spectrum can be built. These include a free-hand region defined by moving the cursor. The regional charge is entered automatically into the spectrum file in a new GUPIXWIN-compatible format, enabling a GUPIXWIN analysis of the spectrum. The code defaults to the OMDAQ dead time treatment but also facilitates two other methods for dead time correction in sample regions with count rates different from the average.
OpenMSI Arrayed Analysis Tools v2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOWEN, BENJAMIN; RUEBEL, OLIVER; DE ROND, TRISTAN
2017-02-07
Mass spectrometry imaging (MSI) enables high-resolution spatial mapping of biomolecules in samples and is a valuable tool for the analysis of tissues from plants and animals, microbial interactions, high-throughput screening, drug metabolism, and a host of other applications. This is accomplished by desorbing molecules from the surface at spatially defined locations, using a laser or ion beam. These ions are analyzed by a mass spectrometer and collected into an MSI 'image', a dataset containing unique mass spectra from the sampled spatial locations. MSI is used in a diverse and increasing number of biological applications. The OpenMSI Arrayed Analysis Tool (OMAAT) is a new software method that addresses the challenges of analyzing spatially defined samples in large MSI datasets by providing support for automatic sample position optimization and ion selection.
Zhang, Chao; Guo, Xiaofei; Cai, Wenqian; Ma, Yue; Zhao, Xiaoyan
2015-04-01
The binding characteristics and protective capacity of cyanidin (Cy) and cyanidin-3-glucoside (C3G) toward calf thymus DNA were explored for the first time. Cy and C3G gave a bathochromic shift in the ultraviolet-visible spectra of the DNA, indicating the formation of DNA-Cy and DNA-C3G complexes. The complexes were formed by an intercalative binding mode based on the results of the fluorescence spectra and competitive binding analysis. Meanwhile, Cy and C3G protected the DNA from damage induced by the hydroxyl radical. The binding and protective capacities of C3G were stronger than those of Cy. Furthermore, the formation of the DNA-anthocyanin complexes was spontaneous, with hydrogen bonding and hydrophobic forces playing a key role. Hence, Cy and C3G could spontaneously protect DNA from damage induced by the hydroxyl radical. © 2015 Institute of Food Technologists®
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K
2014-01-01
Body fluids are a common and important type of forensic evidence. In particular, the identification of menstrual blood stains is often a key step during the investigation of rape cases. Here, we report on the application of near-infrared Raman microspectroscopy for differentiating menstrual blood from peripheral blood. We observed that the menstrual and peripheral blood samples have similar but distinct Raman spectra. Advanced statistical analysis of the multiple Raman spectra that were automatically (Raman mapping) acquired from the 40 dried blood stains (20 donors for each group) allowed us to build a classification model with maximum (100%) sensitivity and specificity. We also demonstrated that despite certain common constituents, menstrual blood can be readily distinguished from vaginal fluid. All of the classification models were verified using cross-validation methods. The proposed method overcomes the problems associated with currently used biochemical methods, which are destructive, time-consuming and expensive. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bainbridge, Matthew B.; Webb, John K.
2017-06-01
A new and automated method is presented for the analysis of high-resolution absorption spectra. Three established numerical methods are unified into one `artificial intelligence' process: a genetic algorithm (Genetic Voigt Profile FIT, gvpfit); non-linear least-squares with parameter constraints (vpfit); and Bayesian model averaging (BMA). The method has broad application but here we apply it specifically to the problem of measuring the fine structure constant at high redshift. For this we need objectivity and reproducibility. gvpfit is also motivated by the importance of obtaining a large statistical sample of measurements of Δα/α. Interactive analyses are both time-consuming and complex, and automation makes obtaining a large sample feasible. In contrast to previous methodologies, we use BMA to derive results using a large set of models and show that this procedure is more robust than a human picking a single preferred model, since BMA avoids the systematic uncertainties associated with model choice. Numerical simulations provide stringent tests of the whole process and we show using both real and simulated spectra that the unified automated fitting procedure outperforms a human interactive analysis. The method should be invaluable in the context of future instrumentation like ESPRESSO on the VLT and indeed future ELTs. We apply the method to the zabs = 1.8389 absorber towards the zem = 2.145 quasar J110325-264515. The derived constraint of Δα/α = 3.3 ± 2.9 × 10-6 is consistent with no variation and also consistent with the tentative spatial variation reported in Webb et al. and King et al.
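The line shape at the heart of such fitting is the Voigt profile (a Gaussian-Lorentzian convolution). A common computational stand-in, sketched here with hypothetical line parameters, is the pseudo-Voigt approximation; gvpfit itself fits true Voigt profiles, so this is only an illustration of the profile shape being modelled:

```python
import numpy as np

def pseudo_voigt(x, center, fwhm, eta):
    """Pseudo-Voigt approximation: eta * Lorentzian + (1 - eta) * Gaussian,
    both unit-height with a shared FWHM; a common stand-in for the true
    Voigt convolution."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gamma = fwhm / 2
    g = np.exp(-((x - center) ** 2) / (2 * sigma ** 2))
    l = gamma ** 2 / ((x - center) ** 2 + gamma ** 2)
    return eta * l + (1 - eta) * g

# Hypothetical absorption line on a normalized quasar continuum.
wl = np.linspace(5000.0, 5002.0, 2001)               # wavelength grid (Å)
profile = 1.0 - 0.8 * pseudo_voigt(wl, 5001.0, 0.2, 0.5)
print(wl[np.argmin(profile)])  # line centre
```

An automated fitter such as gvpfit must decide how many such components an absorber needs, which is exactly the model-choice problem that the BMA step addresses.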
Kelstrup, Christian D.; Frese, Christian; Heck, Albert J. R.; Olsen, Jesper V.; Nielsen, Michael L.
2014-01-01
Unambiguous identification of tandem mass spectra is a cornerstone in mass-spectrometry-based proteomics. As the study of post-translational modifications (PTMs) by means of shotgun proteomics progresses in depth and coverage, the ability to correctly identify PTM-bearing peptides is essential, increasing the demand for advanced data interpretation. Several PTMs are known to generate unique fragment ions during tandem mass spectrometry, the so-called diagnostic ions, which unequivocally identify a given mass spectrum as related to a specific PTM. Although such ions offer tremendous analytical advantages, algorithms to decipher MS/MS spectra for the presence of diagnostic ions in an unbiased manner are currently lacking. Here, we present a systematic spectral-pattern-based approach for the discovery of diagnostic ions and new fragmentation mechanisms in shotgun proteomics datasets. The developed software tool is designed to analyze large sets of high-resolution peptide fragmentation spectra independent of the fragmentation method, instrument type, or protease employed. To benchmark the software tool, we analyzed large higher-energy collisional dissociation (HCD) datasets of samples containing phosphorylation, ubiquitylation, SUMOylation, formylation, and lysine acetylation. Using the developed software tool, we were able to identify known diagnostic ions by comparing histograms of modified and unmodified peptide spectra. Because the investigated tandem mass spectra were acquired with high mass accuracy, unambiguous interpretation and determination of the chemical composition for the majority of detected fragment ions was feasible. Collectively we present a freely available software tool that allows for comprehensive and automatic analysis of analogous product ions in tandem mass spectra and systematic mapping of fragmentation mechanisms related to common amino acids. PMID:24895383
Automatic Classification of Time-variable X-Ray Sources
NASA Astrophysics Data System (ADS)
Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.
2014-05-01
To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ~97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7-500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
Automatic recognition of coronal type II radio bursts: The ARBIS 2 method and first observations
NASA Astrophysics Data System (ADS)
Lobzin, Vasili; Cairns, Iver; Robinson, Peter; Steward, Graham; Patterson, Garth
Major space weather events such as solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can potentially be used for real-time space weather forecasts. Type II radio bursts are produced near the local plasma frequency and its harmonic by fast electrons accelerated by a shock wave moving through the corona and solar wind with a typical speed of 1000 km s-1. The coronal bursts have dynamic spectra with frequency gradually falling with time and durations of several minutes. We present a new method developed to detect type II coronal radio bursts automatically and describe its implementation in an extended Automated Radio Burst Identification System (ARBIS 2). Preliminary tests of the method with spectra obtained in 2002 show that the performance of the current implementation is quite high, ~80%, while the probability of false positives is reasonably low, with one false positive per 100-200 hr for high solar activity and less than one false event per 10000 hr for low solar activity periods. The first automatically detected coronal type II radio bursts are also presented. ARBIS 2 is now operational with IPS Radio and Space Services, providing email alerts and event lists internationally.
Wavelength calibration of arc spectra using intensity modelling
NASA Astrophysics Data System (ADS)
Balona, L. A.
2010-12-01
Wavelength calibration for astronomical spectra usually involves the use of different arc lamps for different resolving powers to reduce the problem of line blending. We present a technique which eliminates the necessity of different lamps. A lamp producing a very rich spectrum, normally used only at high resolving powers, can be used at the lowest resolving power as well. This is accomplished by modelling the observed arc spectrum and solving for the wavelength calibration as part of the modelling procedure. Line blending is automatically incorporated as part of the model. The method has been implemented and successfully tested on spectra taken with the Robert Stobie spectrograph of the Southern African Large Telescope.
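For contrast, the conventional centroid-based wavelength solution that intensity modelling improves upon is a least-squares polynomial fit of laboratory wavelengths against measured line centroids in pixels. The dispersion relation and noise level below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed true dispersion relation lambda(p) = c2*p**2 + c1*p + c0 (hypothetical).
true_coeffs = [1.5e-5, 0.95, 4000.0]
pixels = np.array([100.0, 480.0, 900.0, 1300.0, 1800.0])   # arc line centroids
lams = np.polyval(true_coeffs, pixels) + rng.normal(0.0, 0.01, pixels.size)

# Conventional wavelength solution: quadratic least-squares fit through
# (pixel centroid, laboratory wavelength) pairs.
fit_coeffs = np.polyfit(pixels, lams, 2)
resid = lams - np.polyval(fit_coeffs, pixels)
print(np.abs(resid).max())  # residuals at the injected few-mÅ noise level
```

The centroid step is where blended lines bias a conventional solution at low resolving power; modelling the full arc spectrum, as described above, folds the blending into the model instead.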
Principal component analysis of Raman spectra for TiO2 nanoparticle characterization
NASA Astrophysics Data System (ADS)
Ilie, Alina Georgiana; Scarisoareanu, Monica; Morjan, Ion; Dutu, Elena; Badiceanu, Maria; Mihailescu, Ion
2017-09-01
The Raman spectra of anatase/rutile mixed-phase Sn-doped TiO2 nanoparticles and undoped TiO2 nanoparticles, synthesised by laser pyrolysis, with nanocrystallite dimensions varying from 8 to 28 nm, were processed with custom-written software that applies Principal Component Analysis (PCA) to the measured spectra to verify the possibility of objective auto-characterization of nanoparticles from their vibrational modes. The photo-excited process of Raman scattering is very sensitive to material characteristics, especially in the case of nanomaterials, where more properties become relevant to the vibrational behaviour. We used PCA, a statistical procedure that performs eigenvalue decomposition of the data covariance, to automatically analyse each sample's measured Raman spectrum and to infer the correlation between nanoparticle dimensions, tin and carbon concentration, and their principal component values (PCs). This type of application can allow an approximation of the crystallite size, or tin concentration, only by measuring the Raman spectrum of the sample. The study of the loadings of the principal components provides information on the way the vibrational modes are affected by the nanoparticle features and on the spectral regions relevant for the classification.
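PCA of a spectral matrix reduces to an SVD of the mean-centred data. The sketch below uses a synthetic band whose position drifts with an assumed "crystallite size" (loosely mimicking the size-sensitive low-frequency anatase mode; all numbers are illustrative) and shows the first PC tracking that drift:

```python
import numpy as np

def pca(spectra, n_components=2):
    """PCA via SVD of the mean-centred data matrix.
    Returns per-sample scores and per-variable loadings."""
    centred = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components]
    return scores, loadings

rng = np.random.default_rng(3)
shift = np.linspace(100, 800, 400)            # Raman shift axis (cm^-1)
sizes = np.linspace(8, 28, 30)                # assumed crystallite sizes (nm)
# Synthetic band whose centre drifts linearly with size (hypothetical law).
spectra = np.array([np.exp(-((shift - (144 + 0.3 * (28 - s))) / 8) ** 2)
                    + 0.01 * rng.standard_normal(400) for s in sizes])

scores, loadings = pca(spectra)
corr = np.corrcoef(scores[:, 0], sizes)[0, 1]  # PC1 tracks the size-driven shift
print(abs(corr))
```

Inspecting `loadings` shows which spectral regions drive each component, which is the same diagnostic the abstract draws on to relate vibrational modes to nanoparticle features.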
Nixon, C; Anderson, T; Morris, L; McCavitt, A; McKinley, R; Yeager, D; McDaniel, M
1998-11-01
The intelligibility of female and male speech is equivalent under most ordinary living conditions. However, due to small differences between their acoustic speech signals, called speech spectra, one can be more or less intelligible than the other in certain situations such as high levels of noise. Anecdotal information, supported by some empirical observations, suggests that some of the high intensity noise spectra of military aircraft cockpits may degrade the intelligibility of female speech more than that of male speech. In an applied research study, the intelligibility of female and male speech was measured in several high level aircraft cockpit noise conditions experienced in military aviation. In Part I, (Nixon CW, et al. Aviat Space Environ Med 1998; 69:675-83) female speech intelligibility measured in the spectra and levels of aircraft cockpit noises and with noise-canceling microphones was lower than that of the male speech in all conditions. However, the differences were small and only those at some of the highest noise levels were significant. Although speech intelligibility of both genders was acceptable during normal cruise noises, improvements are required in most of the highest levels of noise created during maximum aircraft operating conditions. These results are discussed in a Part I technical report. This Part II report examines the intelligibility in the same aircraft cockpit noises of vocoded female and male speech and the accuracy with which female and male speech in some of the cockpit noises were understood by automatic speech recognition systems. The intelligibility of vocoded female speech was generally the same as that of vocoded male speech. No significant differences were measured between the recognition accuracy of male and female speech by the automatic speech recognition systems. The intelligibility of female and male speech was equivalent for these conditions.
Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N
2009-12-15
The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks, with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. The algorithm was implemented in the FIRAN software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used to process mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da), which produce heavy multielement and multiply charged ions. Application of TMDS unambiguously identified the monomers present in the polymers, consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to data on the Suwannee River fulvic acid (FA) has proven its unique capacity for analyzing spectra with high peak density: it identified not only the known small building blocks in the structure of FA, such as CH(2), H(2), C(2)H(2)O and O, but also a heavier unit at 154.027 amu.
The latter was identified for the first time and assigned the formula C(7)H(6)O(4), consistent with the structure of dihydroxybenzoic acids. The presence of these compounds in the structure of FA had previously been suggested but never proven directly. It was concluded that application of the TMDS algorithm opens new horizons in unfolding the molecular complexity of NOM and other natural products.
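The core of the TMDS idea, counting every pairwise mass difference in a spectrum so that repeating building blocks (CH2, H2, C2H2O, ...) stand out as the most frequent differences, can be sketched in a few lines. This is an illustrative reconstruction, not the FIRAN implementation; the function name, tolerance handling, and toy peak list are ours.

```python
from collections import Counter

def tmds(peak_masses, tol=0.001, max_diff=200.0):
    """Total mass difference statistics: count every pairwise mass
    difference (binned to `tol` Da) so that repeating building
    blocks show up as the most frequent differences."""
    counts = Counter()
    masses = sorted(peak_masses)
    for i, m1 in enumerate(masses):
        for m2 in masses[i + 1:]:
            diff = m2 - m1
            if diff > max_diff:
                break  # peaks are sorted, so no later pair can qualify
            counts[round(diff / tol) * tol] += 1
    return counts

# toy spectrum: a CH2 (14.01565 Da) homologous series plus a stray peak
peaks = [200.0, 214.01565, 228.0313, 242.04695, 305.1]
top_diff, top_count = tmds(peaks).most_common(1)[0]
```

On this toy input the dominant difference is the CH2 repeat (about 14.016 Da), mirroring how the algorithm recovered the monomer masses of PSS and PMA.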
The Improvement of Automated Spectral Identification Tool ASERA
NASA Astrophysics Data System (ADS)
Yuan, Hailong; Zhang, Yanxia
2015-08-01
The regular survey of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) had acquired over four million spectra of celestial objects by the summer of 2014, covering about a third of the whole sky area. More spectra will be obtained as the survey projects (e.g., LAMOST, SDSS) continue. To make effective use of the massive spectral data, various advanced data analysis methods and technologies are in great demand. ASERA, A Spectrum Eye Recognition Assistant, provides a simple, convenient solution for the user to access spectra from LAMOST and SDSS, identify their types (QSO, galaxy, and various types of stars), and estimate their redshifts in an interactive graphical interface. The toolkit was initially designed especially for quasar identification: by shifting the quasar template to overlap the target spectrum interactively, one can easily find the best broad-emission-line position and the redshift value. Now, besides the quasar template, various templates for different types of galaxies (early type, late type, starburst, bulge, elliptical, and luminous red galaxies) and stars (O, B, A, F, G, K, M, WD, CV, double stars, and emission-line objects) have been added. We have also developed many new useful functionalities for inspecting and analyzing spectra, such as zooming, line fitting, smoothing, and automatic result saving. The target information from input catalogues, the data processing results from the pipeline, and the fitting parameters for the various types of templates can be presented at the same time. Several volume-processing components have been developed to support cooperation with the MySQL database, internet resources, and SSAP services. ASERA will be a strong helper for astronomers in recognizing spectra.
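ASERA's interactive template shifting amounts to trying redshifts until the rest-frame template best overlaps the observed spectrum. A minimal pure-Python sketch of that idea follows; the grid search, the normalized-overlap score, and the function names are our illustrative choices, not ASERA's internals.

```python
from math import exp

def interp(x, xs, ys):
    """linear interpolation, zero outside the tabulated range"""
    if x <= xs[0] or x >= xs[-1]:
        return 0.0
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return 0.0

def best_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux, z_grid):
    """Shift a rest-frame template through a redshift grid and keep
    the z that maximizes the normalized overlap with the spectrum."""
    best_z, best_score = z_grid[0], float("-inf")
    for z in z_grid:
        shifted = [interp(w / (1.0 + z), tmpl_wave, tmpl_flux)
                   for w in obs_wave]
        norm = sum(f * f for f in shifted) ** 0.5 or 1.0
        score = sum(o * s for o, s in zip(obs_flux, shifted)) / norm
        if score > best_score:
            best_z, best_score = z, score
    return best_z

# toy case: a Lyman-alpha-like line at 1216 A observed at 2432 A -> z = 1
tmpl_wave = [1100.0 + 10.0 * i for i in range(21)]
tmpl_flux = [exp(-((w - 1216.0) / 20.0) ** 2) for w in tmpl_wave]
obs_wave = [2300.0 + 10.0 * i for i in range(31)]
obs_flux = [exp(-((w - 2432.0) / 40.0) ** 2) for w in obs_wave]
z_est = best_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux,
                      [0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20])
```

On the toy data the grid search recovers z = 1.00, the redshift at which the shifted template line lands exactly on the observed one.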
SIG. Signal Processing, Analysis, & Display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, J.; Lager, D.; Azevedo, S.
1992-01-22
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals, including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
Signal Processing, Analysis, & Display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, Darrell; Azevado, Stephen
1986-06-01
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals, including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
DDD: Dynamic Database for Diatomics
NASA Technical Reports Server (NTRS)
Schwenke, David
2004-01-01
We have developed a web-based database containing spectra of diatomic molecules. All data are computed from first principles, and if a user requests data for a molecule/ion that is not in the database, new calculations are automatically carried out on that species. Rotational, vibrational, and electronic transitions are included. Different levels of accuracy can be selected, from qualitatively correct to the best calculations that can be carried out. The user can view and modify spectroscopic constants, view potential energy curves, download detailed high-temperature linelists, or view synthetic spectra.
Multichannel Detection in High-Performance Liquid Chromatography.
ERIC Educational Resources Information Center
Miller, James C.; And Others
1982-01-01
A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…
Simultaneous 19F-1H medium resolution NMR spectroscopy for online reaction monitoring
NASA Astrophysics Data System (ADS)
Zientek, Nicolai; Laurain, Clément; Meyer, Klas; Kraume, Matthias; Guthausen, Gisela; Maiwald, Michael
2014-12-01
Medium resolution nuclear magnetic resonance (MR-NMR) spectroscopy is currently a fast-developing field, which has an enormous potential to become an important analytical tool for reaction monitoring, in hyphenated techniques, and for systematic investigations of complex mixtures. The recent developments of innovative MR-NMR spectrometers are therefore remarkable due to their possible applications in quality control, education, and process monitoring. MR-NMR spectroscopy can beneficially be applied for fast, non-invasive, and volume-integrating analyses under harsh environmental conditions. Within this study, a simple 1/16″ fluorinated ethylene propylene (FEP) tube with an ID of 0.04″ (1.02 mm) was used as a flow cell, in combination with a 5 mm glass Dewar tube, inserted into a benchtop MR-NMR spectrometer with Larmor frequencies of 43.32 MHz for 1H and 40.68 MHz for 19F. For the first time, quasi-simultaneous proton and fluorine NMR spectra were recorded as a series of alternating 19F and 1H single-scan spectra along the reaction time coordinate of a homogeneously catalysed esterification model reaction containing fluorinated compounds. The results were compared to quantitative NMR spectra from a hyphenated 500 MHz online NMR instrument for validation. Automation of handling, pre-processing, and analysis of NMR data becomes increasingly important for process monitoring applications of online NMR spectroscopy and for its technical and practical acceptance. Thus, NMR spectra were automatically baseline corrected and phased using the minimum entropy method. Data analysis schemes were designed to be based on simple direct integration or first-principle line fitting, with the aim that the analysis directly reveals molar concentrations from the spectra.
Finally, the performance of the 1/16″ FEP tube set-up with an ID of 1.02 mm was characterised with regard to the limit of quantification (LOQ(1H) = 0.335 mol L-1 and LOQ(19F) = 0.130 mol L-1 for trifluoroethanol in D2O, single scan) and maximum quantitative flow rates of up to 0.3 mL min-1. Thus, a series of single-scan 19F and 1H NMR spectra acquired with this simple set-up already presents a valuable basis for quantitative reaction monitoring.
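The minimum-entropy phasing mentioned above rotates the complex spectrum and scores the real part: a well-phased absorption spectrum has a concentrated first derivative (low entropy) and no large negative excursions. A schematic zero-order-only version is sketched below; the grid search, the penalty weight, and the synthetic Lorentzian line are our illustrative choices, not the authors' implementation.

```python
import cmath
import math

def real_part(spectrum, phi):
    """zero-order phase correction: rotate by -phi and take the real part"""
    rot = cmath.exp(-1j * phi)
    return [(pt * rot).real for pt in spectrum]

def objective(r):
    """entropy of the normalized |first derivative| plus a penalty on
    negative intensity, the criterion of minimum-entropy autophasing"""
    deriv = [abs(r[i + 1] - r[i]) for i in range(len(r) - 1)]
    total = sum(deriv) or 1.0
    h = -sum((d / total) * math.log(d / total) for d in deriv if d > 0)
    penalty = sum(v * v for v in r if v < 0.0)
    return h + 1.0e6 * penalty  # weight chosen for this toy example

def autophase(spectrum, n_steps=360):
    """grid search over the zero-order phase for the minimum objective"""
    step = 2.0 * math.pi / n_steps
    best_k = min(range(n_steps),
                 key=lambda k: objective(real_part(spectrum, k * step)))
    return best_k * step

# synthetic complex Lorentzian line, de-phased by a known 0.7 rad
line = [1.0 / (1.0 + 1j * x) for x in range(-50, 51)]
dephased = [pt * cmath.exp(1j * 0.7) for pt in line]
recovered = autophase(dephased)
```

On this synthetic line the grid search recovers the known 0.7 rad phase error to within the grid resolution; a production routine would also fit a first-order (frequency-dependent) phase term.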
Intelligent Unmanned Monitoring of Remediated Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emile Fiesler, Ph.D.
During this Phase I project, IOS demonstrated the feasibility of combining digital signal processing and neural network analysis to analyze spectral signals from pure samples of several typical contaminants. We fabricated and tested a prototype system by automatically analyzing Raman spectral data taken in the Vadose zone at the 321 M site in the M area of DOE's Savannah River Site in South Carolina. This test demonstration proved the ability of IOS's technology to detect the target contaminants, tetrachloroethylene (PCE) and trichloroethylene (TCE), in isolation, and to detect the spectra of these contaminants in real-world noisy samples taken from a mixture of materials obtained from this typical remediation target site.
Defects and anharmonicity induced electron spectra of YBa2Cu3O7-δ superconductors
NASA Astrophysics Data System (ADS)
Singh, Anu; Indu, B. D.
2018-05-01
The effects of defects and anharmonicities on the electron density of states (EDOS) have been studied in high-temperature superconductors (HTS) using the many-body quantum dynamical theory of electron Green's functions via a generalized Hamiltonian that includes the effects of electron-phonon interactions, anharmonicities, and point impurities. The automatic emergence of pairons and the temperature dependence of the EDOS appear as special features of the theory. The results thus obtained, and their numerical analysis for YBa2Cu3O7-δ superconductors, clearly demonstrate that the presence of defects, anharmonicities, and electron-phonon interactions modifies the behavior of the EDOS over a wide range of temperature.
Automatic Assignment of Methyl-NMR Spectra of Supramolecular Machines Using Graph Theory.
Pritišanac, Iva; Degiacomi, Matteo T; Alderson, T Reid; Carneiro, Marta G; Ab, Eiso; Siegal, Gregg; Baldwin, Andrew J
2017-07-19
Methyl groups are powerful probes for the analysis of structure, dynamics, and function of supramolecular assemblies, using both solution- and solid-state NMR. Widespread application of the methodology has been limited by the challenges associated with assigning spectral resonances to specific locations within a biomolecule. Here, we present Methyl Assignment by Graph Matching (MAGMA) for the automatic assignment of methyl resonances. A graph matching protocol examines all possibilities for each resonance in order to determine an exact assignment that includes a complete description of any ambiguity. MAGMA gives 100% accuracy in confident assignments when tested against both synthetic data and 9 cross-validated examples using both solution- and solid-state NMR data. We show that this remarkable accuracy enables a user to distinguish between alternative protein structures. In a drug discovery application on HSP90, we show the method can rapidly and efficiently distinguish between possible ligand binding modes. By providing an exact and robust solution to methyl resonance assignment, MAGMA can facilitate significantly accelerated studies of supramolecular machines using methyl-based NMR spectroscopy.
Cleaning HI Spectra Contaminated by GPS RFI
NASA Astrophysics Data System (ADS)
Sylvia, Kamin; Hallenbeck, Gregory L.; Undergraduate ALFALFA Team
2016-01-01
The NUDET systems aboard GPS satellites utilize radio waves to communicate information regarding surface nuclear events. The system tests appear in spectra as RFI (radio frequency interference) at 1381 MHz, which contaminates observations of extragalactic HI (atomic hydrogen) signals at 50-150 Mpc. Test durations last roughly 20-120 seconds, and tests can occur upwards of 30 times during a single night of observing. The disruption essentially renders the corresponding HI spectra useless. We present a method that automatically removes RFI in HI spectra caused by these tests. By capitalizing on the GPS system's short test durations and predictable frequency appearance, we are able to devise a method of identifying times containing compromised data records. By reevaluating the remaining data, we are able to recover clean spectra while sacrificing little in terms of sensitivity to extragalactic signals. This method has been tested on 500+ spectra taken by the Undergraduate ALFALFA Team (UAT), in which it successfully identified and removed all sources of GPS RFI. It will also be used to eliminate RFI in the upcoming Arecibo Pisces-Perseus Supercluster Survey (APPSS). This work has been supported by NSF grant AST-1211005.
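The flagging idea described above, dropping the short time records in which the known GPS channel flares up and averaging the rest, can be sketched as follows. This is a hypothetical reconstruction: the median/MAD outlier test, the function name, and the toy records are ours, not the UAT pipeline's.

```python
def clean_spectrum(records, rfi_channel, n_sigma=5.0):
    """Average per-second data records into one spectrum, discarding
    records whose power in the known GPS channel (~1381 MHz) is a
    robust outlier by a median/MAD test."""
    power = [rec[rfi_channel] for rec in records]
    med = sorted(power)[len(power) // 2]
    mad = sorted(abs(p - med) for p in power)[len(power) // 2] or 1e-12
    keep = [rec for rec, p in zip(records, power)
            if abs(p - med) / (1.4826 * mad) < n_sigma]
    n_chan = len(records[0])
    avg = [sum(rec[i] for rec in keep) / len(keep) for i in range(n_chan)]
    return avg, len(keep)

# 8 clean records plus 2 contaminated by a GPS test in channel 2
clean = [[1.0, 1.0, 1.0, 1.0, 1.0] for _ in range(8)]
bad = [[1.0, 1.0, 50.0, 1.0, 1.0] for _ in range(2)]
spectrum, n_kept = clean_spectrum(clean + bad, rfi_channel=2)
```

Because the RFI is confined to a known frequency and short durations, only the affected records are discarded, so the averaged spectrum loses little sensitivity.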
Automatic classification of time-variable X-ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, Kitty K.; Farrell, Sean; Murphy, Tara
2014-05-01
To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources, and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest-derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
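Two of the post-processing steps in this record, assigning each source its most probable class and flagging sources with a small classification margin, are classifier-agnostic and easy to sketch. The dictionary layout, the margin cutoff, and the class names below are our illustrative choices, not the catalog's.

```python
def build_catalog(source_ids, prob_rows, class_names, margin_cut=0.2):
    """Turn per-source class probabilities (e.g. from a Random Forest)
    into a probabilistically classified catalog, flagging sources whose
    margin p(best class) - p(second-best class) is small."""
    catalog = []
    for sid, probs in zip(source_ids, prob_rows):
        ranked = sorted(range(len(probs)), key=probs.__getitem__,
                        reverse=True)
        margin = probs[ranked[0]] - probs[ranked[1]]
        catalog.append({"id": sid,
                        "class": class_names[ranked[0]],
                        "prob": probs[ranked[0]],
                        "margin": margin,
                        "ambiguous": margin < margin_cut})
    return catalog

# hypothetical 7-class probabilities for two sources
classes = ["AGN", "CV", "GRB", "SSS", "star", "ULX", "XRB"]
cat = build_catalog(["srcA", "srcB"],
                    [[0.05, 0.02, 0.01, 0.02, 0.80, 0.05, 0.05],
                     [0.30, 0.05, 0.05, 0.05, 0.05, 0.25, 0.25]],
                    classes)
```

A confident source ("srcA") gets a clean label, while a low-margin source ("srcB") is flagged for follow-up, which is how anomaly candidates can be surfaced from a probabilistic catalog.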
Improving labeling efficiency in automatic quality control of MRSI data.
Pedrosa de Barros, Nuno; McKinley, Richard; Wiest, Roland; Slotboom, Johannes
2017-12-01
To improve the efficiency of the labeling task in automatic quality control of MR spectroscopy imaging data, 28,432 short and long echo time (TE) spectra (1.5 tesla; point resolved spectroscopy (PRESS); repetition time (TR) = 1,500 ms) from 18 different brain tumor patients were labeled by two experts as either accept or reject, depending on their quality. For each spectrum, 47 signal features were extracted. The data were then used to run several simulations and test an active learning approach using uncertainty sampling. The performance of the classifiers was evaluated as a function of the number of patients in the training set, the number of spectra in the training set, and a parameter α used to control the level of classification uncertainty required for a new spectrum to be selected for labeling. The results showed that the proposed strategy allows reductions of up to 72.97% for short TE and 62.09% for long TE in the amount of data that needs to be labeled, without significant impact on classification accuracy. Further reductions are possible with a significant but minimal impact on performance. Active learning using uncertainty sampling is an effective way to increase the labeling efficiency for training automatic quality control classifiers. Magn Reson Med 78:2399-2405, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
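Uncertainty sampling, the selection rule at the heart of this study, only asks the expert about spectra the classifier is unsure of. A minimal sketch for a binary accept/reject classifier follows; the function name and the probability list are illustrative, and `alpha` plays the role of the uncertainty parameter described above.

```python
def select_for_labeling(p_accept, alpha=0.2):
    """Uncertainty sampling for a binary accept/reject quality
    classifier: spectra whose predicted accept probability lies within
    `alpha` of the 0.5 decision boundary are sent to the expert; the
    rest are labeled automatically with the classifier's decision."""
    ask_expert, auto_label = [], []
    for idx, p in enumerate(p_accept):
        if abs(p - 0.5) < alpha:
            ask_expert.append(idx)
        else:
            auto_label.append((idx, p >= 0.5))
    return ask_expert, auto_label

# predicted accept probabilities for four spectra
ask, auto = select_for_labeling([0.95, 0.55, 0.10, 0.40])
```

Only the two borderline spectra go to the expert; shrinking `alpha` trades a smaller labeling workload against a higher risk of auto-labeling errors, which is the trade-off the paper quantifies.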
SpecDB: The AAVSO’s Public Repository for Spectra of Variable Stars
NASA Astrophysics Data System (ADS)
Kafka, Stella; Weaver, John; Silvis, George; Beck, Sara
2018-01-01
SpecDB is the American Association of Variable Star Observers (AAVSO) spectral database. Accessible to any astronomer with the capability to perform spectroscopy, SpecDB provides an unprecedented scientific opportunity for amateur and professional astronomers around the globe. Backed by the Variable Star Index, one of the most utilized variable star catalogs, SpecDB is expected to become one of the world's leading databases of its kind. Once verified by a team of expert spectroscopists, an observer can upload spectra of variable star targets easily and efficiently. Uploaded spectra can then be searched for, previewed, and downloaded for inclusion in publications. Close community development and involvement will ensure a user-friendly and versatile database, compatible with the needs of 21st century astrophysics. Observations of 1D spectra are submitted as FITS files. All spectra are required to be preprocessed with wavelength calibration and dark subtraction; bias and flat-field corrections are strongly recommended. First-time observers are required to submit a spectrum of a standard (non-variable) star to be checked for errors in technique or equipment. Regardless of user validation, FITS headers must include several value cards detailing the observation, as well as information regarding the observer, equipment, and observing site in accordance with existing AAVSO records. This enforces consistency and provides the necessary details for follow-up analysis. Requirements are provided to users in a comprehensive guidebook and accompanying technical manual. Upon submission, FITS headers are automatically checked for errors, and any anomalies are immediately fed back to the user. Successful candidates can then submit at will, including multiple simultaneous submissions. All published observations can be searched and interactively previewed. Community involvement will be enhanced by an associated forum where users can discuss observation techniques and suggest improvements to the database.
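The automatic header check described above can be sketched as a simple validation pass over the FITS header cards. The required-card list and the date check below are our own illustrative assumptions, not the AAVSO's actual requirements.

```python
REQUIRED_CARDS = ["OBJECT", "DATE-OBS", "OBSERVER", "INSTRUME",
                  "SITELAT", "SITELONG"]  # illustrative, not AAVSO's list

def check_header(header):
    """Return a list of problems found in a submitted FITS header
    (given here as a dict of card -> value); an empty list means the
    automatic check passes and the result is fed back to the submitter."""
    problems = ["missing card: " + card for card in REQUIRED_CARDS
                if card not in header]
    date = header.get("DATE-OBS")
    if date is not None and "T" not in str(date):
        problems.append("DATE-OBS is not ISO 8601 (YYYY-MM-DDThh:mm:ss)")
    return problems

ok = check_header({"OBJECT": "SS Cyg",
                   "DATE-OBS": "2018-01-01T03:14:15",
                   "OBSERVER": "A. Observer",
                   "INSTRUME": "LISA spectrograph",
                   "SITELAT": 42.0, "SITELONG": -71.0})
bad = check_header({"OBJECT": "SS Cyg", "DATE-OBS": "2018-01-01"})
```

A complete header passes with no problems, while an incomplete one gets an itemized list of anomalies to return to the user, which is the immediate-feedback behavior the abstract describes.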
Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette
2017-12-01
J-difference editing is often used to select resonances of compounds with coupled spins in 1H MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In-vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and by alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In-vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
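The frequency part of the correlation criterion can be sketched in a few lines: try candidate shifts and keep the one maximizing the normalized scalar product between the two spectra. This sketch handles only integer-point frequency shifts on real spectra; the published method optimizes phase and frequency simultaneously, and the function names are ours.

```python
from math import exp

def shift_points(spec, s):
    """shift a spectrum by s points, zero-padding the vacated edge"""
    n = len(spec)
    if s >= 0:
        return [0.0] * s + spec[:n - s]
    return spec[-s:] + [0.0] * (-s)

def align_shift(ref, spec, max_shift):
    """Return the integer frequency shift (in points) that maximizes
    the normalized scalar product between `ref` and the shifted `spec`,
    i.e. the correlation criterion used for alignment."""
    def ncorr(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncorr(ref, shift_points(spec, s)))

# synthetic peak drifted by +4 points relative to the reference
ref = [exp(-((i - 30) / 3.0) ** 2) for i in range(64)]
drifted = [exp(-((i - 34) / 3.0) ** 2) for i in range(64)]
best = align_shift(ref, drifted, max_shift=10)
```

The search recovers the -4 point correction needed to bring the drifted peak back onto the reference, and because the score integrates over the whole spectral region it remains stable at low SNR, unlike single-point peak alignment.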
Einstein SSS+MPC observations of Seyfert type galaxies
NASA Technical Reports Server (NTRS)
Holt, S. S.; Turner, T. J.; Mushotzky, R. F.; Weaver, K.
1989-01-01
The X-ray spectra of 27 Seyfert galaxies measured with the Solid State Spectrometer (SSS) onboard the Einstein Observatory are investigated. This new investigation features the use of simultaneous data from the Monitor Proportional Counter (MPC) and automatic correction for systematic effects in the SSS. The new results are that the best-fit single power-law indices agree with those previously reported, but that soft excesses are inferred for at least 20 percent of the measured spectra. The soft excesses are consistent with either an approximately 0.25 keV blackbody or Fe-L line emission.
The effect of combining two echo times in automatic brain tumor classification by MRS.
García-Gómez, Juan M; Tortajada, Salvador; Vidal, César; Julià-Sapé, Margarida; Luts, Jan; Moreno-Torres, Angel; Van Huffel, Sabine; Arús, Carles; Robles, Montserrat
2008-11-01
(1)H MRS is becoming an accurate, non-invasive technique for the initial examination of brain masses. We investigated whether the combination of single-voxel (1)H MRS at 1.5 T at two different echo times (TEs), short TE (PRESS or STEAM, 20-32 ms) and long TE (PRESS, 135-136 ms), improves the classification of brain tumors over using only one TE. A clinically validated dataset of 50 low-grade meningiomas, 105 aggressive tumors (glioblastomas and metastases), and 30 low-grade glial tumors (astrocytomas grade II, oligodendrogliomas, and oligoastrocytomas) was used to fit predictive models based on the combination of features from short-TE and long-TE spectra. A new approach that concatenates the two spectra consecutively was used to produce a single data vector, from which relevant features of the two TE spectra could be extracted by means of three algorithms: stepwise selection, reliefF, and principal components analysis. Least squares support vector machines and linear discriminant analysis were applied to fit the pairwise and multiclass classifiers, respectively. Significant differences in performance were found depending on whether short-TE, long-TE, or both spectra combined were used as input. In our dataset, to discriminate meningiomas, the combination of the two TE acquisitions produced optimal performance. To discriminate aggressive tumors from low-grade glial tumors, the use of the short-TE acquisition alone was preferable. The classifier development strategy used here lends itself to automated learning and test performance processes, which may be of use for future web-based multicentric classifier development studies. Copyright (c) 2008 John Wiley & Sons, Ltd.
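The "single data vector" step described above amounts to concatenating the short-TE and long-TE spectra before feature extraction. A minimal sketch follows; the per-spectrum unit-norm scaling is our own assumption (some normalization is needed so neither acquisition dominates, but the paper's exact scaling may differ).

```python
def combine_te(short_te, long_te):
    """Concatenate a short-TE and a long-TE spectrum consecutively
    into one data vector, unit-norm scaling each half so neither
    acquisition dominates later feature extraction."""
    def unit(v):
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        return [x / norm for x in v]
    return unit(short_te) + unit(long_te)

# toy spectra: two points at short TE, three at long TE
vec = combine_te([3.0, 4.0], [0.0, 5.0, 0.0])
```

The combined vector is then what feature-selection algorithms such as stepwise selection, reliefF, or PCA operate on, so features from either TE can be picked jointly.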
A novel method for single bacteria identification by Raman spectroscopy
NASA Astrophysics Data System (ADS)
Schultz, Emmanuelle; Simon, Anne-Catherine; Strola, Samy Andrea; Perenon, Rémi; Espagnon, Isabelle; Allier, Cédric; Claustre, Patricia; Jary, Dorothée.; Dinten, Jean-Marc
2014-03-01
In this paper we present results on rapid single-bacterium identification obtained with a low-cost and compact Raman spectrometer. We demonstrate that a 1-minute procedure, including the localization of a single bacterium, is sufficient to acquire a comprehensive Raman spectrum in the range of 600 to 3300 cm-1. Localization and detection of single bacteria are performed by means of lensfree imaging over a large field of view of 24 mm2. An excitation source of 532 nm and 30 mW illuminates single bacteria, and the Raman signal is collected in a Tornado Spectral Systems prototype spectrometer (HTVS technology). The acquisition time to record a single-bacterium spectrum is as low as 10 s owing to the high light throughput of this spectrometer. The spectrum processing comprises several steps, cosmic-spike removal, background subtraction, and gain normalization, to correct for residual induced fluorescence and substrate fluctuations. This allows a fine chemical fingerprint analysis to be obtained. We have recorded a total of 1200 spectra over 7 bacterial species (E. coli, Bacillus species, S. epidermidis, M. luteus, S. marcescens). Analysis of this database results in a high classification score of almost 90%. Hence we can conclude that our setup enables automatic recognition of bacteria among 7 different species. The speed and the sensitivity (<30 minutes for localization and spectra collection of 30 single bacteria) of our Raman spectrometer pave the way for high-throughput and non-destructive real-time bacteria identification assays. This compact and low-cost technology can benefit biomedical, clinical diagnostic, and environmental applications.
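The first processing step named above, cosmic-spike removal, is commonly done by comparing each point against a robust local estimate. The sketch below is a generic median/MAD despiker, our own illustrative choice rather than the authors' exact method.

```python
def remove_spikes(spectrum, window=5, factor=5.0):
    """Replace cosmic-ray spikes with the local median: a point is
    treated as a spike when it deviates from the median of its
    neighbors by more than `factor` times the local median absolute
    deviation (MAD)."""
    n = len(spectrum)
    out = list(spectrum)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        neigh = sorted(spectrum[lo:i] + spectrum[i + 1:hi])
        med = neigh[len(neigh) // 2]
        mad = sorted(abs(v - med) for v in neigh)[len(neigh) // 2] or 1e-12
        if abs(spectrum[i] - med) > factor * mad:
            out[i] = med  # spike: replace with the local median
    return out

# flat toy spectrum with one cosmic-ray spike
raw = [1.0] * 20
raw[10] = 100.0
cleaned = remove_spikes(raw)
```

Because cosmic spikes are narrow and large, they are replaced while genuine (broader) Raman bands are left untouched, after which background subtraction and gain normalization can proceed.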
NASA Astrophysics Data System (ADS)
Bhattacharjee, T.; Kumar, P.; Fillipe, L.
2018-02-01
Vibrational spectroscopy, especially FTIR and Raman, has shown enormous potential in disease diagnosis, especially in cancers. Their potential for detecting varied pathological conditions is regularly reported. However, to prove their applicability in clinics, large multi-center, multi-national studies need to be undertaken, and these will result in an enormous amount of data. A parallel effort to develop analytical methods, including user-friendly software that can quickly pre-process data and subject them to the required multivariate analysis, is warranted in order to obtain results in real time. This study reports a MATLAB-based script that can automatically import data; preprocess spectra (interpolation, derivatives, normalization); and then carry out Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) of the first 10 PCs, all with a single click. The software has been verified on data obtained from cell lines, animal models, and in vivo patient datasets, and gives results comparable to the Minitab 16 software. The software can import a variety of file extensions: .asc, .txt, .xls, and many others. Options to ignore noisy data, plot all possible graphs with PCA factors 1 to 5, and save loading factors, confusion matrices, and other parameters are also present. The software can provide results for a dataset of 300 spectra within 0.01 s. We believe that the software will be vital not only in clinical trials using vibrational spectroscopic data, but also in obtaining rapid results when these tools are translated into clinics.
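The PCA-then-LDA pipeline at the core of such software can be sketched compactly. The version below (in Python/NumPy rather than MATLAB) handles only the two-class case with a Fisher discriminant on the PC scores, and the synthetic spectra are our own illustrative data; it is a sketch of the technique, not the reported script.

```python
import numpy as np

def fit_pca_lda(X, y, n_pc=10):
    """Mean-center, project onto the first n_pc principal components
    (via SVD), then fit a two-class Fisher linear discriminant on the
    PC scores; returns a predict(spectra) -> 0/1 function."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T                      # loading vectors as columns
    scores = Xc @ P
    m0 = scores[y == 0].mean(axis=0)
    m1 = scores[y == 1].mean(axis=0)
    Sw = (np.cov(scores[y == 0], rowvar=False)
          + np.cov(scores[y == 1], rowvar=False))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(n_pc), m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return lambda Z: ((np.asarray(Z) - mu) @ P @ w > thresh).astype(int)

# synthetic demo: class-1 spectra carry an extra band in channels 20-24
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.1, size=(40, 50))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 20:25] += 5.0
predict = fit_pca_lda(X, y)
```

Reducing to 10 PCs before LDA is the same dimensionality-reduction choice the abstract describes; it keeps the discriminant well-conditioned when spectra have far more channels than there are samples.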
NASA Astrophysics Data System (ADS)
Steffen, S.; Otto, M.; Niewoehner, L.; Barth, M.; Brożek-Mucha, Z.; Biegstraaten, J.; Horváth, R.
2007-09-01
A gunshot residue sample collected from an object or a suspected person is automatically searched for gunshot-residue-relevant particles. Particle data (such as size, morphology, position on the sample for manual relocation, etc.) as well as the corresponding X-ray spectra and images are stored. According to these data, particles are classified by the analysis software into different groups: 'gunshot residue characteristic', 'consistent with gunshot residue' and environmental particles, respectively. Potential gunshot residue particles are manually checked and, if necessary, confirmed by the operating forensic scientist. As there are continuing developments on the ammunition market worldwide, it becomes more and more difficult to assign a detected particle to a particular ammunition brand. Likewise, the differentiation from environmental particles similar to gunshot residue is becoming more complex. To keep external conditions unchanged, gunshot residue particles were collected using a specially designed shooting device for the test shots, giving defined shooting distances between the weapon's muzzle and the target. The data obtained as X-ray spectra of a number of particles (3000 per ammunition brand) were reduced by Fast Fourier Transformation and subjected to a chemometric evaluation by means of regularized discriminant analysis. In addition to the scanning electron microscopy in combination with energy-dispersive X-ray microanalysis results, isotope ratio measurements based on inductively coupled plasma analysis with mass-spectrometric detection were carried out to provide a supplementary feature for an even lower risk of misclassification.
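The "FFT reduction, then regularized discriminant analysis" chain can be sketched as follows; the synthetic "brand spectra", the number of retained Fourier coefficients, and the shrinkage value 0.3 are all assumptions, and a hand-rolled Friedman-style shrinkage stands in for whatever RDA variant the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for X-ray spectra of two ammunition brands.
n, p, k = 30, 512, 16
base_a = np.sin(np.linspace(0, 8 * np.pi, p))
base_b = np.sin(np.linspace(0, 6 * np.pi, p))
A = base_a + rng.normal(0, 0.5, (n, p))
B = base_b + rng.normal(0, 0.5, (n, p))

def fft_reduce(spectra, k):
    # Dimensionality reduction: keep magnitudes of the first k FFT coefficients.
    return np.abs(np.fft.rfft(spectra, axis=1))[:, :k]

Fa, Fb = fft_reduce(A, k), fft_reduce(B, k)

def rda_score(x, F, lam=0.3):
    # Regularized discriminant analysis: shrink the class covariance toward
    # a scaled identity before inverting, then score by Mahalanobis distance.
    mu, C = F.mean(0), np.cov(F.T)
    C = (1 - lam) * C + lam * np.trace(C) / k * np.eye(k)
    d = x - mu
    return -d @ np.linalg.solve(C, d)   # higher = closer to this class

test_a = fft_reduce(base_a[None] + rng.normal(0, 0.5, (1, p)), k)[0]
assign_a = rda_score(test_a, Fa) > rda_score(test_a, Fb)
```

The shrinkage term is what makes the covariance invertible when the number of particles per brand is small relative to the feature count.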
Automatic Target Recognition for Hyperspectral Imagery
2012-03-01
[Front-matter figure-list residue; recoverable figure titles: NDVI representation; Vegetation Reflectance Spectra (taken directly from Eismann, 2011); Example NDVI Mean and Shade Spectrum Signatures.] To locate vegetation within an image, the normalized-difference vegetation index (NDVI) is applied. NDVI was first introduced by Rouse et al. while monitoring
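Since the surviving text applies NDVI to locate vegetation, here is the standard index definition as a tiny sketch; the reflectance values are invented for illustration.

```python
import numpy as np

# NDVI from near-infrared and red reflectance (standard definition):
# NDVI = (NIR - Red) / (NIR + Red); vegetation reflects strongly in NIR.
def ndvi(nir, red):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

vegetation = ndvi(0.50, 0.08)   # healthy vegetation: high NIR, low red
soil = ndvi(0.30, 0.25)         # bare soil: NIR and red are similar
```

High NDVI values flag vegetated pixels, which is why thresholding NDVI is a common first masking step in hyperspectral target-recognition pipelines.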
VSOP: the variable star one-shot project. I. Project presentation and first data release
NASA Astrophysics Data System (ADS)
Dall, T. H.; Foellmi, C.; Pritchard, J.; Lo Curto, G.; Allende Prieto, C.; Bruntt, H.; Amado, P. J.; Arentoft, T.; Baes, M.; Depagne, E.; Fernandez, M.; Ivanov, V.; Koesterke, L.; Monaco, L.; O'Brien, K.; Sarro, L. M.; Saviane, I.; Scharwächter, J.; Schmidtobreick, L.; Schütz, O.; Seifahrt, A.; Selman, F.; Stefanon, M.; Sterzik, M.
2007-08-01
Context: About 500 new variable stars enter the General Catalogue of Variable Stars (GCVS) every year. Most of them, however, lack spectroscopic observations, which remain critical for a correct assignment of the variability type and for the understanding of the object. Aims: The Variable Star One-shot Project (VSOP) is aimed at (1) providing the variability type and spectral type of all unstudied variable stars, (2) processing, publishing, and making the data available as automatically as possible, and (3) generating serendipitous discoveries. This first paper describes the project itself, the acquisition of the data, the dataflow, the spectroscopic analysis and the on-line availability of the fully calibrated and reduced data. We also present the results on the 221 stars observed during the first semester of the project. Methods: We used the high-resolution echelle spectrographs HARPS and FEROS at the ESO La Silla Observatory (Chile) to survey known variable stars. Once reduced by the dedicated pipelines, the radial velocities are determined from cross correlation with synthetic template spectra, and the spectral types are determined by an automatic minimum-distance matching to synthetic spectra, with traditional manual spectral typing cross-checks. The variability types are determined by manually evaluating the available light curves and the spectroscopy. In the future, a new automatic classifier, currently being developed by members of the VSOP team, based on these spectroscopic data and on the photometric classifier developed for the CoRoT and Gaia space missions, will be used. Results: We confirm or revise spectral types of 221 variable stars from the GCVS. We identify 26 previously unknown multiple systems, among them several visual binaries with spectroscopic binary individual components.
We present new individual results for the multiple systems V349 Vel and BC Gru, for the composite spectrum star V4385 Sgr, for the T Tauri star V1045 Sco, and for DM Boo which we re-classify as a BY Draconis variable. The complete data release can be accessed via the VSOP web site. Based on data obtained at the La Silla Observatory, European Southern Observatory, under program ID 077.D-0085.
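The automatic minimum-distance matching to synthetic spectra mentioned in the Methods can be sketched very simply; the three toy "templates", their spectral-type labels, and the noise level below are invented, not VSOP's template grid.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy template grid: one synthetic "spectrum" per spectral type, each with
# an absorption dip at a different location (illustrative assumptions).
wave = np.linspace(0, 1, 300)
templates = {
    "A0": 1.0 - 0.8 * np.exp(-0.5 * ((wave - 0.3) / 0.02) ** 2),
    "G2": 1.0 - 0.5 * np.exp(-0.5 * ((wave - 0.5) / 0.02) ** 2),
    "M5": 1.0 - 0.9 * np.exp(-0.5 * ((wave - 0.7) / 0.02) ** 2),
}

def classify(observed):
    # Minimum-distance matching: smallest summed squared residual wins.
    return min(templates, key=lambda t: np.sum((observed - templates[t]) ** 2))

obs = templates["G2"] + rng.normal(0, 0.02, wave.size)
best = classify(obs)
```

In practice the distance would be evaluated over a grid of templates in effective temperature, gravity, and metallicity, but the nearest-template principle is the same.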
G-mode analysis of the reflection spectra of 84 asteroids.
NASA Astrophysics Data System (ADS)
Birlan, M.; Barucci, M. A.; Fulchignoni, M.
1996-01-01
A revised version of the G-mode multivariate statistics (Coradini et al. 1977) has been used to analyse a sample of 84 asteroids. This sample of asteroids is described by 29 variables, namely 23 colours between 0.9 and 2.35 microns obtained from the database collected by Bell et al. (private communication), 5 colours between 0.3 and 0.85 microns from the ECAS survey (Zellner et al. 1985) and the revised IRAS albedo (Tedesco et al. 1992). The G-mode method allows the user to obtain an automatic classification of the asteroids in spectrally homogeneous groups. The role of the IR colours in separating the various groups is outlined, particularly with regard to the fine subdivision of the S and C taxonomical types.
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented in which several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry-standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set contains spectra ranging from strong to weak, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry-standard method and a human operator, and equal in performance to these on strong and medium-strong spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
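One way to picture the multiple-scale idea is to pick peaks at several signal-to-noise thresholds and keep those that recur across levels; the naive picker, the SNR levels, and the vote rule below are our own simplifications, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spectrum: three true peaks on a noisy baseline (positions invented).
x = np.zeros(500)
for pos, height in [(100, 8.0), (250, 3.0), (400, 1.5)]:
    x[pos] = height
x += np.abs(rng.normal(0, 0.3, x.size))

def pick_peaks(spectrum, snr):
    # Naive picker: points exceeding snr times a robust noise estimate.
    noise = np.median(np.abs(spectrum - np.median(spectrum)))
    return set(np.flatnonzero(spectrum > snr * noise))

# Multiple-scale combination: peaks found at several SNR levels are
# scored by how many levels agree, then thresholded on that vote.
levels = [3, 5, 8]
votes = {}
for snr in levels:
    for peak in pick_peaks(x, snr):
        votes[peak] = votes.get(peak, 0) + 1
combined = {peak for peak, v in votes.items() if v >= 2}
```

Requiring agreement across levels suppresses the spurious picks that a single permissive threshold would admit, while still retaining genuinely weak peaks.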
Chromospherically Active Stars in the RAdial Velocity Experiment (RAVE) Survey. I. The Catalog
NASA Astrophysics Data System (ADS)
Žerjal, M.; Zwitter, T.; Matijevič, G.; Strassmeier, K. G.; Bienaymé, O.; Bland-Hawthorn, J.; Boeche, C.; Freeman, K. C.; Grebel, E. K.; Kordopatis, G.; Munari, U.; Navarro, J. F.; Parker, Q. A.; Reid, W.; Seabroke, G.; Siviero, A.; Steinmetz, M.; Wyse, R. F. G.
2013-10-01
RAVE, the unbiased magnitude limited survey of southern sky stars, contained 456,676 medium-resolution spectra at the time of our analysis. Spectra cover the Ca II infrared triplet (IRT) range, which is a known indicator of chromospheric activity. Our previous work classified all spectra using locally linear embedding. It identified 53,347 cases with a suggested emission component in calcium lines. Here, we use a spectral subtraction technique to measure the properties of this emission. Synthetic templates are replaced by the observed spectra of non-active stars to bypass the difficult computations of non-local thermal equilibrium profiles of the line cores and stellar parameter dependence. We derive both the equivalent width of the excess emission for each calcium line on a 5 Å wide interval and their sum EWIRT for ~44,000 candidate active dwarf stars with signal-to-noise ratio >20, with no cuts on the basis of the source of their emission flux. From these, ~14,000 show a detectable chromospheric flux with at least a 2σ confidence level. Our set of active stars vastly enlarges previously known samples. Atmospheric parameters and, in some cases, radial velocities of active stars derived from automatic pipelines suffer from systematic shifts due to their shallower calcium lines. We re-estimate the effective temperature, metallicity, and radial velocities for candidate active stars. The overall distribution of activity levels shows a bimodal shape, with the first peak coinciding with non-active stars and the second with the pre-main-sequence cases. The catalog will be made publicly available with the next RAVE public data releases.
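The core measurement above, subtracting a non-active template and integrating the excess over a 5 Å window on a Ca II IRT line, can be sketched with toy profiles; the Gaussian line shapes and fill-in amplitude below are invented, and real spectra would first be velocity- and continuum-matched.

```python
import numpy as np

# Spectral subtraction sketch: candidate minus inactive template, then
# integrate the excess flux over a 5 Angstrom window on the 8498 A line.
wave = np.linspace(8480, 8520, 801)           # wavelength grid in Angstrom
dw = wave[1] - wave[0]
line = np.exp(-0.5 * ((wave - 8498.0) / 0.8) ** 2)
inactive = 1.0 - 0.6 * line                   # pure absorption profile
active = inactive + 0.3 * line                # chromospheric fill-in

window = np.abs(wave - 8498.0) <= 2.5         # the 5 A interval
excess = active - inactive
ew_excess = np.sum(excess[window]) * dw       # equivalent width, Angstrom
```

Using an observed inactive star as the template is what lets the method bypass non-LTE line-core modelling: the mismatch in the core cancels in the subtraction.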
Ubiquitinated Proteome: Ready for Global?*
Shi, Yi; Xu, Ping; Qin, Jun
2011-01-01
Ubiquitin (Ub) is a small and highly conserved protein that can covalently modify protein substrates. Ubiquitination is one of the major post-translational modifications that regulate a broad spectrum of cellular functions. The advancement of mass spectrometers as well as the development of new affinity purification tools has greatly expedited proteome-wide analysis of several post-translational modifications (e.g. phosphorylation, glycosylation, and acetylation). In contrast, large-scale profiling of lysine ubiquitination remains a challenge. Most recently, new Ub affinity reagents such as Ub remnant antibody and tandem Ub binding domains have been developed, allowing for relatively large-scale detection of several hundred lysine ubiquitination events in human cells. Here we review different strategies for the identification of ubiquitination sites and discuss several issues associated with data analysis. We suggest that careful interpretation and orthogonal confirmation of MS spectra are necessary to minimize false positive assignments by automatic searching algorithms. PMID:21339389
An interactive modular design for computerized photometry in spectrochemical analysis
NASA Technical Reports Server (NTRS)
Bair, V. L.
1980-01-01
A general functional description of totally automatic photometry of emission spectra is not available for an operating environment in which the sample compositions and analysis procedures are low-volume and non-routine. The advantages of using an interactive approach to computer control in such an operating environment are demonstrated. This approach includes modular subroutines selected at multiple-option, menu-style decision points. This style of programming is used to trace elemental determinations, including the automated reading of spectrographic plates produced by a 3.4 m Ebert mount spectrograph using a dc-arc in an argon atmosphere. The simplified control logic and modular subroutine approach facilitates innovative research and program development, yet is easily adapted to routine tasks. Operator confidence and control are increased by the built-in options including degree of automation, amount of intermediate data printed out, amount of user prompting, and multidirectional decision points.
Jiang, Zhi-Bo; Ren, Wei-Cong; Shi, Yuan-Yuan; Li, Xing-Xing; Lei, Xuan; Fan, Jia-Hui; Zhang, Cong; Gu, Ren-Jie; Wang, Li-Fei; Xie, Yun-Ying; Hong, Bin
2018-05-18
Sansanmycins (SS), one of several known uridyl peptide antibiotics (UPAs) possessing a unique chemical scaffold, showed a good inhibitory effect on the highly refractory pathogens Pseudomonas aeruginosa and Mycobacterium tuberculosis, especially on multi-drug resistant M. tuberculosis. This study employed high performance liquid chromatography-mass spectrometry detector (HPLC-MSD) ion trap and LTQ orbitrap tandem mass spectrometry (MS/MS) to explore sansanmycin analogues manually and automatically by re-analysis of the Streptomyces sp. SS fermentation broth. The structure-based manual screening method, based on analysis of the fragmentation pathway of known UPAs and on comparisons of the MS/MS spectra with that of sansanmycin A (SS-A), resulted in identifying twenty sansanmycin analogues, including twelve new structures (1-12). Furthermore, to explore sansanmycin analogues more deeply, we utilized a GNPS-based molecular networking workflow to re-analyze the HPLC-MS/MS data automatically. As a result, eight more new sansanmycins (13-20) were discovered. Compound 1 was found to lose two amino acids, residue 1 (AA 1 ) and (2S, 3S)-N 3 -methyl-2,3-diamino butyric acid (DABA), from the N-terminus, and compounds 6, 11 and 12 were found to contain a 2',3'-dehydrated 4',5'-enamine-3'-deoxyuridyl moiety, which has not been reported before. Interestingly, three trace components with a novel 5,6-dihydro-5'-aminouridyl group (16-18) were detected for the first time in the sansanmycin-producing strain. Their structures were primarily determined by detailed analysis of the MS/MS data. Compounds 8 and 10 were further confirmed by nuclear magnetic resonance (NMR) data, which proved the efficiency and accuracy of the HPLC-MS/MS method for exploration of novel UPAs. Compared with manual screening, the networking method can provide systematic visualization results. The manual screening and networking methods may complement each other to facilitate the mining of novel UPAs.
Copyright © 2018 Elsevier B.V. All rights reserved.
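The similarity step underlying GNPS-style molecular networking is a cosine score between binned MS/MS peak lists; the sketch below uses an invented bin width and made-up spectra, and omits the modification-tolerant peak shifting that GNPS also applies.

```python
import numpy as np

# Cosine similarity between two (m/z, intensity) peak lists after binning.
def cosine_score(peaks_a, peaks_b, bin_width=0.5):
    edges = np.arange(0, 1000 + bin_width, bin_width)
    va, _ = np.histogram([m for m, i in peaks_a], bins=edges,
                         weights=[i for m, i in peaks_a])
    vb, _ = np.histogram([m for m, i in peaks_b], bins=edges,
                         weights=[i for m, i in peaks_b])
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

# Two analogues sharing most fragments score high; an unrelated spectrum
# scores low -- the basis for linking analogues into network clusters.
spec1 = [(101.1, 50.0), (204.1, 100.0), (375.2, 80.0)]
spec2 = [(101.1, 45.0), (204.1, 90.0), (375.2, 70.0), (430.3, 20.0)]
spec3 = [(88.0, 60.0), (150.4, 100.0), (512.7, 40.0)]
```

Edges above a score threshold connect spectra into families, which is why known sansanmycins can pull previously unseen analogues into the same cluster.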
Completely automated open-path FT-IR spectrometry.
Griffiths, Peter R; Shao, Limin; Leytem, April B
2009-01-01
Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state-of-the-art of the hardware and software that constitutes a contemporary OP/FT-IR spectrometer. We then describe advances that have been made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentration of atmospheric species has been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention.
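The calibration idea above, adding the analyte reference spectrum scaled by random coefficients to measured clean-air backgrounds and then regressing with PLS, can be sketched with toy data; a hand-rolled NIPALS PLS1 stands in for whatever PLS implementation the authors used, and every spectrum and size below is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: a reference analyte band plus drifting "clean air" baselines.
p, n = 120, 60
analyte = np.exp(-0.5 * ((np.arange(p) - 40) / 3.0) ** 2)
backgrounds = (rng.normal(0, 0.05, (n, p))
               + np.outer(rng.uniform(0.5, 1.5, n), np.linspace(0, 1, p)))
conc = rng.uniform(0, 2, n)                      # random calibration coefficients
X = backgrounds + np.outer(conc, analyte)        # synthetic calibration spectra

def pls1_fit(X, y, ncomp=3):
    # Standard NIPALS PLS1: extract components, deflate, build B.
    Xr, yr = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr; w /= np.linalg.norm(w)
        t = Xr @ w; tt = t @ t
        p_, q_ = Xr.T @ t / tt, yr @ t / tt
        Xr, yr = Xr - np.outer(t, p_), yr - q_ * t
        W.append(w); P.append(p_); q.append(q_)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)          # regression vector
    return B, X.mean(0), y.mean()

B, xm, ym = pls1_fit(X, conc)
pred = (X - xm) @ B + ym
rmse = np.sqrt(np.mean((pred - conc) ** 2))
```

Because the backgrounds span real baseline variability, the fitted model absorbs water-vapor-like structure into its latent variables instead of requiring a matched single-beam background.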
Automatic classification of spectral units in the Aristarchus plateau
NASA Astrophysics Data System (ADS)
Erard, S.; Le Mouelic, S.; Langevin, Y.
1999-09-01
A reduction scheme has recently been proposed for the NIR images of Clementine (Le Mouelic et al., JGR 1999). This reduction has been used to build an integrated UVvis-NIR image cube of the Aristarchus region, from which compositional and maturity variations can be studied (Pinet et al., LPSC 1999). We will present an analysis of this image cube, providing a classification in spectral types and spectral units. The image cube is processed with Gmode analysis using three different data sets. Normalized spectra provide a classification based mainly on spectral slope variations (i.e. maturity and volcanic glasses). This analysis discriminates between craters plus ejecta, mare basalts, and DMD. Olivine-rich areas and the Aristarchus central peak are also recognized. Continuum-removed spectra provide a classification more related to compositional variations, which correctly identifies olivine- and pyroxene-rich areas (in Aristarchus, Krieger, Schiaparelli...). A third analysis uses spectral parameters related to maturity and Fe composition (reflectance, 1 μm band depth, and spectral slope) rather than intensities. It provides the most spatially consistent picture, but fails in detecting Vallis Schroeteri and DMDs. A supplementary unit, younger and rich in pyroxene, is found on the Aristarchus south rim. In conclusion, Gmode analysis can discriminate between different spectral types already identified with more classic methods (PCA, linear mixing...). No previous assumption is made on the data structure, such as the number and nature of endmembers, or a linear relationship between input variables. The variability of the spectral types is intrinsically accounted for, so that the level of analysis is always restricted to meaningful limits. A complete classification should integrate several analyses based on different sets of parameters. Gmode is therefore a powerful, lightweight tool to perform first-look analysis of spectral imaging data.
This research has been partly funded by the French Programme National de Planetologie.
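Continuum removal, the preprocessing behind the second classification above, can be sketched simply; here the continuum is a straight line between the spectrum's endpoints, a simplification of the convex-hull fit usually used, and the spectrum itself is synthetic.

```python
import numpy as np

# Continuum removal: divide the reflectance by an estimated continuum so
# that absorption-band depth is decoupled from overall slope (maturity).
def remove_continuum(wave, refl):
    continuum = np.interp(wave, [wave[0], wave[-1]], [refl[0], refl[-1]])
    return refl / continuum

wave = np.linspace(0.4, 2.5, 200)                               # microns
slope = 0.2 + 0.3 * (wave - 0.4)                                # red slope
band = 1 - 0.2 * np.exp(-0.5 * ((wave - 1.0) / 0.05) ** 2)      # 1 um band
spectrum = slope * band
cr = remove_continuum(wave, spectrum)
band_depth = 1 - cr[np.argmin(np.abs(wave - 1.0))]
```

After division, the 20% deep band is recovered regardless of the continuum slope, which is why continuum-removed spectra classify by composition rather than maturity.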
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn
2014-09-29
In nuclear magnetic resonance (NMR) technique, it is of great necessity and importance to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from series of small voxels, which characterize high resolution due to small sizes, are recorded simultaneously. Then, an inhomogeneity correction algorithm is developed based on pattern recognition to correct the influence brought by field inhomogeneity automatically, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single radiofrequency pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative of obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where performances of conventional techniques are usually degenerated by field inhomogeneity.
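The underlying idea, each voxel yields a sharp spectrum shifted by an unknown field offset, so recognizing the common pattern, aligning, and combining recovers resolution, can be illustrated as below; cross-correlation stands in for the paper's pattern-recognition step, and all spectra and offsets are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# A sharp two-line "true" spectrum; each voxel sees it shifted by an
# unknown field-inhomogeneity offset plus noise (toy model).
n = 512
freq = np.arange(n)
true = (np.exp(-0.5 * ((freq - 200) / 2.0) ** 2)
        + 0.5 * np.exp(-0.5 * ((freq - 300) / 2.0) ** 2))

shifts = rng.integers(-20, 21, 16)
voxels = [np.roll(true, int(s)) + rng.normal(0, 0.02, n) for s in shifts]

# Align every voxel to the first one via cross-correlation, then average.
ref = voxels[0]
aligned = []
for v in voxels:
    xc = np.correlate(v, ref, mode="full")
    shift = int(np.argmax(xc)) - (n - 1)     # lag of best match vs reference
    aligned.append(np.roll(v, -shift))
summed = np.mean(aligned, axis=0)
```

Averaging without alignment would smear the lines over the +/-20-point offset range; after alignment the narrow linewidth of the individual voxels is preserved in the sum.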
NASA Astrophysics Data System (ADS)
Hong, Pengyu; Sun, Hui; Sha, Long; Pu, Yi; Khatri, Kshitij; Yu, Xiang; Tang, Yang; Lin, Cheng
2017-08-01
A major challenge in glycomics is the characterization of complex glycan structures that are essential for understanding their diverse roles in many biological processes. We present a novel efficient computational approach, named GlycoDeNovo, for accurate elucidation of the glycan topologies from their tandem mass spectra. Given a spectrum, GlycoDeNovo first builds an interpretation-graph specifying how to interpret each peak using preceding interpreted peaks. It then reconstructs the topologies of peaks that contribute to interpreting the precursor ion. We theoretically prove that GlycoDeNovo is highly efficient. A major innovative feature added to GlycoDeNovo is a data-driven IonClassifier which can be used to effectively rank candidate topologies. IonClassifier is automatically learned from experimental spectra of known glycans to distinguish B- and C-type ions from all other ion types. Our results showed that GlycoDeNovo is robust and accurate for topology reconstruction of glycans from their tandem mass spectra.
An automatic detection software for differential reflection spectroscopy
NASA Astrophysics Data System (ADS)
Yuksel, Seniha Esen; Dubroca, Thierry; Hummel, Rolf E.; Gader, Paul D.
2012-06-01
Recent terrorist attacks have created the need for a large-scale explosive detector. Our group has developed differential reflection spectroscopy, which can detect explosive residue on surfaces such as parcels, cargo and luggage. In short, broad-band ultraviolet and visible light is shone onto a material (such as a parcel) moving on a conveyor belt. Upon reflection off the surface, the light intensity is recorded with a spectrograph (spectrometer in combination with a CCD camera). This reflected light intensity is then subtracted and normalized with the next data point collected, resulting in differential reflection spectra in the 200-500 nm range. Explosives show spectral fingerprints at specific wavelengths; for example, the spectrum of 2,4,6-trinitrotoluene (TNT) shows an absorption edge at 420 nm. Additionally, we have developed automated software which detects the characteristic features of explosives. One of the biggest challenges for the algorithm is to reach a practical limit of detection. In this study, we introduce our automatic detection software, which is a combination of principal component analysis and support vector machines. Finally we present the sensitivity and selectivity response of our algorithm as a function of the amount of explosive detected on a given surface.
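A PCA + SVM detector of the kind described can be sketched on simulated differential reflection spectra; this assumes scikit-learn is available, models the 420 nm TNT edge as a sigmoid step with invented amplitudes, and is not the authors' trained system.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Simulated 200-500 nm differential reflection spectra; "TNT" samples
# carry an absorption edge at 420 nm (all amplitudes are assumptions).
wl = np.linspace(200, 500, 301)
edge = 1.0 / (1.0 + np.exp(-(wl - 420) / 5.0))

def simulate(n, with_tnt):
    base = rng.normal(0, 0.02, (n, wl.size))
    return base - 0.1 * edge if with_tnt else base

X = np.vstack([simulate(40, True), simulate(40, False)])
y = np.array([1] * 40 + [0] * 40)

# PCA compresses the spectra; a linear SVM separates the two classes.
clf = make_pipeline(PCA(n_components=5), SVC(kernel="linear"))
clf.fit(X, y)
acc = clf.score(X, y)
```

Projecting onto a few principal components before the SVM both denoises the spectra and keeps the classifier fast enough for conveyor-belt throughput.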
UniNovo: a universal tool for de novo peptide sequencing.
Jeong, Kyowon; Kim, Sangtae; Pevzner, Pavel A
2013-08-15
Mass spectrometry (MS) instruments and experimental protocols are rapidly advancing, but de novo peptide sequencing algorithms to analyze tandem mass (MS/MS) spectra are lagging behind. Although existing de novo sequencing tools perform well on certain types of spectra [e.g. Collision Induced Dissociation (CID) spectra of tryptic peptides], their performance often deteriorates on other types of spectra, such as Electron Transfer Dissociation (ETD), Higher-energy Collisional Dissociation (HCD) spectra or spectra of non-tryptic digests. Thus, rather than developing a new algorithm for each type of spectra, we develop a universal de novo sequencing algorithm called UniNovo that works well for all types of spectra or even for spectral pairs (e.g. CID/ETD spectral pairs). UniNovo uses an improved scoring function that captures the dependencies between different ion types, where such dependencies are learned automatically using a modified offset frequency function. The performance of UniNovo is compared with PepNovo+, PEAKS and pNovo using various types of spectra. The results show that the performance of UniNovo is superior to the other tools for ETD spectra and superior or comparable to the others for CID and HCD spectra. UniNovo also estimates the probability that each reported reconstruction is correct, using simple statistics that are readily obtained from a small training dataset. We demonstrate that the estimation is accurate for all tested types of spectra (including CID, HCD, ETD, CID/ETD and HCD/ETD spectra of trypsin, LysC or AspN digested peptides). UniNovo is implemented in JAVA and tested on Windows, Ubuntu and OS X machines. UniNovo is available at http://proteomics.ucsd.edu/Software/UniNovo.html along with the manual.
SERS as a tool for in vitro toxicology.
Fisher, Kate M; McLeish, Jennifer A; Jamieson, Lauren E; Jiang, Jing; Hopgood, James R; McLaughlin, Stephen; Donaldson, Ken; Campbell, Colin J
2016-06-23
Measuring markers of stress such as pH and redox potential are important when studying toxicology in in vitro models because they are markers of oxidative stress, apoptosis and viability. While surface enhanced Raman spectroscopy is ideally suited to the measurement of redox potential and pH in live cells, the time-intensive nature and perceived difficulty in signal analysis and interpretation can be a barrier to its broad uptake by the biological community. In this paper we detail the development of signal processing and analysis algorithms that allow SERS spectra to be automatically processed so that the output of the processing is a pH or redox potential value. By automating signal processing we were able to carry out a comparative evaluation of the toxicology of silver and zinc oxide nanoparticles and correlate our findings with qPCR analysis. The combination of these two analytical techniques sheds light on the differences in toxicology between these two materials from the perspective of oxidative stress.
Deep learning classification in asteroseismology
NASA Astrophysics Data System (ADS)
Hon, Marc; Stello, Dennis; Yu, Jie
2017-08-01
In the power spectra of oscillating red giants, there are visually distinct features defining stars ascending the red giant branch from those that have commenced helium core burning. We train a 1D convolutional neural network by supervised learning to automatically learn these visual features from images of folded oscillation spectra. By training and testing on Kepler red giants, we achieve an accuracy of up to 99 per cent in separating helium-burning red giants from those ascending the red giant branch. The convolutional neural network additionally shows capability in accurately predicting the evolutionary states of 5379 previously unclassified Kepler red giants, by which we now have greatly increased the number of classified stars.
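The actual network requires a deep-learning framework; this numpy sketch only illustrates the convolutional principle the paper exploits: a filter slides along the folded spectrum and fires on a local pattern, so classification can key on visual features such as mixed-mode spacing. The comb spectra and spacings below are illustrative, not asteroseismic values.

```python
import numpy as np

x = np.linspace(0, 1, 400)

def folded_spectrum(spacing):
    # Toy stand-in for a folded oscillation spectrum: a comb of peaks
    # whose spacing differs between the two evolutionary states.
    s = np.zeros_like(x)
    for c in np.arange(0.1, 1.0, spacing):
        s += np.exp(-0.5 * ((x - c) / 0.01) ** 2)
    return s

narrow, wide = folded_spectrum(0.08), folded_spectrum(0.16)

# A zero-mean "filter" matched to the narrow spacing: its convolution
# response is larger on the spectrum whose comb spacing it matches.
kernel = folded_spectrum(0.08)[:80]
kernel = kernel - kernel.mean()
resp_narrow = np.max(np.convolve(narrow, kernel, mode="valid"))
resp_wide = np.max(np.convolve(wide, kernel, mode="valid"))
```

In the real network many such filters are learned from labelled Kepler spectra rather than hand-set, and their pooled responses feed a classifier.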
Arduino Due based tool to facilitate in vivo two-photon excitation microscopy.
Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele
2016-04-01
Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high quality spectra. This tool is portable, fast, affordable and precise. It allowed studying the impact of scattering and of blood absorption on two-photon excitation light. In this way, we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo.
NASA Astrophysics Data System (ADS)
Zhang, Weihua; Hoffmann, Emmy; Ungar, Kurt; Dolinar, George; Miley, Harry; Mekarski, Pawel; Schrom, Brian; Hoffman, Ian; Lawrie, Ryan; Loosz, Tom
2013-04-01
The nuclear industry emissions of the four CTBT (Comprehensive Nuclear-Test-Ban Treaty) relevant radioxenon isotopes are unavoidably detected by the IMS along with possible treaty violations. Another civil source of radioxenon emissions which contributes to the global background is radiopharmaceutical production companies. To better understand the source terms of these background emissions, a joint project between HC, ANSTO, PNNL and CRL was formed to install real-time detection systems to support 135Xe, 133Xe, 131mXe and 133mXe measurements at the ANSTO and CRL 99Mo production facility stacks as well as the CANDU (CANada Deuterium Uranium) primary coolant monitoring system at CRL. At each site, high resolution gamma spectra were collected every 15 minutes using a HPGe detector to continuously monitor a bypass feed from the stack or CANDU primary coolant system as it passed through a sampling cell. HC also conducted atmospheric monitoring for radioxenon at approximately 200 km distant from CRL. A program was written to transfer each spectrum into a text file format suitable for the automatic gamma-spectra analysis platform and then email the file to a server. Once the email was received by the server, it was automatically analysed with the gamma-spectrum software UniSampo/Shaman to perform radionuclide identification and activity calculation for a large number of gamma-spectra in a short period of time (less than 10 seconds per spectrum). The results of nuclide activity together with other spectrum parameters were saved into the Linssi database. This database contains a large amount of radionuclide information which is a valuable resource for the analysis of radionuclide distribution within the noble gas fission product emissions. The results could be useful to identify the specific mechanisms of the activity release. The isotopic signatures of the various radioxenon species can be determined as a function of release time. 
Comparison of 133mXe and 133Xe activity ratios showed distinct differences between the closed CANDU primary coolant system and radiopharmaceutical production releases. According to the concept proposed by Kalinowski and Pistner (2006), the relationship between different isotopic activity ratios based on three or four radioxenon isotopes was plotted in a log-log diagram for source characterisation (civil vs. nuclear test). The multiple isotopic activity ratios were distributed in three distinct areas: HC atmospheric monitoring ratios extended to far left; the CANDU primary coolant system ratios lay in the middle; and 99Mo stack monitoring ratios for ANSTO and CRL were located on the right. The closed CANDU primary coolant has the lowest logarithmic mean ratio that represents the nuclear power reactor operation. The HC atmospheric monitoring exhibited a broad range of ratios spreading over several orders of magnitude. In contrast, the ANSTO and CRL stack emissions showed the smallest range of ratios but the results indicate at least two processes involved in the 99Mo productions. Overall, most measurements were found to be shifted towards the reactor domain. The hypothesis is that this is due to an accumulation of the isotope 131mXe in the stack or atmospheric background as it has the longest half-life and extra 131mXe emissions from the decay of 131I. The contribution of older 131mXe to a fresh release shifts the ratio of 133mXe/131mXe to the left. It was also very interesting to note that there were some situations where isotopic ratios from 99Mo production emissions fell within the nuclear test domain. This is due to operational variability, such as shorter target irradiation times. Martin B. Kalinowski and Christoph Pistner, (2006), Isotopic signature of atmospheric xenon released from light water reactors, Journal of Environmental Radioactivity, 88, 215-235.
CO2 and SO2 IR Line Lists for Venus/Mars and Exo-Planet Atmosphere Studies
NASA Astrophysics Data System (ADS)
Huang, X.; Schwenke, D.; Sergey, T. A.; Lee, T. J.
2012-12-01
Atmospheric studies of both solar system planets and extra-solar planets need accurate spectral data input and analysis from planetary missions and astronomical observations. Accurate infrared (IR) line lists of critical species are necessary to determine the physical conditions and compositions of atmospheres. Here we demonstrate an example of how theoretical chemistry can help in this regard. By combining state-of-the-art ab initio theory, a quantum exact rovibrational CI approach, and selected reliable high-resolution experimental data, we have successfully generated the most complete and reliable IR line lists for carbon dioxide and sulfur dioxide (and their isotopologues) with accuracies of 0.01-0.02 cm-1, or ~10 MHz for microwave spectra. Agreement for observed intensities is around 90%. Our approach not only automatically fills in all the missing bands (especially the weaker, difficult bands) below the highest experimental energies, but also safely extrapolates beyond those with still-reliable predictions. The reliability and accuracy of our IR line lists have been verified by the most recent experiments. The CO2 line list actually extends to 30,000 cm-1 and J>180. It works for early planets with temperatures as high as 1000-2000 K. The SO2 line list covers 0-14000 cm-1 and J>100. These line lists are expected to facilitate the atmospheric analysis and modeling of both planets (and moons) within our solar system and beyond to extra-solar planets. [Figure captions: 32SO2 IR spectra comparison: (top) Ames-296K line list vs. recent experiment; (bottom) Ames-296K fills in the gaps of HITRAN2008 data. 12C16O2 IR simulation at different temperatures using the latest Ames-296K IR line list (unpublished work by R. S. Freedman, SETI/NASA Ames SST).]
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
The use of Matlab for colour fuzzy representation of multichannel EEG short time spectra.
Bigan, C; Strungaru, R
1998-01-01
In recent years, much EEG research effort has been directed at intelligent methods for automatic analysis of data from multichannel EEG recordings. However, the applications reported have focused on single, specific tasks, such as detecting one particular "event" in the EEG signal: spikes, sleep spindles, epileptic seizures, K complexes, alpha or other rhythms, or artefacts. The aim of this paper is to present a system able to represent the dynamic changes in the frequency components of each EEG channel. This representation uses colour as a powerful means of showing the single dominant frequency range extracted from the shortest epoch of signal that can be processed with the conventional short-time fast Fourier transform (STFFT) method.
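The core idea, mapping each short epoch to its dominant frequency band (which a display could then render as a colour), can be sketched as follows. This is a minimal illustration in Python rather than the authors' Matlab implementation; the band definitions and sampling rate are standard assumptions.

```python
import numpy as np

fs = 128.0                          # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)    # synthetic 10 Hz alpha-band oscillation

def dominant_band(epoch, fs, bands):
    """Short-time FFT of one epoch; return the band holding the spectral peak."""
    spec = np.abs(np.fft.rfft(epoch * np.hanning(len(epoch)))) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    f_peak = freqs[1:][np.argmax(spec[1:])]     # skip the DC bin
    for name, (lo, hi) in bands.items():
        if lo <= f_peak < hi:
            return name
    return "other"

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
# one-second epochs, each mapped to a band label (i.e. a display colour)
labels = [dominant_band(eeg[i:i + int(fs)], fs, bands)
          for i in range(0, len(eeg), int(fs))]
```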
An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
The AMBRE Project: r-process element abundances in the Milky Way thin and thick discs
NASA Astrophysics Data System (ADS)
Guiglion, Guillaume; de Laverny, Patrick; Recio-Blanco, Alejandra; Worley, C. Clare
2018-04-01
The chemical evolution of r-process elements in the Milky Way disc is still a matter of debate. We took advantage of high-resolution HARPS spectra from the ESO archive to derive precise chemical abundances of three r-process elements (Eu, Dy and Gd) for a sample of 4355 FGK Milky Way stars. The chemical analysis was performed with the automatic optimization pipeline GAUGUIN. Based on the [α/Fe] ratio, we chemically characterized the thin and thick discs, and present results for these three r-process element abundances in both discs. We find an unexpected gadolinium and dysprosium enrichment in thick-disc stars compared to europium, while the three elements track each other well in the thin disc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rees, Brian G.
These are slides from a presentation. The identiFINDER provides information on radiation levels. It can automatically identify isotopes in its library. It can save spectra for transfer to a computer, and has a 4-8 hour battery life. The following is covered: an overview, operating modes, getting started, finder mode, search, identification mode, dose & rate, warning & alarm, options (ultra LGH), options (identifinder2), and general procedure.
NASA Astrophysics Data System (ADS)
Sakkas, Georgios; Sakellariou, Nikolaos
2018-05-01
Strong-motion recordings are key to many earthquake engineering applications and are also fundamental to seismic design. The present study focuses on the automated correction of accelerograms, both analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, peak ground acceleration, peak ground velocity and peak ground displacement values are calculated, along with the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. The option to apply a signal-to-noise criterion instead, and to choose causal or acausal filtering, is also provided. The algorithm has been tested on significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) with analog and digital accelerograms.
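The cut-off selection criterion described above, picking the frequency of the minimum spectral amplitude inside a predefined band, can be sketched as follows. This is an illustrative Python version, not the authors' Matlab code; the band limits and synthetic record are assumptions.

```python
import numpy as np

def select_cutoff(signal, fs, band=(0.05, 0.5)):
    """Pick the low-cut filter frequency as the location of the minimum of the
    Fourier amplitude spectrum inside a predefined band, rather than from a
    signal-to-noise criterion."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmin(spec[mask])]

fs = 200.0                                    # assumed digitisation rate (Hz)
t = np.arange(0, 20, 1 / fs)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.01 * np.sin(2 * np.pi * 0.3 * t)
fc = select_cutoff(accel, fs)                 # cut-off inside the predefined band
```

The returned frequency would then be used as the corner of a high-pass (causal or acausal) filter in the correction chain.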
[Using neural networks based template matching method to obtain redshifts of normal galaxies].
Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng
2005-06-01
Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). To determine NG redshifts, an effective automatic method is proposed in this paper, consisting of three main steps: (1) From the normal galaxy template, two sets of samples are simulated, one with redshifts of 0.0-0.3 and the other of 0.3-0.5; PCA is then used to extract the principal components, and the training samples are projected onto the principal component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a probabilistic neural network, yielding a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the likely redshift range, and template matching is then invoked to locate the redshift value within the estimated range. Compared with the traditional template-matching technique over an unconstrained range, the proposed method not only halves the computational load but also increases the estimation accuracy. As a result, it is particularly useful for automatic processing of spectra produced by large-scale sky survey projects.
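The final template-matching step, a correlation grid search over the redshift range returned by the classifier, can be sketched as follows. The Gaussian emission-line template and redshift grid below are illustrative assumptions, not the authors' actual template set.

```python
import numpy as np

lam = np.linspace(3800.0, 9000.0, 2000)       # observed wavelength grid (Angstrom)

def template_flux(rest_lam):
    """Hypothetical rest-frame NG template: one emission feature on a flat continuum."""
    return 1.0 + 2.0 * np.exp(-0.5 * ((rest_lam - 5007.0) / 8.0) ** 2)

z_true = 0.12
observed = template_flux(lam / (1.0 + z_true))        # redshifted "NG spectrum"

# Grid search only over the range the Bayes classifier returned (here 0.0-0.3):
z_grid = np.arange(0.0, 0.3001, 0.001)
scores = [np.corrcoef(observed, template_flux(lam / (1.0 + z)))[0, 1]
          for z in z_grid]
z_est = z_grid[int(np.argmax(scores))]
```

Restricting the grid to the classifier's range is what roughly halves the computation relative to an unconstrained 0.0-0.5 search.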
CENSUS OF BLUE STARS IN SDSS DR8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scibelli, Samantha; Newberg, Heidi Jo; Carlin, Jeffrey L.
2014-12-02
We present a census of the 12,060 spectra of blue objects ((g - r)_0 < -0.25) in the Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8). As part of the data release, all of the spectra were cross-correlated with 48 template spectra of stars, galaxies, and QSOs to determine the best match. We compared the blue spectra by eye to the templates assigned in SDSS DR8. 10,856 of the objects matched their assigned template, 170 could not be classified due to low signal-to-noise ratio, and 1034 were given new classifications. We identify 7458 DA white dwarfs, 1145 DB white dwarfs, 273 rarer white dwarfs (including carbon, DZ, DQ, and magnetic), 294 subdwarf O stars, 648 subdwarf B stars, 679 blue horizontal branch stars, 1026 blue stragglers, 13 cataclysmic variables, 129 white dwarf-M dwarf binaries, 36 objects with spectra similar to DO white dwarfs, 179 quasi-stellar objects (QSOs), and 10 galaxies. We provide two tables of these objects, sample spectra that match the templates, figures showing all of the spectra that were grouped by eye, and diagnostic plots that show the positions, colors, apparent magnitudes, proper motions, etc., for each classification. Future surveys will be able to use templates similar to stars in each of the classes we identify to automatically classify blue stars, including rare types.
Image-based spectroscopy for environmental monitoring
NASA Astrophysics Data System (ADS)
Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind
2014-03-01
An image-processing algorithm for use with a nano-featured spectrometer configuration for chemical agent detection is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the size of the spectrometer to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. Images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for the implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within an image. This transform detects curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
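The point/parameter duality behind the Hough transform can be illustrated with its simplest instance: locating the centre of a circle of known radius, where each edge point votes for all centres consistent with it. This is a toy Python sketch (the generalized Hough transform extends the same voting idea to arbitrary shapes); the synthetic "diffraction ring" is an assumption.

```python
import numpy as np

def hough_circle_center(edge_points, shape, radius, n_angles=180):
    """Each edge point votes for every candidate centre at distance `radius`;
    the accumulator maximum is the most consistent centre."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic ring of edge points: radius 20, centred at (50, 60)
theta = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
ring = [(50 + 20 * np.sin(a), 60 + 20 * np.cos(a)) for a in theta]
center = hough_circle_center(ring, (100, 100), radius=20)
```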
Hattori, Yusuke; Otsuka, Makoto
2017-05-30
In the pharmaceutical industry, the implementation of continuous manufacturing has been widely promoted in lieu of the traditional batch manufacturing approach. More specifically, in recent years, the innovative concept of feed-forward control has been introduced in relation to process analytical technology. In the present study, we successfully developed a feed-forward control model for the tablet compression process by integrating data obtained from near-infrared (NIR) spectra and the physical properties of granules. In batch manufacturing, granules with the desired properties are routinely prepared through manual control of process parameters; continuous manufacturing, by contrast, demands automatic determination of these parameters. Here, we proposed a control model based on the partial least squares regression (PLSR) method. The most significant feature of this method is the use of a dataset integrating both the NIR spectra and the physical properties of the granules. Using our model, we determined that product properties such as tablet weight and thickness need to be included as independent variables in the PLSR analysis in order to predict unknown process parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
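The PLSR step on an integrated predictor block (spectra stacked with physical properties) can be sketched with a minimal NIPALS PLS1 in NumPy. This is a generic illustration under assumed mock data, not the authors' model or data.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns coefficients B so that y ~ (X - xmean) @ B + ymean."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)          # weight vector (max covariance direction)
        t = Xc @ w                         # score
        tt = t @ t
        p = Xc.T @ t / tt                  # X loading
        q = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)           # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q, X.mean(0), y.mean()

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 50))        # mock NIR spectra (40 batches x 50 channels)
props = rng.normal(size=(40, 3))           # mock granule physical properties
X = np.hstack([spectra, props])            # the integrated dataset
y = X @ rng.normal(size=53) * 0.1 + 5.0    # synthetic process parameter to predict
B, xm, ym = pls1_fit(X, y, n_comp=10)
pred = (X - xm) @ B + ym
```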
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array (FPA) detector with a Michelson step-scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA consists of 64x64 pixels, each 61 μm², and has a spectral range of 2-10.5 μm. Each imaging data set was collected at 16 cm-1 resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating the different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, that demonstrate the successful partitioning of distinct tissue-type domains. Additionally, analysis of differences between the centroid spectra corresponding to different tissue types provides insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue-type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
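Fuzzy C-Means, the clustering step described above, alternates between recomputing fuzzy centroids and updating soft memberships. A plain NumPy sketch on mock two-dimensional "pixel spectra" (real use would cluster full per-pixel spectra; the data here are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy C-Means: returns centroids and the membership matrix U
    (n_samples x c, rows summing to 1). m > 1 is the fuzziness exponent."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U = U / U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))      # u_ik ~ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centroids, U

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.0, 0.5, (30, 2)),     # mock tissue type A
                    rng.normal(10.0, 0.5, (30, 2))])   # mock tissue type B
centroids, U = fuzzy_c_means(pixels)
```

The rows of `centroids` play the role of the centroid spectra discussed above; classifying a new data set against fixed centroids amounts to evaluating only the membership update.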
How to Compute Electron Ionization Mass Spectra from First Principles.
Bauer, Christoph Alexander; Grimme, Stefan
2016-06-02
The prediction of electron ionization (EI) mass spectra (MS) from first principles has been a major challenge for quantum chemistry (QC). The unimolecular reaction space grows rapidly with increasing molecular size. On the one hand, statistical models like Eyring's quasi-equilibrium theory and Rice-Ramsperger-Kassel-Marcus theory have provided valuable insight, and some predictions and quantitative results can be obtained from such calculations. On the other hand, molecular dynamics-based methods are able to explore automatically the energetically available regions of phase space and thus yield reaction paths in an unbiased way. We describe in this feature article the status of both methodologies in relation to mass spectrometry for small to medium sized molecules. We further present results obtained with the QCEIMS program developed in our laboratory. Our method, which incorporates stochastic and dynamic elements, has been a significant step toward the reliable routine calculation of EI mass spectra.
LAMOST OBSERVATIONS IN THE KEPLER FIELD: SPECTRAL CLASSIFICATION WITH THE MKCLASS CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R. O.; Corbally, C. J.; Cat, P. De
2016-01-15
The LAMOST-Kepler project was designed to obtain high-quality, low-resolution spectra of many of the stars in the Kepler field with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). To date, 101,086 spectra of 80,447 objects over the entire Kepler field have been acquired. Physical parameters, radial velocities, and rotational velocities of these stars will be reported in other papers. In this paper we present MK spectral classifications for these spectra determined with the automatic classification code MKCLASS. We discuss the quality and reliability of the spectral types and present histograms showing the frequency of the spectral types in the main table, organized according to luminosity class. Finally, as examples of the use of this spectral database, we compute the proportion of A-type stars that are Am stars, and identify 32 new barium dwarf candidates.
NASA Astrophysics Data System (ADS)
Zaitsev, F. S.; Gorelenkov, N. N.; Petrov, M. P.; Afanasyev, V. I.; Mironov, M. I.
2018-03-01
ITER plasma with parameters close to those of the inductive scenario is considered. The distribution functions of fast deuterium (D) and tritium (T) ions are calculated, taking into account elastic nuclear collisions with alpha particles (4He), using the code FPP-3D. The D and T energy spectra detected by the neutral-particle analyzer (NPA) are determined, and the effect of plasma mixing on these spectra during sawtooth oscillations is studied. It is shown that the NPA makes it possible to detect sawtooth plasma oscillations in ITER and to determine the percentage composition of the D-T mixture both in the presence of instabilities and without them. A conclusion is drawn on the prospects of using NPA data in automatic controllers for the isotopic composition of the thermonuclear fuel and for the regulation of plasma oscillations in ITER.
NASA Astrophysics Data System (ADS)
Idehara, H.; Carbon, D. F.
2004-12-01
We present two new, publicly available tools to support the examination and interpretation of spectra. SCAMP is a specialized graphical user interface for MATLAB. It allows researchers to rapidly intercompare sets of observational, theoretical, and/or laboratory spectra. Users have extensive control over the colors and placement of individual spectra, and over spectrum normalization from one spectral region to another. Spectra can be interactively assigned to user-defined groups and the groupings recalled at a later time. The user can measure/record positions and intensities of spectral features, interactively spline-fit spectra, and normalize spectra by fitted splines. User-defined wavelengths can be automatically highlighted in SCAMP plots. The user can save/print annotated graphical output suitable for a scientific notebook depicting the work at any point. The ASP is a WWW portal that provides interactive access to two spectrum data sets: a library of synthetic stellar spectra and a library of laboratory PAH spectra. The synthetic stellar spectra in the ASP are appropriate to the giant branch with an assortment of compositions. Each spectrum spans the full range from 2 to 600 microns at a variety of resolutions. The ASP is designed to allow users to quickly identify individual features at any resolution that arise from any of the included isotopic species. The user may also retrieve the depth of formation of individual features at any resolution. PAH spectra accessible through the ASP are drawn from the extensive library of spectra measured by the NASA Ames Astrochemistry Laboratory. The user may interactively choose any subset of PAHs in the data set, combine them with user-defined weights and temperatures, and view/download the resultant spectrum at any user-defined resolution. This work was funded by the NASA Advanced Supercomputing Division, NASA Ames Research Center.
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either expert manual peak fitting or extra pre-processing steps such as peak alignment and binning. Peak fitting is very time consuming and subject to human error; conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on the characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from the joint posterior distribution of the model parameters, obtaining concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. Availability: http://www1.imperial.ac.uk/medicine/people/t.ebbels/. Contact: t.ebbels@imperial.ac.uk.
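The MCMC idea behind such deconvolution can be illustrated on the simplest possible case: a random-walk Metropolis sampler for the position of a single Lorentzian peak. This is a one-parameter toy in Python, far simpler than BATMAN's full multi-metabolite model in R; the linewidth, noise level and prior range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
ppm = np.linspace(0.9, 1.1, 200)
true_pos = 1.02                                  # hypothetical chemical shift

def lorentzian(x, pos, gamma=0.005):
    return gamma**2 / ((x - pos)**2 + gamma**2)

spectrum = lorentzian(ppm, true_pos) + rng.normal(0.0, 0.02, ppm.size)

def log_post(pos, sigma=0.02):
    """Gaussian likelihood of the spectrum given a peak at `pos`, flat prior on [0.9, 1.1]."""
    if not 0.9 <= pos <= 1.1:
        return -np.inf
    return -0.5 * np.sum((spectrum - lorentzian(ppm, pos))**2) / sigma**2

# Random-walk Metropolis over the peak position
pos, lp, samples = 1.0, log_post(1.0), []
for _ in range(4000):
    prop = pos + rng.normal(0.0, 0.002)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject
        pos, lp = prop, lp_prop
    samples.append(pos)
estimate = float(np.mean(samples[1000:]))        # posterior mean after burn-in
```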
Detection of spectroscopic binaries in the Gaia-ESO Survey
NASA Astrophysics Data System (ADS)
Van der Swaelmen, M.; Merle, T.; Van Eck, S.; Jorissen, A.
2017-12-01
The Gaia-ESO Survey (GES) is a ground-based spectroscopic survey complementing the Gaia mission, designed to obtain high-accuracy radial velocities and chemical abundances for 10^5 stars. Thanks to the numerous spectra collected by the GES, the detection of spectroscopic multiple-system candidates (SBn, n ≥ 2) is one of the science cases that can be tackled. We developed at the IAA (Institut d'Astronomie et d'Astrophysique) a novel automatic method to detect multiple components in the cross-correlation function (CCF) of spectra and applied it to the CCFs provided by the GES. Since the bulk of the Milky Way field targets have been observed in both the HR10 and HR21 GIRAFFE settings, we are also able to compare the efficiency of our SB detection tool as a function of wavelength range. In particular, we show that HR21 leads to less efficient detection than HR10. The presence of strong and/or saturated lines (the Ca II triplet, the Mg I line, Paschen lines) in the wavelength domain covered by HR21 hampers the computation of the CCFs, which tend to be broadened compared to their HR10 counterparts. The main drawback is that the minimal detectable radial-velocity difference is ~60 km/s for HR21, whereas it is ~25 km/s for HR10. A careful design of the CCF masks (especially masking the Ca II triplet lines) can substantially improve the HR21 detection rate. Since HR21 spectra are quite similar to those produced by the RVS spectrograph of the Gaia mission, the analysis of RVS spectra in the context of spectroscopic binaries can take advantage of the lessons learned from the GES to maximize the detection rate.
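Detecting multiple components in a CCF amounts to finding well-separated local maxima. A naive NumPy sketch (not the IAA method, whose detection criteria are more sophisticated) on a synthetic SB2 cross-correlation function, using the ~25 km/s HR10 separation limit quoted above:

```python
import numpy as np

def detect_components(v, ccf, threshold=0.1, min_sep_kms=25.0):
    """Naive multi-peak detection in a CCF: local maxima above `threshold`,
    kept greedily by height with a minimum velocity separation."""
    peaks = [i for i in range(1, len(ccf) - 1)
             if ccf[i] > ccf[i - 1] and ccf[i] >= ccf[i + 1] and ccf[i] > threshold]
    kept = []
    for i in sorted(peaks, key=lambda i: -ccf[i]):
        if all(abs(v[i] - v[j]) >= min_sep_kms for j in kept):
            kept.append(i)
    return sorted(v[i] for i in kept)

v = np.linspace(-150, 150, 601)                  # radial-velocity grid (km/s)
ccf = (np.exp(-0.5 * ((v + 40) / 12) ** 2) +     # synthetic SB2: primary at -40 km/s
       0.7 * np.exp(-0.5 * ((v - 30) / 12) ** 2))  # secondary at +30 km/s
components = detect_components(v, ccf)
```

Broadened CCFs (as in HR21) blend the two Gaussians, which is precisely what raises the minimal detectable velocity separation.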
Etzion, Y; Linker, R; Cogan, U; Shmulevich, I
2004-09-01
This study investigates the potential use of attenuated total reflectance spectroscopy in the mid-infrared range for determining protein concentration in raw cow milk. The determination of protein concentration is based on the characteristic absorbance of milk proteins, which includes 2 absorbance bands in the 1500 to 1700 cm(-1) range, known as the amide I and amide II bands, and absorbance in the 1060 to 1100 cm(-1) range, which is associated with phosphate groups covalently bound to casein proteins. To minimize the influence of the strong water band (centered around 1640 cm(-1)) that overlaps with the amide I and amide II bands, an optimized automatic procedure for accurate water subtraction was applied. Following water subtraction, the spectra were analyzed by 3 methods, namely simple band integration, partial least squares (PLS) and neural networks. For the neural network models, the spectra were first decomposed by principal component analysis (PCA), and the neural network inputs were the spectra principal components scores. In addition, the concentrations of 2 constituents expected to interact with the protein (i.e., fat and lactose) were also used as inputs. These approaches were tested with 235 spectra of standardized raw milk samples, corresponding to 26 protein concentrations in the 2.47 to 3.90% (weight per volume) range. The simple integration method led to very poor results, whereas PLS resulted in prediction errors of about 0.22% protein. The neural network approach led to prediction errors of 0.20% protein when based on PCA scores only, and 0.08% protein when lactose and fat concentrations were also included in the model. These results indicate the potential usefulness of Fourier transform infrared/attenuated total reflectance spectroscopy for rapid, possibly online, determination of protein concentration in raw milk.
First tests of a multi-wavelength mini-DIAL system for the automatic detection of greenhouse gases
NASA Astrophysics Data System (ADS)
Parracino, S.; Gelfusa, M.; Lungaroni, M.; Murari, A.; Peluso, E.; Ciparisse, J. F.; Malizia, A.; Rossi, R.; Ventura, P.; Gaudio, P.
2017-10-01
Given increasing atmospheric pollution levels in our cities, due to emissions from vehicles and domestic heating, and the growing threat of terrorism, it is necessary to develop instrumentation and know-how for the automatic detection and measurement of dangerous substances, as quickly and from as far away as possible. The multi-wavelength DIAL, an extension of the conventional DIAL technique, is one of the most powerful remote sensing methods for the identification of multiple substances and appears to be a promising solution compared to existing alternatives. In this paper, the first in-field tests of a smart, fully automated multi-wavelength mini-DIAL are presented and discussed in detail. The recently developed system, based on a long-wavelength infrared (IR-C) CO2 laser source, has the potential to give early warning whenever something anomalous is found in the atmosphere, followed by identification and simultaneous concentration measurements of many chemical species, ranging from the most important greenhouse gases (GHG) to other harmful volatile organic compounds (VOCs). Preliminary studies of the fingerprints of the investigated substances were carried out by cross-referencing a database of infrared (IR) spectra, obtained from in-cell measurements, with typical mixing ratios in the examined region extrapolated from the literature. First experiments in the atmosphere were performed in a suburban, moderately busy area of Rome. Moreover, to optimize the automatic identification of the harmful species to be recognized on the basis of in-cell measurements of the absorption coefficient spectra, an advanced multivariate statistical classification method has been developed and tested.
Specdata: Automated Analysis Software for Broadband Spectra
NASA Astrophysics Data System (ADS)
Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.
2017-06-01
With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying the multi-component mixtures that might result, for example, from the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open-source, interactive tool designed to simplify and greatly accelerate spectral analysis and discovery. Our software combines automated and manual components that free users from computation while giving them considerable flexibility to assign, manipulate, interpret and export their analysis. The automated, and key, component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features and assigns them to known molecules within an in-house database (Pickett .cat files, lists of frequencies, etc.) or to those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, control is then handed over to the user, who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed, are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
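The core of such a query system is nearest-frequency matching of measured features against catalog entries within a tolerance. A minimal Python sketch (not SPECdata's implementation; the catalog entries and 0.05 MHz tolerance below are illustrative assumptions):

```python
# Hypothetical in-house catalog of rest frequencies (MHz) -- illustrative values
catalog = {
    "HC3N J=5-4": 45490.314,
    "c-C3H2 1(1,0)-1(0,1)": 18343.145,
    "OCS J=2-1": 24325.921,
}

def assign_lines(features_mhz, catalog, tol_mhz=0.05):
    """Match each measured frequency to the nearest catalog entry within `tol_mhz`;
    unmatched features are returned as candidates for new species."""
    assigned, unassigned = {}, []
    for f in features_mhz:
        name, ref = min(catalog.items(), key=lambda kv: abs(kv[1] - f))
        if abs(ref - f) <= tol_mhz:
            assigned[f] = name
        else:
            unassigned.append(f)
    return assigned, unassigned

assigned, unknown = assign_lines([45490.32, 21000.00], catalog)
```

In the real tool the "catalog" side would be Pickett .cat files or an on-line Splatalogue query, and the unassigned list is exactly the residual spectrum of potential new species handed back to the user.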
Arduino Due based tool to facilitate in vivo two-photon excitation microscopy
Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele
2016-01-01
Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high quality spectra. This tool is portable, fast, affordable and precise. It allowed studying the impact of scattering and of blood absorption on two-photon excitation light. In this way, we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo. PMID:27446677
The VIRUS data reduction pipeline
NASA Astrophysics Data System (ADS)
Goessl, Claus A.; Drory, Niv; Relke, Helena; Gebhardt, Karl; Grupp, Frank; Hill, Gary; Hopp, Ulrich; Köhler, Ralf; MacQueen, Phillip
2006-06-01
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the cosmic microwave background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg² field and a redshift bin of 1.8 < z < 3.7. This will be achieved with VIRUS, a wide-field, low-resolution, 145-IFU spectrograph. The data reduction pipeline will have to extract ~35,000 spectra per exposure (~5 million per night, i.e. 500 million in total), perform astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra fully automatically. We describe our ideas for achieving this goal.
speaq 2.0: A complete workflow for high-throughput 1D NMR spectra processing and quantification.
Beirnaert, Charlie; Meysman, Pieter; Vu, Trung Nghia; Hermans, Nina; Apers, Sandra; Pieters, Luc; Covaci, Adrian; Laukens, Kris
2018-03-01
Nuclear Magnetic Resonance (NMR) spectroscopy is, together with liquid chromatography-mass spectrometry (LC-MS), the most established platform for performing metabolomics. In contrast to LC-MS, however, NMR data are still predominantly processed with commercial software, and their processing remains tedious and dependent on user intervention. As a follow-up to speaq, a previously released workflow for NMR spectral alignment and quantitation, we present speaq 2.0. This completely revised framework for automatically analyzing 1D NMR spectra uses wavelets to efficiently summarize the raw spectra with minimal information loss or user interaction. The tool offers a fast and easy workflow that starts with the common approach of peak picking, followed by grouping, thus avoiding the binning step. This yields a matrix consisting of features, samples and peak values that can be conveniently processed either by using the included multivariate statistical functions or by using many other recently developed methods for NMR data analysis. speaq 2.0 facilitates robust and high-throughput metabolomics based on 1D NMR but is also compatible with other NMR frameworks or complementary LC-MS workflows. The methods are benchmarked using a simulated dataset and two publicly available datasets. speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq).
PMID:29494588
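The peak-pick-then-group step that replaces binning can be outlined as below. Note that speaq 2.0 itself is an R package; this Python sketch (with invented names and an illustrative tolerance) only shows the idea of merging nearby peaks across samples into a features x samples matrix.

```python
# Language-agnostic sketch of the speaq 2.0 pick-then-group idea (the
# real package is R; this outline is illustrative only). Peaks from all
# samples are pooled, sorted by position, and merged into feature groups
# whenever consecutive peak positions lie within a tolerance, yielding a
# features x samples value matrix without any fixed binning.

def group_peaks(sample_peaks, tol=0.02):
    """sample_peaks: {sample: [(ppm, intensity), ...]} -> feature matrix."""
    pooled = sorted(
        (ppm, inten, s) for s, pk in sample_peaks.items() for ppm, inten in pk
    )
    groups, current = [], [pooled[0]]
    for entry in pooled[1:]:
        if entry[0] - current[-1][0] <= tol:
            current.append(entry)     # same feature: positions are close
        else:
            groups.append(current)    # gap too large: start a new feature
            current = [entry]
    groups.append(current)

    samples = sorted(sample_peaks)
    matrix = []
    for g in groups:
        center = sum(p for p, _, _ in g) / len(g)
        row = {s: 0.0 for s in samples}
        for _, inten, s in g:
            row[s] = inten
        matrix.append((round(center, 3), [row[s] for s in samples]))
    return matrix

# Two samples with slightly shifted peaks around 1.33 and 3.70 ppm.
peaks = {"A": [(1.33, 5.0), (3.70, 2.0)], "B": [(1.34, 4.5), (3.71, 2.2)]}
features = group_peaks(peaks)
```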
Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development
NASA Astrophysics Data System (ADS)
Aspinall, Michael D.; Jones, Ashley R.
2018-01-01
Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of the low tolerances associated with photo-multiplier tube technology and because of environmental influences. Differences in detector response can be corrected by adjusting the photo-multiplier tube supply voltage to control its gain, guided by the effect this has on the pulse-height spectrum of a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, development of such algorithms requires access to the hardware, multiple detectors and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse-height data acquired from a single scintillation detector. Such a function can generate substantial and varied pulse-height data for integration-testing algorithms that automatically response-match multiple detectors using pulse-height spectrum analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers and a calibration source. Results show a good match between the real and regenerated pulse-height data. The function has also been used successfully to develop auto-calibration algorithms.
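A minimal sketch of the rescaling idea, assuming the commonly used power-law dependence of PMT gain on supply voltage (G ~ V**beta, with beta set by dynode count and material). The exponent value and function names below are assumptions for illustration, not the reported model.

```python
# Sketch of pulse-height rescaling to emulate a detector biased at a
# different PMT supply voltage. Assumes gain follows G ~ V**BETA; the
# exponent here is an invented illustrative value, not a fitted one.

BETA = 7.0  # assumed effective power-law exponent

def rescale_pulse_heights(pulse_heights, v_ref, v_new):
    """Rescale pulses recorded at v_ref to emulate acquisition at v_new."""
    scale = (v_new / v_ref) ** BETA
    return [p * scale for p in pulse_heights]

heights = [10.0, 20.0, 40.0]
# Same voltage: data unchanged. Higher voltage: every pulse grows.
emulated = rescale_pulse_heights(heights, v_ref=1000.0, v_new=1000.0)
boosted = rescale_pulse_heights(heights, v_ref=1000.0, v_new=1100.0)
```

Regenerating many such rescaled spectra is what lets response-matching algorithms be integration-tested without hardware.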
WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering.
Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin
2012-04-01
Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on (15)N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. The online server is under construction. 
Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa.
PMID:22328784
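The three-stage WaVPeak pipeline (wavelet smoothing, local-maxima detection, volume-based filtering) can be illustrated with a toy one-level Haar shrinkage. The thresholds and window size here are invented, and the real method uses more elaborate wavelet smoothing; this is a flavor-of-the-approach sketch only.

```python
# Toy version of the WaVPeak pipeline: (1) denoise the spectrum with one
# level of Haar wavelet shrinkage, (2) take local maxima as candidate
# peaks, (3) keep peaks whose local "volume" (sum over a small window)
# clears a threshold. All thresholds are illustrative assumptions.

def haar_denoise(x, thresh=0.5):
    """One-level Haar shrinkage; len(x) must be even."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [d if abs(d) > thresh else 0.0 for d in detail]  # shrink noise
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])  # reconstruct the smoothed signal
    return out

def pick_peaks(x, volume_thresh=3.0, half_window=1):
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1]:  # local maximum
            volume = sum(x[max(0, i - half_window): i + half_window + 1])
            if volume >= volume_thresh:           # volume-based filter
                peaks.append(i)
    return peaks

noisy = [0.1, 0.0, 0.2, 3.0, 3.1, 0.2, 0.1, 0.3, 0.2, 0.0]
peaks = pick_peaks(haar_denoise(noisy))
```

Because smoothing never discards data points, the weak shoulders of a real peak survive into the maxima search, which is the property the paper emphasizes.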
Peak picking NMR spectral data using non-negative matrix factorization.
Tikole, Suhas; Jaravine, Victor; Rogov, Vladimir; Dötsch, Volker; Güntert, Peter
2014-02-11
Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlap, leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments. To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890), as well as to synthetic HSQC data, shows that peaks can be picked accurately even in spectral regions with strong overlap.
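The decomposition idea can be illustrated with a toy non-negative matrix factorization using standard multiplicative updates. This is a generic NMF sketch, not the authors' algorithm, and the "mixed spectra" matrix below is synthetic.

```python
# Toy pure-Python NMF with multiplicative updates, illustrating how a
# region of overlapped peaks can be decomposed into nonnegative
# components. Generic textbook NMF, not the paper's implementation.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    m, n = len(V), len(V[0])
    # Deterministic positive initialization (breaks symmetry without RNG).
    W = [[1.0 + 0.1 * ((i + j) % 3) for j in range(k)] for i in range(m)]
    H = [[1.0 + 0.1 * ((i + j) % 2) for j in range(n)] for i in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# Three synthetic "spectra" mixing two overlapping peak shapes.
V = [[1.0, 2.0, 1.0, 0.2, 0.1],
     [0.5, 1.0, 0.6, 1.0, 0.5],
     [0.1, 0.2, 0.6, 2.0, 1.0]]
W, H = nmf(V, k=2)
approx = matmul(W, H)
err = sum(abs(approx[i][j] - V[i][j]) for i in range(3) for j in range(5))
```

The rows of `H` play the role of component peak shapes and the columns of `W` their per-spectrum amplitudes; nonnegativity is what keeps both physically interpretable.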
NASA Astrophysics Data System (ADS)
Kesseli, Aurora Y.; West, Andrew A.; Veyette, Mark; Harrison, Brandon; Feldman, Dan; Bochanski, John J.
2017-06-01
We present a library of empirical stellar spectra created using spectra from the Sloan Digital Sky Survey's Baryon Oscillation Spectroscopic Survey. The templates cover spectral types O5 through L3, are binned by metallicity from -2.0 dex through +1.0 dex, and are separated into main-sequence (dwarf) stars and giant stars. With recently developed M dwarf metallicity indicators, we are able to extend the metallicity bins down through the spectral subtype M8, making this the first empirical library with this degree of temperature and metallicity coverage. The wavelength coverage for the templates is from 3650 to 10200 Å at a resolution of better than R ∼ 2000. Using the templates, we identify trends in color space with metallicity and surface gravity, which will be useful for analyzing large data sets from upcoming missions like the Large Synoptic Survey Telescope. Along with the templates, we are releasing a code for automatically (and/or visually) identifying the spectral type and metallicity of a star.
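The released classification code is not reproduced here, but the core template-matching idea - normalize, compute a chi-squared-like residual against each template, keep the best - can be sketched as follows (all names and toy fluxes are invented).

```python
# Illustrative template matching for spectral typing: normalize the
# observed spectrum and each template, then pick the template with the
# smallest summed squared residual. Hypothetical names and toy fluxes,
# not the authors' released code.

def classify(observed, templates):
    """templates: {spectral_type: flux_list}; returns best-matching type."""
    def norm(flux):
        total = sum(flux)
        return [f / total for f in flux]
    obs = norm(observed)
    best_type, best_chi2 = None, float("inf")
    for stype, flux in templates.items():
        t = norm(flux)
        chi2 = sum((o - ti) ** 2 for o, ti in zip(obs, t))
        if chi2 < best_chi2:
            best_type, best_chi2 = stype, chi2
    return best_type

# Two toy templates: a flattish solar-type shape and a red-rising shape.
templates = {
    "G2": [1.0, 1.1, 1.2, 1.1, 1.0],
    "M4": [0.2, 0.4, 0.8, 1.6, 3.0],
}
result = classify([0.21, 0.39, 0.82, 1.58, 3.1], templates)
```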
Tutorial for the structure elucidation of small molecules by means of the LSD software.
Nuzillard, Jean-Marc; Plainchont, Bertrand
2018-06-01
Automatic structure elucidation of small molecules by means of the "logic for structure elucidation" (LSD) software is introduced in the context of the automatic exploitation of chemical shift correlation data and with minimal input from chemical shift values. The first step in solving a structural problem by means of LSD is the extraction of pertinent data from the 1D and 2D spectra. This operation requires the labeling of the resonances and of their correlations; its reliability highly depends on the quality of the spectra. The combination of COSY, HSQC, and HMBC spectra results in proximity relationships between nonhydrogen atoms that are associated in order to build the possible solutions of a problem. A simple molecule, camphor, serves as an example for the writing of an LSD input file and to show how solution structures are obtained. An input file for LSD must contain a nonambiguous description of each atom, or atom status, which includes the chemical element symbol, the hybridization state, the number of bound hydrogen atoms and the formal electric charge. In case of atom status ambiguity, the pyLSD program performs clarification by systematically generating the status of the atoms. PyLSD also proposes the use of the nmrshiftdb algorithm in order to rank the solutions of a problem according to the quality of the fit between the experimental carbon-13 chemical shifts, and the ones predicted from the proposed structures. To conclude, some hints toward future uses and developments of computer-assisted structure elucidation by LSD are proposed. Copyright © 2017 John Wiley & Sons, Ltd.
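The final ranking step - ordering candidate structures by the rms deviation between experimental and predicted carbon-13 shifts - can be sketched as below. The candidate shift lists are invented for illustration and are not nmrshiftdb predictions.

```python
# Sketch of the candidate-ranking step described above: structures are
# ordered by the rms deviation between experimental 13C shifts and the
# shifts predicted for each candidate. Shift values here are invented.

def rms_deviation(experimental, predicted):
    n = len(experimental)
    return (sum((e - p) ** 2 for e, p in zip(experimental, predicted)) / n) ** 0.5

def rank_candidates(experimental, candidates):
    """candidates: {name: predicted_shift_list}; best (lowest rms) first."""
    return sorted(candidates, key=lambda c: rms_deviation(experimental, candidates[c]))

# Toy experimental shifts and two hypothetical candidate structures:
# structure_A predicts them closely, structure_B does not.
exp_shifts = [219.7, 57.7, 46.8, 43.3, 43.0]
candidates = {
    "structure_A": [220.1, 57.9, 46.5, 43.6, 42.8],
    "structure_B": [205.0, 60.2, 50.1, 40.0, 38.5],
}
ranking = rank_candidates(exp_shifts, candidates)
```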
Review of LOGEX. Main Report and Appendixes A-I
1975-05-23
been developed on an RCA Spectra 70 machine located at the Army Logistics Management Center, Fort Lee, Virginia. This was undoubtedly an outstanding...Control Number ADP - Automatic Data Processing ACT - Active Duty for Training ALMC - US Army Logistics Management Center AMO - Ammunition AR - Army...Directorate CPT McClellan, LOGEX Directorate CPT Weaver, LOGEX Directorate United States Army Logistics Management Center Mr. Loper Mr. Ross United States
NASA Astrophysics Data System (ADS)
Han, Bin; Lob, Silvia; Sablier, Michel
2018-06-01
In this study, we report the use of pyrolysis-GCxGC/MS profiles for an optimized treatment of data issued from pyrolysis-GC/MS combined with the automatic deconvolution software Automated Mass Spectral Deconvolution and Identification System (AMDIS). The method is illustrated by the characterization of marker compounds of East Asian handmade papers: pyrolysis-GCxGC/MS data were examined to obtain information used for manually identifying low-concentration and co-eluting compounds in the 1D GC/MS data. The results showed that the higher separation power for co-eluting compounds and the better sensitivity for low-concentration compounds offered by a GCxGC system can be used effectively in AMDIS treatment of 1D GC/MS data: (i) the compound distribution in pyrolysis-GCxGC/MS profiles can be used as a "peak finder" for manually checking the identification of low-concentration and co-eluting compounds in 1D GC/MS data, and (ii) pyrolysis-GCxGC/MS profiles provide better-quality mass spectra, with higher match factors observed in the AMDIS automatic match process. The combination of the 2D profile with AMDIS was shown to contribute efficiently to a better characterization of compound profiles in the chromatograms obtained by 1D analysis by focusing on the mass spectral identification.
Update on Automated Classification of Interplanetary Dust Particles
NASA Technical Reports Server (NTRS)
Maroger, I.; Lasue, J.; Zolensky, M.
2018-01-01
Every year, the Earth accretes about 40,000 tons of extraterrestrial material less than 1 mm in size on its surface. These dust particles originate from active comets and from impacts between asteroids, and the very smallest may also come from interstellar space. Since 1981, the NASA Johnson Space Center (JSC) has been systematically collecting dust from Earth's stratosphere with airborne collectors and gathering it into "Cosmic Dust Catalogs". In those catalogs, a preliminary analysis of the dust particles based on SEM images, some geological characteristics and X-ray energy-dispersive spectrometry (EDS) composition is compiled. Based on those properties, the IDPs are classified into four main groups: C (Cosmic), TCN (Natural Terrestrial Contaminant), TCA (Artificial Terrestrial Contaminant) and AOS (Aluminium Oxide Sphere). Nevertheless, 20% of those particles remain ambiguously classified. Lasue et al. presented a methodology to help automatically classify the particles published in catalog 15 based on their EDS spectra and nonlinear multivariate projections (as shown in Fig. 1). This work allowed 155 of the 467 particles in catalog 15 to be relabeled, and some contaminants to be reclassified as potential cosmic dust. Further analyses of three such particles indicated their probable cosmic origin. The current work aims to bring complementary information to the automatic classification of IDPs to improve the identification criteria.
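A deliberately simple stand-in for such automatic classification is a nearest-centroid rule over binned EDS spectra. The published work uses nonlinear multivariate projections, so the snippet below (with invented toy centroids) only conveys the flavor of spectrum-based class assignment.

```python
# Toy nearest-centroid classifier over (coarsely binned) EDS spectra.
# Class labels follow the catalog scheme (C, TCA, AOS); the spectra and
# centroid vectors are invented, not real elemental abundances.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def classify_idp(spectrum, centroids):
    """Assign the class whose centroid is most similar to the spectrum."""
    return max(centroids, key=lambda label: cosine(spectrum, centroids[label]))

centroids = {
    "C":   [0.3, 0.3, 0.2, 0.1, 0.1],    # chondritic-like element mix
    "TCA": [0.0, 0.05, 0.05, 0.0, 0.9],  # dominated by one artificial element
    "AOS": [0.05, 0.9, 0.0, 0.05, 0.0],  # aluminium-oxide dominated
}
label = classify_idp([0.28, 0.33, 0.18, 0.11, 0.10], centroids)
```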
Autonomous Metabolomics for Rapid Metabolite Identification in Global Profiling
Benton, H. Paul; Ivanisevic, Julijana; Mahieu, Nathaniel G.; ...
2014-12-12
An autonomous metabolomic workflow combining mass spectrometry analysis with tandem mass spectrometry data acquisition was designed to allow for simultaneous data processing and metabolite characterization. Although tandem mass spectrometry data have previously been generated on the fly, the experiments described herein combine this technology with the bioinformatic resources of XCMS and METLIN. As a result of this unique integration, we can analyze large profiling datasets and simultaneously obtain structural identifications. Furthermore, validation of the workflow on bacterial samples allowed the profiling of on the order of a thousand metabolite features with simultaneous tandem mass spectral data acquisition. The tandem mass spectrometry data acquisition enabled automatic search and matching against the METLIN tandem mass spectrometry database, shortening the current workflow from days to hours. Overall, the autonomous approach to untargeted metabolomics provides an efficient means of metabolomic profiling, and will ultimately allow the more rapid integration of comparative analyses, metabolite identification, and data analysis at a systems biology level.
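The database-matching step can be sketched as a ppm-tolerance lookup of a precursor m/z. METLIN itself is queried as a web service, so the three-entry offline database below is an invented stand-in.

```python
# Sketch of precursor-mass matching within a ppm tolerance, the kind of
# lookup the automatic METLIN search performs. The mini-database below
# is a hard-coded stand-in for the real (web-service) database.

def ppm_error(measured, reference):
    return abs(measured - reference) / reference * 1e6

def match_metabolites(mz, database, tol_ppm=10.0):
    """Return all database names whose reference m/z is within tol_ppm."""
    return [name for name, ref in database if ppm_error(mz, ref) <= tol_ppm]

# [M+H]+ values; glutamine and lysine are isobaric at unit resolution
# but separated easily at ppm accuracy.
database = [("glutamate", 148.0604), ("glutamine", 147.0764), ("lysine", 147.1128)]
hits = match_metabolites(147.0759, database)
```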
Keller, Andrew; Bader, Samuel L.; Shteynberg, David; Hood, Leroy; Moritz, Robert L.
2015-01-01
Proteomics by mass spectrometry technology is widely used for identifying and quantifying peptides and proteins. The breadth and sensitivity of peptide detection have been advanced by the advent of data-independent acquisition mass spectrometry. Analysis of such data, however, is challenging due to the complexity of fragment ion spectra that have contributions from multiple co-eluting precursor ions. We present SWATHProphet software that identifies and quantifies peptide fragment ion traces in data-independent acquisition data, provides accurate probabilities to ensure results are correct, and automatically detects and removes contributions to quantitation originating from interfering precursor ions. Integration in the widely used open source Trans-Proteomic Pipeline facilitates subsequent analyses such as combining results of multiple data sets together for improved discrimination using iProphet and inferring sample proteins using ProteinProphet. This novel development should greatly help make data-independent acquisition mass spectrometry accessible to large numbers of users. PMID:25713123
NASA Astrophysics Data System (ADS)
Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.
2016-12-01
Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.
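A much-simplified version of the crosstalk correction: if each track is assumed to receive a fixed, pre-calibrated fraction of its neighbors' signal, that contribution can be subtracted bin by bin. The fraction and data below are illustrative; the algorithm in the article detects the overlap adaptively rather than using a fixed constant.

```python
# Simplified crosstalk subtraction between adjacent spectrometer tracks.
# Assumes a fixed, pre-calibrated overlap fraction (invented here); the
# published algorithm estimates the overlap from the image itself.

CROSSTALK = 0.05  # assumed fractional bleed from each adjacent track

def correct_crosstalk(tracks):
    """tracks: list of per-track intensity lists (equal length)."""
    corrected = []
    for i, track in enumerate(tracks):
        neighbors = [tracks[j] for j in (i - 1, i + 1) if 0 <= j < len(tracks)]
        corrected.append([
            # subtract the assumed bleed; clamp at zero
            max(0.0, v - CROSSTALK * sum(nb[k] for nb in neighbors))
            for k, v in enumerate(track)
        ])
    return corrected

# Middle track is faint and sits between two bright neighbors.
tracks = [[100.0, 50.0], [10.0, 10.0], [0.0, 80.0]]
clean = correct_crosstalk(tracks)
```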
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
Automatic NMR-Based Identification of Chemical Reaction Types in Mixtures of Co-Occurring Reactions
Latino, Diogo A. R. S.; Aires-de-Sousa, João
2014-01-01
The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications can range from process monitoring, metabolic profiling, authentication of products, to quality control. An application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the 1H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants) and the same was done for products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of 6 types. From this dataset, we simulated mixtures in which two reactions of different types occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the 1H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications increased to 99% and 80% for the same test sets. The RF probability associated with the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data.
Because no explicit structural information about the reaction participants is used, reaction elucidation is performed without structure elucidation of the molecules in the mixtures. PMID:24551112
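The descriptor construction described above - bin the simulated reactant and product spectra over a ppm grid and take the product-minus-reactant difference - can be sketched as follows (grid size and peak values are illustrative).

```python
# Sketch of the 1H NMR difference descriptor: reactant and product peak
# lists are binned over a ppm grid, and the product-minus-reactant
# vector represents the reaction mixture. Binning parameters and the
# toy peaks are illustrative, not the paper's settings.

def bin_spectrum(peaks, n_bins=10, ppm_max=10.0):
    """peaks: [(ppm, intensity)] -> fixed-length binned vector."""
    vec = [0.0] * n_bins
    width = ppm_max / n_bins
    for ppm, inten in peaks:
        idx = min(int(ppm / width), n_bins - 1)
        vec[idx] += inten
    return vec

def reaction_descriptor(reactant_peaks, product_peaks):
    r = bin_spectrum(reactant_peaks)
    p = bin_spectrum(product_peaks)
    return [pi - ri for pi, ri in zip(p, r)]

# Toy reduction: an aldehyde proton (~9.7 ppm) disappears and an alcohol
# methine (~3.9 ppm) appears; the methyl signal (~2.1 ppm) is unchanged.
reactants = [(9.7, 1.0), (2.1, 3.0)]
products = [(3.9, 1.0), (2.1, 3.0)]
descriptor = reaction_descriptor(reactants, products)
```

Vectors of this form are exactly what the self-organizing maps and random forests were trained on.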
Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C.; Cao, Xing-Jun; Bhanu, Natarajan V.; Wang, Xiaoshi; Sidoli, Simone; Liu, Shichong; Garcia, Benjamin A.
2015-01-01
Histone post-translational modifications contribute to chromatin function through their chemical properties which influence chromatin structure and their ability to recruit chromatin interacting proteins. Nanoflow liquid chromatography coupled with high resolution tandem mass spectrometry (nanoLC-MS/MS) has emerged as the most suitable technology for global histone modification analysis because of the high sensitivity and the high mass accuracy of this approach that provides confident identification. However, analysis of histones with this method is even more challenging because of the large number and variety of isobaric histone peptides and the high dynamic range of histone peptide abundances. Here, we introduce EpiProfile, a software tool that discriminates isobaric histone peptides using the distinguishing fragment ions in their tandem mass spectra and extracts the chromatographic area under the curve using previous knowledge about peptide retention time. The accuracy of EpiProfile was evaluated by analysis of mixtures containing different ratios of synthetic histone peptides. In addition to label-free quantification of histone peptides, EpiProfile is flexible and can quantify different types of isotopically labeled histone peptides. EpiProfile is unique in generating layouts (i.e. relative retention time) of histone peptides when compared with manual quantification of the data and other programs (such as Skyline), filling the need of an automatic and freely available tool to quantify labeled and non-labeled modified histone peptides. In summary, EpiProfile is a valuable nanoflow liquid chromatography coupled with high resolution tandem mass spectrometry-based quantification tool for histone peptides, which can also be adapted to analyze nonhistone protein samples. PMID:25805797
A LabVIEW-Based Virtual Instrument System for Laser-Induced Fluorescence Spectroscopy.
Wu, Qijun; Wang, Lufei; Zu, Lily
2011-01-01
We report the design and operation of a Virtual Instrument (VI) system based on LabVIEW 2009 for laser-induced fluorescence experiments. This system achieves synchronous control of equipment and acquisition of real-time fluorescence data communicating with a single computer via GPIB, USB, RS232, and parallel ports. The reported VI system can also accomplish data display, saving, and analysis, and printing the results. The VI system performs sequences of operations automatically, and this system has been successfully applied to obtain the excitation and dispersion spectra of α-methylnaphthalene. The reported VI system opens up new possibilities for researchers and increases the efficiency and precision of experiments. The design and operation of the VI system are described in detail in this paper, and the advantages that this system can provide are highlighted.
PMID:22013388
ASERA: A spectrum eye recognition assistant for quasar spectra
NASA Astrophysics Data System (ADS)
Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng
2013-11-01
Spectral type recognition is an important and fundamental step of large sky survey projects in the data reduction for further scientific research, like parameter measurement and statistical work. It turns out to be a huge job to manually inspect the low-quality spectra produced by a massive spectroscopic survey, for which the automatic pipeline may not provide confident type classification results. In order to improve the efficiency and effectiveness of spectral classification, we have developed a semi-automated toolkit named ASERA, A Spectrum Eye Recognition Assistant. The main purpose of ASERA is to help the user with quasar spectral recognition and redshift measurement. Furthermore, it can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). It is interactive software allowing the user to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. It is an efficient and user-friendly toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The toolkit is available in two modes: a Java standalone application and a Java applet. ASERA offers functions such as wavelength and flux scale setting, zoom in and out, redshift estimation, and spectral line identification, which help the user improve spectral classification accuracy, especially for low-quality spectra, and reduce the labor of eyeball checking. The function and performance of this tool are demonstrated through the recognition of several quasar spectra and a late-type stellar spectrum from the LAMOST Pilot Survey. Its future expansion capabilities are discussed.
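Redshift estimation by template superposition can be reduced, in sketch form, to a grid search that shifts a template's emission-line wavelengths and scores the match to observed line centers. ASERA's interactive procedure is richer than this, and the matching rule below is a simplification.

```python
# Sketch of template-based redshift estimation: shift the quasar
# template's emission-line rest wavelengths by (1 + z) over a grid of
# trial redshifts and keep the z minimizing the summed mismatch to the
# observed line centers. The matching rule is a simplification.

TEMPLATE_LINES = [1216.0, 1549.0, 2798.0]  # Ly-a, C IV, Mg II rest wavelengths (Angstrom)

def estimate_redshift(observed_lines, z_grid):
    def mismatch(z):
        total = 0.0
        for rest in TEMPLATE_LINES:
            shifted = rest * (1 + z)
            # distance to the nearest observed line center
            total += min(abs(shifted - obs) for obs in observed_lines)
        return total
    return min(z_grid, key=mismatch)

z_grid = [round(0.001 * i, 3) for i in range(4001)]  # 0.000 .. 4.000
observed = [2432.0, 3098.0, 5596.0]  # template lines placed at z = 1.0
z_best = estimate_redshift(observed, z_grid)
```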
Spreadsheet Toolkit for Ulysses Hi-Scale Measurements of Interplanetary Ions and Electrons
NASA Astrophysics Data System (ADS)
Reza, J. Z.; Lanzerotti, L. J.; Denker, C.; Patterson, D.; Amstrong, T. P.
2004-05-01
Throughout the entire Ulysses out-of-the-ecliptic solar polar mission, the Heliosphere Instrument for Spectra, Composition, and Anisotropy at Low Energies (HI-SCALE) has collected measurements of interplanetary ions and electrons. Time series of electron and ion fluxes obtained since 1990 have been carefully calibrated and will be stored in a data management system, which will be publicly accessible via the WWW. The goal of the Virtual Solar Observatory (VSO) is to provide data uniformly and efficiently to a diverse user community. However, data dissemination can only be a first step; it has to be followed by a suite of data analysis tools tailored to a diverse user community in science, technology, and education. The widespread use and familiarity of spreadsheets, which are available at low cost or open source for many operating systems, make them an interesting tool to investigate for the analysis of HI-SCALE data. The data are written in comma-separated values (CSV) format, which is commonly used in spreadsheet programs. CSV files can simply be linked as external data to spreadsheet templates, which in turn can be used to generate tables and figures of basic statistical properties and frequency distributions, temporal evolution of electron and ion spectra, comparisons of various energy channels, automatic detection of solar events, solar cycle variations, and space weather. Exploring spreadsheet-assisted data analysis in the context of information technology research, database information search and retrieval, and data visualization potentially impacts other VSO components, where diverse user communities are targeted. Finally, this presentation is the result of an undergraduate research project, which will allow us to evaluate the performance of user-based spreadsheet analysis "benchmarked" at the undergraduate skill level.
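The spreadsheet-style workflow described above — link a CSV, compute summary statistics, flag events — can be reproduced with nothing but a standard library. The sketch below uses hypothetical column names (`e_flux`, `ion_flux`); the real HI-SCALE/VSO export format may differ:

```python
import csv, io, statistics

# Stand-in for a linked HI-SCALE CSV export (column names are assumptions)
raw = "time,e_flux,ion_flux\n0,10.0,2.0\n1,12.0,2.5\n2,11.0,2.2\n3,40.0,9.0\n"
rows = list(csv.DictReader(io.StringIO(raw)))
e_flux = [float(r["e_flux"]) for r in rows]

mean = statistics.mean(e_flux)
stdev = statistics.stdev(e_flux)
# A crude spreadsheet-style "solar event detector": flag samples whose flux
# exceeds the mean by more than one standard deviation
events = [r["time"] for r, f in zip(rows, e_flux) if f > mean + stdev]
```

This is exactly the kind of formula a spreadsheet template would hold in a derived column, which is what makes the format attractive for an undergraduate-level audience.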
Sikirzhytski, Vitali; Sikirzhytskaya, Aliaksandra; Lednev, Igor K
2012-10-10
Conventional confirmatory biochemical tests used in the forensic analysis of body fluid traces found at a crime scene are destructive and not universal. Recently, we reported on the application of near-infrared (NIR) Raman microspectroscopy for non-destructive confirmatory identification of pure blood, saliva, semen, vaginal fluid and sweat. Here we expand the method to include dry mixtures of semen and blood. A classification algorithm was developed for differentiating pure body fluids and their mixtures. The classification methodology is based on an effective combination of Support Vector Machine (SVM) regression (data selection) and SVM Discriminant Analysis of preprocessed experimental Raman spectra collected by automatic mapping of the sample. Extensive cross-validation of the results demonstrated that the detection limit for the minor contributor is as low as a few percent. The developed methodology can be further expanded to any binary mixture of complex solutions, including but not limited to mixtures of other body fluids. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Time-resolved spectral analysis of the pulsating helium star V652 Her
NASA Astrophysics Data System (ADS)
Jeffery, C. S.; Woolf, V. M.; Pollacco, D. L.
2001-09-01
A series of 59 moderate-resolution high signal-to-noise spectra of the pulsating helium star V652 Her covering 1.06 pulsation cycles was obtained with the William Herschel Telescope. These have been supplemented by archival ultraviolet and visual spectrophotometry and used to make a time-dependent study of the properties of V652 Her throughout the pulsation cycle. This study includes the following features: the most precise radial velocity curve for V652 Her measured so far, new software for the automatic measurement of effective temperature, surface gravity and projected rotation velocities from moderate-resolution spectra, self-consistent high-precision measurements of effective temperature and surface gravity around the pulsation cycle, a demonstration of excessive line-broadening at minimum radius and evidence for a pulsation-driven shock front, a new method for the direct measurement of the radius of a pulsating star using radial velocity and surface gravity measurements alone, new software for the automatic measurement of chemical abundances and microturbulent velocity, updated chemical abundances for V652 Her compared with previous work (Jef99), a reanalysis of the total flux variations (cf. Lyn84) in good agreement with previous work, and revised measurements of the stellar mass and radius which are similar to recent results for another pulsating helium star, BX Cir. Masses measured without reference to the ultraviolet fluxes turn out to be unphysically low (~0.18 M☉). The best estimate for the dimensions of V652 Her averaged over the pulsation cycle is given by: ⟨Teff⟩ = 22 930 ± 10 K and ⟨log g⟩ = 3.46 ± 0.05 (ionization equilibrium), ⟨Teff⟩ = 20 950 ± 70 K (total flux method), ⟨R⟩ = 2.31 ± 0.02 R☉, ⟨L⟩ = 919 ± 14 L☉, M = 0.59 ± 0.18 M☉ and d = 1.70 ± 0.02 kpc. Two significant problems were encountered.
First, the line-blanketed hydrogen-deficient model atmospheres used yield effective temperatures from the optical spectrum (ionization equilibrium) and from visual and UV photometry (bolometric flux) that are mutually inconsistent. Second, the IUE spectra are poorly distributed in phase and have low signal-to-noise ratios. These problems may introduce systematic errors of up to 0.1 M☉. Based on observations obtained with the William Herschel Telescope, the United Kingdom Infrared Telescope, and on INES data from the IUE satellite.
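The mass quoted in the abstract follows directly from the surface gravity and radius via M = gR²/G, which is the essence of measuring the star's dimensions from spectroscopy alone. A quick numerical check in cgs units (constants and quoted values from the abstract; rounding is ours):

```python
import math

G = 6.674e-8           # gravitational constant, cgs
R_sun = 6.957e10       # solar radius, cm
M_sun = 1.989e33       # solar mass, g

log_g = 3.46           # <log g> from the ionization-equilibrium analysis
R = 2.31 * R_sun       # <R> in cm, from the RV + gravity method

# M = g R^2 / G, expressed in solar masses
M = (10.0 ** log_g) * R ** 2 / G / M_sun
```

The result, about 0.56 M☉, is consistent with the quoted M = 0.59 ± 0.18 M☉, illustrating how the gravity and radius measurements alone pin down the mass.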
Advanced dosimetry systems for the space transport and space station
NASA Technical Reports Server (NTRS)
Wailly, L. F.; Schneider, M. F.; Clark, B. C.
1972-01-01
Advanced dosimetry system concepts are described that will provide automated and instantaneous measurement of dose and particle spectra. Systems are proposed for measuring dose rates from cosmic radiation background to greater than 3600 rads/hr. Charged particle spectrometers, both internal and external to the spacecraft, are described for determining mixed-field energy spectra and particle fluxes for both real-time onboard and ground-based computer evaluation of the radiation hazard. Automated passive dosimetry systems consisting of thermoluminescent dosimeters and activation techniques are proposed for recording the dose levels of twelve or more crew members. This system will allow automatic onboard readout and storage of the accumulated dose; the data can be transmitted to the ground after readout, or the records recovered with each crew rotation.
SLAM, a Mathematica interface for SUSY spectrum generators
NASA Astrophysics Data System (ADS)
Marquard, Peter; Zerf, Nikolai
2014-03-01
We present and publish a Mathematica package which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators that follow the SLHA standard, such as SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables very convenient numerical evaluation within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios such as mSUGRA or mhmax, and if needed lets the user directly specify the input required by the spectrum generators. In addition, it supports automatic saving and loading of SUSY spectra to and from a SQL database, avoiding the rerun of a spectrum generator for a known spectrum. Catalogue identifier: AERX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4387 No. of bytes in distributed program, including test data, etc.: 37748 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer where Mathematica version 6 or higher is running, providing bash and sed. Operating system: Linux. Classification: 11.1. External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra. Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database. Restrictions: No general restrictions, specific restrictions are given in the manuscript. Running time: A single spectrum calculation takes much less than one second on a modern PC.
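The caching idea — key each generated spectrum by its input parameters so a known spectrum is loaded instead of rerunning the generator — can be sketched outside Mathematica. The sketch below uses Python's stdlib sqlite3 module and a hash of the parameter set as the key; it is not SLAM's actual SQL schema:

```python
import sqlite3, json, hashlib

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE spectra (key TEXT PRIMARY KEY, spectrum TEXT)")

def get_spectrum(params, generator):
    """Return a cached spectrum for `params`, running `generator` only on a miss."""
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    row = db.execute("SELECT spectrum FROM spectra WHERE key=?", (key,)).fetchone()
    if row:
        return json.loads(row[0])          # cache hit: no generator rerun
    spectrum = generator(params)           # cache miss: run the (slow) generator
    db.execute("INSERT INTO spectra VALUES (?, ?)", (key, json.dumps(spectrum)))
    return spectrum

calls = []
def fake_generator(p):                     # stand-in for SPheno/SOFTSUSY etc.
    calls.append(p)
    return {"Mh": 125.0}

s1 = get_spectrum({"m0": 1000, "tanb": 10}, fake_generator)
s2 = get_spectrum({"m0": 1000, "tanb": 10}, fake_generator)  # served from cache
```

Sorting the JSON keys before hashing makes the cache key independent of parameter ordering, which is the main subtlety in this pattern.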
National Radar Conference, Los Angeles, CA, March 12, 13, 1986, Proceedings
NASA Astrophysics Data System (ADS)
The topics discussed include radar systems, radar subsystems, and radar signal processing. Papers are presented on millimeter-wave radar for proximity fuzing of smart munitions, a solid-state low-pulse-power ground surveillance radar, and the Radarsat prototype synthetic-aperture radar signal processor. Consideration is also given to automatic track quality assessment in ADT radar systems and to instrumentation for RCS measurements of modulation spectra of aircraft blades.
Youn, Jung-Ho; Drake, Steven K.; Weingarten, Rebecca A.; Frank, Karen M.; Dekker, John P.
2015-01-01
Rapid detection of blaKPC-containing organisms can significantly impact infection control and clinical practices, as well as therapeutic choices. Current molecular and phenotypic methods to detect these organisms, however, require additional testing beyond routine organism identification. In this study, we evaluated the clinical performance of matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) to detect pKpQIL_p019 (p019)—an ∼11,109-Da protein associated with certain blaKPC-containing plasmids that was previously shown to successfully track a clonal outbreak of blaKPC-pKpQIL-Klebsiella pneumoniae in a proof-of-principle study (A. F. Lau, H. Wang, R. A. Weingarten, S. K. Drake, A. F. Suffredini, M. K. Garfield, Y. Chen, M. Gucek, J. H. Youn, F. Stock, H. Tso, J. DeLeo, J. J. Cimino, K. M. Frank, and J. P. Dekker, J Clin Microbiol 52:2804–2812, 2014, http://dx.doi.org/10.1128/JCM.00694-14). PCR for the p019 gene was used as the reference method. Here, blind analysis of 140 characterized Enterobacteriaceae isolates using two protein extraction methods (plate extraction and tube extraction) and two peak detection methods (manual and automated) showed sensitivities and specificities ranging from 96% to 100% and from 95% to 100%, respectively (2,520 spectra analyzed). Feasible laboratory implementation methods (plate extraction and automated analysis) demonstrated 96% sensitivity and 99% specificity. All p019-positive isolates (n = 26) contained blaKPC and were carbapenem resistant. Retrospective analysis of an additional 720 clinical Enterobacteriaceae spectra found an ∼11,109-Da signal in nine spectra (1.3%), including seven from p019-containing, carbapenem-resistant isolates (positive predictive value [PPV], 78%). Instrument tuning had a significant effect on assay sensitivity, highlighting important factors that must be considered as MALDI-TOF MS moves into applications beyond microbial identification. 
Using a large blind clinical data set, we have shown that spectra acquired for routine organism identification can also be analyzed automatically in real time at high throughput, at no additional expense to the laboratory, to enable rapid detection of potentially blaKPC-containing carbapenem-resistant isolates, providing early and clinically actionable results. PMID:26338858
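The automated peak-detection step — deciding whether a spectrum contains a peak near ~11,109 Da that rises sufficiently above background — can be sketched as a simple windowed signal-to-noise test. The window width and SNR threshold below are illustrative, not the published assay's tuned values:

```python
import numpy as np

def detect_marker(mz, intensity, target=11109.0, tol=15.0, snr=5.0):
    """Flag a spectrum as p019-positive if a peak within `tol` Da of `target`
    exceeds `snr` times the spectrum-wide median background (illustrative)."""
    window = (mz > target - tol) & (mz < target + tol)
    background = np.median(intensity)
    return bool(intensity[window].max() > snr * background)

mz = np.linspace(2000.0, 20000.0, 5000)
spec = np.random.default_rng(0).random(5000)       # noise floor, median ~0.5
spec[np.abs(mz - 11109.0).argmin()] += 20.0        # spiked marker peak

positive = detect_marker(mz, spec)
negative = detect_marker(mz, np.random.default_rng(1).random(5000))
```

The abstract's observation about instrument tuning corresponds to the fragility of such thresholds: a detector tuned differently shifts both the peak intensity and the background, so the SNR cutoff must be validated per instrument.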
ERIC Educational Resources Information Center
Kuhn, Stephanie A. Contrucci; Triggs, Mandy
2009-01-01
Self-injurious behavior (SIB) that occurs at high rates across all conditions of a functional analysis can suggest automatic or multiple functions. In the current study, we conducted a functional analysis for 1 individual with SIB. Results indicated that SIB was, at least in part, maintained by automatic reinforcement. Further analyses using…
NASA Astrophysics Data System (ADS)
Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.
1995-04-01
A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups to their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague, verbal rules are handled by introducing linguistic variables. The next step was the development of automatic rule-generating procedures: we combined and extended the IDIOTS algorithm with Blaffert's set-theoretic algorithm. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.
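A linguistic variable turns a vague verbal rule like "a strong band *around* 1700 cm⁻¹" into a graded membership function instead of a hard interval test. A minimal sketch (the trapezoidal shape, interval and fuzz width are illustrative choices, not SpecInfo's actual representation):

```python
def membership(x, lo, hi, fuzz=15.0):
    """Trapezoidal membership: 1.0 inside [lo, hi], falling linearly to 0.0
    over a `fuzz`-wide shoulder on either side (units: cm^-1)."""
    if lo <= x <= hi:
        return 1.0
    if x < lo - fuzz or x > hi + fuzz:
        return 0.0
    return (1.0 - (lo - x) / fuzz) if x < lo else (1.0 - (x - hi) / fuzz)

# Rule sketch: C=O stretch expected at roughly 1680-1750 cm^-1.
# A band observed at 1672 cm^-1 fails a hard interval test but still gets
# substantial support from the fuzzy rule:
m = membership(1672.0, 1680.0, 1750.0)
```

Combining such graded memberships across several bands lets the interpreter rank candidate structural groups rather than reject them outright on a single slightly-shifted band.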
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently lost in the compression. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
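The quantity an IMNN maximizes can be estimated from simulations: for a summary s with mean μ(θ) and variance C, the 1-D Fisher information is F = (dμ/dθ)²/C, with dμ/dθ from finite differences of simulations at θ ± Δθ. The sketch below uses the abstract's Gaussian-variance toy case with a hand-picked summary (mean of x²) standing in for the learned network:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, d_theta, n_sims, n_data = 2.0, 0.05, 4000, 100

def summary(x):
    # Hand-crafted (near-sufficient) summary for data x ~ N(0, theta);
    # an IMNN would learn an equivalent non-linear functional.
    return (x ** 2).mean(axis=-1)

s_plus = summary(rng.normal(0.0, np.sqrt(theta + d_theta), (n_sims, n_data)))
s_minus = summary(rng.normal(0.0, np.sqrt(theta - d_theta), (n_sims, n_data)))
s_fid = summary(rng.normal(0.0, np.sqrt(theta), (n_sims, n_data)))

dmu = (s_plus.mean() - s_minus.mean()) / (2.0 * d_theta)  # d<summary>/d theta
fisher = dmu ** 2 / s_fid.var()
# Exact Fisher information for the variance of N(0, theta) with n_data = 100
# samples is n_data / (2 theta^2) = 12.5, which the estimate should approach.
```

Training replaces the hand-picked `summary` with a network whose weights are adjusted by gradient ascent on exactly this Fisher estimate.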
NASA Technical Reports Server (NTRS)
Lincoln, K. A.; Bechtel, R. D.
1986-01-01
Recent advances in commercially available data-acquisition electronics, combining high-speed A/D conversion with increased memory storage, have made it practical (at least within time intervals of a third of a millisecond or more) to capture all of the data generated by a high-repetition-rate time-of-flight mass spectrometer producing complete spectra every 25 to 35 microseconds. Such a system was assembled and interfaced with a personal computer for control and data management. Applications are described for recording time-resolved spectra of individual vapor plumes induced by pulsed-laser heating of materials. Each laser pulse triggers the system to automatically generate a three-dimensional (3-D) presentation of the time-resolved spectra with m/z labeling of the major mass peaks, plus an intensity-versus-time display of both the laser pulse and the resulting vapor pulse. The software also permits storing the data and presenting it in various additional forms.
Si, Jian-min; Luo, A-li; Wu, Fu-zhao; Wu, Yi-hong
2015-03-01
There are many valuable rare and unusual objects, such as special white dwarfs (DZ, DQ, DC), carbon stars, white dwarf main-sequence binaries (WDMS) and cataclysmic variable (CV) stars, in the spectral dataset of Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8), so it is highly worthwhile to search for rare and unusual celestial objects in massive spectral datasets. A novel algorithm based on kernel density estimation and K-nearest neighbors (KNN) is presented and applied to search for rare and unusual objects among 546,383 stellar spectra of SDSS DR8. The density of each spectrum is estimated with Gaussian kernel density estimation; the 5,000 spectra with the lowest densities are selected as rare-object candidates, and the 300,000 spectra with the highest densities are selected as normal objects. KNN is then used to classify the remaining objects, and the K nearest neighbors of the 5,000 rare spectra are also selected as rare objects. In total, 21,193 spectra are selected as initial rare spectra, including erroneous spectra caused by data deletion, reddening or bad calibration, spectra consisting of physically unrelated components, planetary nebulae, QSOs, special white dwarfs (DZ, DQ, DC), carbon stars, WDMS, CV stars and so on. By cross-identification with SIMBAD, NED, ADS and the major literature, three DZ white dwarfs, one WDMS, two CVs with a G-type companion, three CV candidates, six DC white dwarfs, one DC white dwarf candidate and one BL Lacertae (BL Lac) candidate are found to be new discoveries. We have also found one unusual DA white dwarf with emission lines of the Ca II triplet and Mg I, and one unknown object whose spectrum resembles a late-M star with emission lines but whose image resembles a galaxy or nebula.
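The density-based rare-object selection can be sketched in a few lines: score each point by its summed Gaussian-kernel similarity to all points, then take the lowest-scoring points as rare candidates. The sketch below works in a 2-D toy feature space rather than full spectra, and the bandwidth is an illustrative choice:

```python
import numpy as np

def gaussian_kde_scores(X, bandwidth=0.5):
    """Unnormalized Gaussian kernel density score of each row of X
    (pairwise, fine for small n; real pipelines use approximations)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(1)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (200, 2))        # bulk of "spectra" features
rare = np.array([[8.0, 8.0], [-9.0, 7.0]])     # isolated outliers
X = np.vstack([normal, rare])

scores = gaussian_kde_scores(X)
rare_idx = set(np.argsort(scores)[:2])         # lowest density = rare objects
```

The subsequent KNN step in the paper then propagates these labels: anything whose nearest neighbors are rare candidates joins the rare set, which is why the final count (21,193) exceeds the initial 5,000.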
Edmands, William M B; Petrick, Lauren; Barupal, Dinesh K; Scalbert, Augustin; Wilson, Mark J; Wickliffe, Jeffrey K; Rappaport, Stephen M
2017-04-04
A long-standing challenge of untargeted metabolomic profiling by ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS) is the efficient transition from unknown mass-spectral features to confident metabolite annotations. The compMS2Miner (Comprehensive MS2 Miner) package was developed in the R language to facilitate rapid, comprehensive feature annotation using peak-picker output and MS2 data files as inputs. The number of MS2 spectra collected during a metabolomic profiling experiment far outweighs the time available for painstaking manual interpretation; therefore, a degree of software workflow autonomy is required for broad-scale metabolite annotation. compMS2Miner integrates many useful tools in a single annotation workflow and provides a means to overview the MS2 data with a Web application GUI, compMS2Explorer (Comprehensive MS2 Explorer), that also facilitates data sharing and transparency. The automatable compMS2Miner workflow consists of the following steps: (i) matching unknown MS1 features to precursor MS2 scans, (ii) filtration of spectral noise (dynamic noise filter), (iii) generation of composite mass spectra by summation of multiple similar spectra and removal of redundant/contaminant spectra, (iv) interpretation of possible fragment-ion substructures using an internal database, (v) annotation of unknowns against chemical and spectral databases, with prediction of mammalian biotransformation metabolites, wrapper functions for in silico fragmentation software, nearest-neighbor chemical similarity scoring, random-forest-based retention time prediction, text-mining-based false positive removal/true positive ranking, chemical taxonomic prediction and differential-evolution-based global annotation score optimization, and (vi) network graph visualization, data curation, and sharing via the compMS2Explorer application.
Metabolite identities and comments can also be recorded using an interactive table within compMS2Explorer. The utility of the package is illustrated with a data set of blood serum samples from 7 diet-induced obese (DIO) and 7 non-obese (NO) C57BL/6J mice, which were also treated with an antibiotic (streptomycin) to knock down the gut microbiota. The results of fully autonomous and objective usage of compMS2Miner are presented here. All automatically annotated spectra output by the workflow are provided in the Supporting Information and can alternatively be explored as publicly available compMS2Explorer applications for both positive and negative modes ( https://wmbedmands.shinyapps.io/compMS2_mouseSera_POS and https://wmbedmands.shinyapps.io/compMS2_mouseSera_NEG ). The workflow provided rapid annotation of a diversity of endogenous and gut-microbially derived metabolites affected by both diet and antibiotic treatment, conforming to previously published reports. Composite spectra (n = 173) were autonomously matched to entries of the MassBank of North America (MoNA) spectral repository. These experimental and virtual (LipidBlast) spectra corresponded to 29 common endogenous compound classes (e.g., 51 lysophosphatidylcholine spectra) and were then used to assess the ranking capability of 7 individual scoring metrics. An average of the 7 individual scoring metrics provided the most effective ranking, with a weighted-average rank of 3 for the MoNA-matched spectra, in spite of the potential risk of false-positive annotations emerging from automation. In several cases, minor structural differences such as relative carbon-carbon double-bond positions affected the correct rank of the MoNA-annotated metabolite.
The latest release and an example workflow are available in the package vignette ( https://github.com/WMBEdmands/compMS2Miner ), and a version of the published application is available on the shinyapps.io site ( https://wmbedmands.shinyapps.io/compMS2Example ).
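Step (i) of the workflow above — pairing MS1 features with MS2 scans by precursor m/z and retention time — reduces to a tolerance match. A sketch (in Python rather than the package's R; tolerance defaults and data layout are illustrative):

```python
def match_features(ms1_features, ms2_scans, ppm=10.0, rt_tol=20.0):
    """Pair each MS1 feature (m/z, RT seconds) with MS2 scans whose precursor
    m/z lies within `ppm` parts-per-million and RT within `rt_tol` seconds."""
    matches = {}
    for fid, (mz, rt) in ms1_features.items():
        matches[fid] = [sid for sid, (pmz, prt) in ms2_scans.items()
                        if abs(pmz - mz) / mz * 1e6 <= ppm
                        and abs(prt - rt) <= rt_tol]
    return matches

ms1 = {"F1": (301.1410, 125.0), "F2": (455.2901, 340.0)}
ms2 = {"S1": (301.1412, 118.0),   # matches F1: 0.7 ppm, 7 s away
       "S2": (301.1800, 120.0),   # ~130 ppm off F1: rejected
       "S3": (455.2899, 600.0)}   # m/z matches F2 but RT is 260 s away
m = match_features(ms1, ms2)
```

Features left with an empty hit list (like F2 here) are exactly the ones that proceed without fragment-level evidence, which is why downstream scoring metrics are averaged rather than trusted individually.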
NASA Astrophysics Data System (ADS)
Jiang, Yu; Li, Changying; Takeda, Fumiomi
2016-10-01
Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising.
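The bruise ratio index — bruised area divided by whole-fruit area — can be computed directly from a per-pixel classification mask. The label encoding below (0 = background, 1 = healthy fruit, 2 = bruised) is an assumption for illustration, not the paper's exact pipeline:

```python
import numpy as np

# Toy classification mask for a single berry
mask = np.zeros((10, 10), dtype=int)
mask[2:8, 2:8] = 1                 # fruit region: 36 pixels
mask[4:6, 4:6] = 2                 # bruised sub-region: 4 pixels

fruit = np.count_nonzero(mask > 0)         # whole-fruit area (healthy + bruised)
bruised = np.count_nonzero(mask == 2)
bruise_ratio = bruised / fruit             # 4 / 36 ~ 0.11
```

In the actual study the per-pixel labels come from SVM classification of the hyperspectral signatures, so the index inherits the classifier's accuracy (94-96% as reported above).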
A classification model of Hyperion image base on SAM combined decision tree
NASA Astrophysics Data System (ADS)
Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin
2009-10-01
Monitoring the Earth with imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space, and as the input dimensionality increases the hypothesis space grows exponentially, which makes classification performance highly unreliable. Classification of hyperspectral images is therefore challenging for traditional algorithms, and new algorithms have to be developed. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra: it determines the similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how reasonable that threshold is. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It automatically chooses an appropriate SAM threshold and improves the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, improving the classification precision.
Compared with likelihood classification based on field survey data, the classification precision of this model is higher by 9.9%.
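The SAM computation itself is compact: the angle between a pixel spectrum and a reference spectrum, treated as vectors in band space. A minimal sketch (Python/NumPy for illustration; the band values are synthetic):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle in radians between spectra a and b."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

ref = np.array([0.2, 0.4, 0.6, 0.8])      # 4-band reference spectrum
pixel_same = 3.0 * ref                    # same shape, brighter illumination
pixel_other = np.array([0.8, 0.6, 0.4, 0.2])

angle_same = spectral_angle(pixel_same, ref)     # ~0: SAM ignores brightness
angle_other = spectral_angle(pixel_other, ref)   # large angle: different material
```

Because the angle is invariant to overall scaling, SAM is insensitive to illumination differences; the decision-tree layer proposed in the paper then supplies the per-class angle thresholds that SAM alone leaves to the analyst.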
Tidal analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data
2017-01-01
…files, organized by location. The data were processed using the Python programming language (van Rossum and Drake 2001) and the Pandas data analysis library. (ERDC/CHL TR-17-2, Coastal Inlets Research Program, January 2017; author: Brandan M. Scully.)
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as the key bottleneck for applying such models in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The kernel algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte-Carlo Sampling) is used for automatic identification of the parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models was developed. Finally, taking a typical water pipe network as a case, a study on automatic model parameter identification was conducted and satisfactory results were achieved.
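The MCS step can be sketched as: draw candidate parameter values, run the hydraulic model for each, and keep the candidate whose predictions best match the SCADA observations. The one-parameter "hydraulic model" below is a toy linear stand-in, not a real network solver:

```python
import random

random.seed(42)

def simulate_pressure(roughness):
    """Toy hydraulic model: nodal pressure falls linearly with pipe roughness.
    A real application would call a network solver (e.g. EPANET) here."""
    return 50.0 - 120.0 * roughness

observed = simulate_pressure(0.15)        # pretend SCADA measured this value

# Monte-Carlo sampling over the plausible roughness range; keep the sample
# whose simulated pressure is closest to the observation
best = min((random.uniform(0.05, 0.30) for _ in range(5000)),
           key=lambda r: abs(simulate_pressure(r) - observed))
```

The RSA step earlier in the methodology narrows which parameters are worth sampling at all, keeping the Monte-Carlo search tractable for networks with hundreds of pipes.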
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, Ru-yang; Li, Xiang-ru
2017-07-01
Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with layer sizes 3821-500-100-50-1. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to automatically estimate three physical parameters: the effective temperature (Teff), the surface gravitational acceleration (lg g), and the metallicity ([Fe/H]). The results show that the stacked-autoencoder deep neural network achieves good estimation accuracy. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg (Teff/K), 0.1706 for lg (g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg (Teff/K), 0.0214 for lg (g/(cm·s-2)), and 0.0121 dex for [Fe/H].
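The 3821-500-100-50-1 architecture can be sanity-checked with a plain forward pass: a 3821-pixel spectrum in, one atmospheric parameter out. The sketch below uses random weights and tanh activations purely to verify shapes; the trained stacked-autoencoder weights and the paper's exact activation choices are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3821, 500, 100, 50, 1]
# One weight matrix per layer transition (biases omitted for brevity)
weights = [rng.normal(0.0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Map a batch of spectra (n, 3821) to one parameter estimate each."""
    for W in weights[:-1]:
        x = np.tanh(x @ W)        # hidden layers
    return x @ weights[-1]        # linear output, e.g. a Teff estimate

out = forward(rng.normal(0.0, 1.0, (8, 3821)))   # batch of 8 synthetic spectra
```

Three such networks (or one with a 3-unit output) would cover Teff, lg g and [Fe/H]; the paper reports training a separate estimate for each parameter.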
Peak picking NMR spectral data using non-negative matrix factorization
2014-01-01
Background: Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlap, leading to significant peak-picking intensity errors that can result in erroneous structural restraints; precise frequencies are critical for unambiguous resonance assignments. Results: To alleviate this problem, a more sophisticated peak-decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. Peak shapes are produced from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied when the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Conclusions: Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890), as well as to synthetic HSQC data, shows that peaks can be picked accurately even in spectral regions with strong overlap. PMID:24511909
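The core NMF decomposition can be sketched with plain Lee-Seung multiplicative updates: factor a nonnegative data matrix V into nonnegative W (mixing weights) and H (component shapes). The synthetic overlapping Gaussian "peaks" below are illustrative; the published algorithm adds lineshape priors and known-position constraints on top of this:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
# Two strongly overlapping peak shapes (the true components H)
comp = np.vstack([np.exp(-((x - 0.45) / 0.05) ** 2),
                  np.exp(-((x - 0.55) / 0.05) ** 2)])
# Two observed "spectra", each a positive mixture of both components
V = np.array([[1.0, 0.2], [0.3, 1.0]]) @ comp + 1e-6

# Lee-Seung multiplicative updates for min ||V - WH||_F with W, H >= 0
W = rng.random((2, 2))
H = rng.random((2, 200))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative residual
```

Nonnegativity is what makes the recovered components interpretable as individual peak shapes, which is the property the peak picker exploits in crowded NOESY regions.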
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
Ueda, Fumiaki; Aburano, Hiroyuki; Ryu, Yasuji; Yoshie, Yuichi; Nakada, Mitsutoshi; Hayashi, Yutaka; Matsui, Osamu; Gabata, Toshifumi
2017-07-10
The purpose of this study was to discriminate supratentorial intraventricular subependymoma (SIS) from central neurocytoma (CNC) using magnetic resonance spectroscopy (MRS). Single-voxel proton MRS spectra acquired with a 1.5T or 3T MR scanner from five SISs, five CNCs, and normal controls were evaluated using a point-resolved spectroscopy sequence. Automatically calculated ratios comparing choline (Cho), N-acetylaspartate (NAA), myoinositol (MI), and/or glycine (Gly) to creatine (Cr) were determined. Evaluation of Cr relative to unsuppressed water (USW) was also performed. The Mann-Whitney U test was carried out to test the significance of differences in the metabolite ratios. Detectability of lactate (Lac) and alanine (Ala) was evaluated. Although a statistically significant difference (P < 0.0001) was observed in Cho/Cr among SIS, control spectra, and CNC, no statistical difference was noted between SIS and control spectra (P = 0.11). Statistically significant differences were observed in NAA/Cr between SIS and CNC (P = 0.04) or control spectra (P < 0.0001). A statistically significant difference was observed in MI and/or Gly to Cr between SIS and control spectra (P = 0.03), and between CNC and control spectra (P < 0.0006). There was no statistical difference between SIS and CNC for MI and/or Gly to Cr (P = 0.32). Statistically significant differences were found between SIS and control spectra (P < 0.0053), control spectra and CNC (P < 0.0016), and SIS and CNC (P < 0.0083) for Cr to USW. Lac inverted doublets were confirmed in two SISs. Triplets of Lac and Ala were detected in four spectra of CNC. The present study showed that MRS can be useful in discriminating SIS from CNC.
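The Mann-Whitney U comparison of metabolite ratios takes only a few lines with SciPy; the Cho/Cr values below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical Cho/Cr ratios for the two tumour groups (illustrative only).
sis = np.array([0.61, 0.58, 0.66, 0.63, 0.60])
cnc = np.array([2.10, 1.85, 2.40, 1.95, 2.25])

stat, p = mannwhitneyu(sis, cnc, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

With two fully separated groups of five, the exact two-sided p-value is 2/252, i.e. about 0.008.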
Sensory Information Processing and Symbolic Computation
1973-12-31
A problem that plagues all image deblurring methods when working with high signal-to-noise ratios is that of a ringing or ghost image phenomenon which surrounds high... Figure 11: The Impulse Response of an All-Pass Random Phase Filter. Figure 12(a): Unsmoothed Log Spectra of the Sentence "The pipe began to..." The report covers automatic deblurring of images, linear predictive coding of speech, and the refinement and application of mathematical models of human vision.
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December... Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
NASA Astrophysics Data System (ADS)
Zhang, Lili; Merényi, Erzsébet; Grundy, William M.; Young, Eliot F.
2010-07-01
The near-infrared spectra of icy volatiles collected from planetary surfaces can be used to infer surface parameters, which in turn may depend on the recent geologic history. The high dimensionality and complexity of the spectral data, the subtle differences between the spectra, and the highly nonlinear interplay between surface parameters make it often difficult to accurately derive these surface parameters. We use a neural machine, with a Self-Organizing Map (SOM) as its hidden layer, to infer the latent physical parameters, temperature and grain size from near-infrared spectra of crystalline H2O ice. The output layer of the SOM-hybrid machine is customarily trained with only the output from the SOM winner. We show that this scheme prevents simultaneous achievement of high prediction accuracies for both parameters. We propose an innovative neural architecture we call Conjoined Twins that allows multiple (k) SOM winners to participate in the training of the output layer and in which the customization of k can be limited automatically to a small range. With this novel machine we achieve scientifically useful accuracies, 83.0 ± 2.7% and 100.0 ± 0.0%, for temperature and grain size, respectively, from simulated noiseless spectra. We also show that the performance of the neural model is robust under various noisy conditions. A primary application of this prediction capability is planned for spectra returned from the Pluto-Charon system by New Horizons.
Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.
Frieauff, W; Martus, H J; Suter, W; Elhajouji, A
2013-01-01
The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure, as well as flow cytometric approaches, have been discussed. ROBIAS (Robotic Image Analysis System), for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes, was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as the cytotoxicity parameter and for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The ability to analyse 24 slides within 65 h by automatic analysis over the weekend, together with the high reproducibility of the results, makes automatic image processing a powerful tool for micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS, which supports various assays at Novartis.
DELINEATING SUBTYPES OF SELF-INJURIOUS BEHAVIOR MAINTAINED BY AUTOMATIC REINFORCEMENT
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.
2016-01-01
Self-injurious behavior (SIB) is maintained by automatic reinforcement in roughly 25% of cases. Automatically reinforced SIB typically has been considered a single functional category, and is less understood than socially reinforced SIB. Subtyping automatically reinforced SIB into functional categories has the potential to guide the development of more targeted interventions and increase our understanding of its biological underpinnings. The current study involved an analysis of 39 individuals with automatically reinforced SIB and a comparison group of 13 individuals with socially reinforced SIB. Automatically reinforced SIB was categorized into 3 subtypes based on patterns of responding in the functional analysis and the presence of self-restraint. These response features were selected as the basis for subtyping on the premise that they could reflect functional properties of SIB unique to each subtype. Analysis of treatment data revealed important differences across subtypes and provides preliminary support to warrant additional research on this proposed subtyping model. PMID:26223959
SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories
NASA Astrophysics Data System (ADS)
Zhang, M.; Collioud, A.; Charlot, P.
2018-02-01
We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
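In the linear case, the trajectory-fitting step reduces to a least-squares fit of component position against epoch, from which the proper motion is the slope; a minimal sketch with invented component positions (not pipeline output):

```python
import numpy as np

def proper_motion(epochs, positions):
    """Least-squares linear fit position = mu * t + x0.
    Returns the proper motion mu and the intercept x0."""
    mu, x0 = np.polyfit(epochs, positions, 1)
    return mu, x0

# Hypothetical jet-component radial distances (mas) at five epochs (yr).
t = np.array([2008.1, 2009.3, 2010.6, 2012.0, 2013.4])
r = np.array([0.52, 0.71, 0.90, 1.13, 1.34])

mu, x0 = proper_motion(t, r)
print(f"proper motion: {mu:.3f} mas/yr")
```

Non-linear patterns would require higher-order or piecewise fits, which is where the regression STRIP algorithm goes beyond this sketch.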
Auto-tuning system for NMR probe with LabView
NASA Astrophysics Data System (ADS)
Quen, Carmen; Mateo, Olivia; Bernal, Oscar
2013-03-01
The typical manual NMR-tuning method is not suitable for broadband spectra spanning linewidths of several megahertz. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission-line reflections, to name a few. We present a design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program is designed to analyze the detected power signal of an antenna near the NMR probe and use this analysis to automatically tune the sample coil to match the impedance of the spectrometer (50 Ω). The tuning capacitors of the probe are controlled by a stepper motor through a LabVIEW/computer interface. Our program calculates the area of the power signal as an indicator to control the motor, so disconnecting the coil to tune it through a network analyzer is unnecessary. Work supported by NSF-DMR 1105380.
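The feedback loop described above can be sketched as a greedy descent on the power-signal area; the quadratic "reflected-power area" and the step sizes below are invented stand-ins for the real antenna signal and stepper-motor interface:

```python
import numpy as np

def reflected_power_area(cap_pos, optimum=420.0, width=60.0):
    """Toy stand-in for the measured power-signal area:
    minimal when the capacitor position matches the 50-ohm condition."""
    return 1.0 + ((cap_pos - optimum) / width) ** 2

def autotune(start, step=8.0, max_iters=200):
    """Greedy descent: step the motor in whichever direction reduces
    the power-signal area; halve the step on each reversal."""
    pos = start
    area = reflected_power_area(pos)
    direction = 1
    for _ in range(max_iters):
        trial = pos + direction * step
        trial_area = reflected_power_area(trial)
        if trial_area < area:
            pos, area = trial, trial_area
        else:
            direction = -direction
            step = max(step / 2, 0.5)
    return pos

pos = autotune(start=300.0)
print(f"converged capacitor position: {pos:.1f}")
```

The loop converges to the (hypothetical) optimum within the final step resolution, mimicking how the motor homes in without a network analyzer.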
SIG: a general-purpose signal processing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, D.; Azevedo, S.
1986-02-01
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. It also accommodates other representations for data, such as transfer-function polynomials. Signal processing operations include digital filtering, auto/cross spectral density, transfer function/impulse response, convolution, Fourier transform, and inverse Fourier transform. Graphical operations provide display of signals and spectra, including plotting, cursor zoom, families of curves, and multiple-viewport plots. SIG provides two user interfaces: a menu mode for occasional users and a command mode for more experienced users. Capability exists for multiple commands per line, command files with arguments, commenting lines, defining commands, automatic execution for each item in a repeat sequence, etc. SIG is presently available for VAX (VMS), VAX (BERKELEY 4.2 UNIX), SUN (BERKELEY 4.2 UNIX), DEC-20 (TOPS-20), LSI-11/23 (TSX), and DEC PRO 350 (TSX). 4 refs., 2 figs.
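Of the operations listed, auto spectral density is the easiest to demonstrate; SIG itself predates Python, so this is only an illustration of the computation, here with SciPy's Welch estimator on a synthetic 60 Hz tone in noise:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                         # sample rate, Hz
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

# Auto spectral density via Welch's averaged-periodogram method.
f, psd = welch(x, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(psd)]
print(f"dominant frequency: {peak_freq:.1f} Hz")
```

The estimated spectrum peaks at the tone frequency despite the added noise, which is the kind of display SIG's spectral operations produce.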
Method for in-situ calibration of electrophoretic analysis systems
Liu, Changsheng; Zhao, Hequan
2005-05-08
An electrophoretic system having a plurality of separation lanes is provided with an automatic calibration feature in which each lane is separately calibrated. For each lane, the calibration coefficients map a spectrum of received channel intensities onto values reflective of the relative likelihood of each of a plurality of dyes being present. Individual peaks, reflective of the influence of a single dye, are isolated from among the various sets of detected light intensity spectra, and these can be used to both detect the number of dye components present, and also to establish exemplary vectors for the calibration coefficients which may then be clustered and further processed to arrive at a calibration matrix for the system. The system of the present invention thus permits one to use different dye sets to tag DNA nucleotides in samples which migrate in separate lanes, and also allows for in-situ calibration with new, previously unused dye sets.
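The core mapping from channel intensities to per-dye values can be sketched as a linear solve against a calibration matrix; the 4x4 matrix and dye amplitudes below are invented for illustration and are not from the patent:

```python
import numpy as np

# Hypothetical calibration matrix: rows = detection channels, columns = dyes.
# Each column is the normalized spectral signature of one pure dye.
M = np.array([
    [0.80, 0.10, 0.02, 0.01],
    [0.15, 0.70, 0.10, 0.04],
    [0.04, 0.15, 0.75, 0.15],
    [0.01, 0.05, 0.13, 0.80],
])

true_dyes = np.array([0.0, 1.0, 0.2, 0.0])   # mostly dye 2, some dye 3
observed = M @ true_dyes                      # detected channel intensities

# Map the observed channel spectrum back to per-dye values.
dyes, *_ = np.linalg.lstsq(M, observed, rcond=None)
print(np.round(dyes, 3))
```

In-situ calibration amounts to estimating the columns of M from isolated single-dye peaks before this solve is possible.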
Method of photon spectral analysis
Gehrke, Robert J.; Putnam, Marie H.; Killian, E. Wayne; Helmer, Richard G.; Kynaston, Ronnie L.; Goodwin, Scott G.; Johnson, Larry O.
1993-01-01
A spectroscopic method to rapidly measure the presence of plutonium in soils, filters, smears, and glass waste forms by measuring the uranium L-shell x-ray emissions associated with the decay of plutonium. In addition, the technique can simultaneously acquire spectra of samples and automatically analyze them for the amount of americium and γ-ray emitting activation and fission products present. The samples are counted with a large-area, thin-window, n-type germanium spectrometer which is equally efficient for the detection of low-energy x-rays (10-2000 keV) as well as high-energy γ rays (>1 MeV). An 8192- or 16,384-channel analyzer is used to acquire the entire photon spectrum at one time. A dual-energy, time-tagged pulser is injected into the test input of the preamplifier to monitor the energy scale and detector resolution. The L x-ray portion of each spectrum is analyzed by a linear least-squares spectral fitting technique. The γ-ray portion of each spectrum is analyzed by a standard Ge γ-ray analysis program. This method can be applied to any analysis involving x- and γ-ray analysis in one spectrum and is especially useful when interferences in the x-ray region can be identified from the γ-ray analysis and accommodated during the x-ray analysis.
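The linear least-squares spectral fitting mentioned for the L x-ray region can be sketched as follows; the Gaussian line positions, widths, and flat background are invented components, not the patent's actual response functions:

```python
import numpy as np

e = np.linspace(13.0, 18.0, 400)   # energy axis, keV (illustrative range)

def gauss(center, width):
    """Gaussian line shape on the energy axis."""
    return np.exp(-0.5 * ((e - center) / width) ** 2)

def fit_components(spectrum, components):
    """Fit a spectrum as a linear combination of known component shapes
    via linear least squares; returns the component amplitudes."""
    A = np.column_stack(components)
    amps, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    return amps

# Two hypothetical lines plus a flat background.
comps = [gauss(14.3, 0.15), gauss(17.2, 0.18), np.ones_like(e)]
truth = np.array([120.0, 45.0, 8.0])
spectrum = sum(a * c for a, c in zip(truth, comps))

amps = fit_components(spectrum, comps)
print(np.round(amps, 2))
```

Because the model is linear in the amplitudes, the fit is a single matrix solve with no starting guesses required.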
NASA Technical Reports Server (NTRS)
Turner, T. J.; Weaver, K. A.; Mushotzky, R. F.; Holt, S. S.; Madejski, G. M.
1991-01-01
The X-ray spectra of 25 Seyfert galaxies measured with the Solid State Spectrometer on the Einstein Observatory have been investigated. This new investigation utilizes simultaneous data from the Monitor Proportional Counter, and automatic correction for systematic effects in the Solid State Spectrometer which were previously handled subjectively. It is found that the best-fit single-power-law indices generally agree with those previously reported, but that soft excesses of some form are inferred for about 48 percent of the sources. One possible explanation of the soft excess emission is a blend of soft X-ray lines, centered around 0.8 keV. The implications of these results for accretion disk models are discussed.
Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki
2011-01-01
A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometer metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535
Study of the cerrado vegetation in the Federal District area from orbital data. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Aoki, H.; Dossantos, J. R.
1980-01-01
The physiognomic units of cerrado in the area of the Distrito Federal (DF) were studied through visual and automatic analysis of products provided by the Multispectral Scanner System (MSS) of LANDSAT. The visual analysis of the multispectral images in black and white, at the 1:250,000 scale, was based on texture and tonal patterns. The automatic analysis of the computer-compatible tapes (CCT) was made by means of the IMAGE-100 system. The following conclusions were obtained: (1) the delimitation of cerrado vegetation forms can be made by both visual and automatic analysis; (2) in the visual analysis, the principal parameter used to discriminate the cerrado forms was the tonal pattern, independently of the season, and channel 5 gave the best information; (3) in the automatic analysis, the data of the four MSS channels can be used to discriminate the cerrado forms; and (4) in the automatic analysis, combinations of the four channels gave more information for separating cerrado units when soil types were considered.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
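The TOST equivalence procedure used to compare manual and automatic dosimetric parameters can be sketched for paired values; the 1 Gy margin and the patient values below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, margin):
    """Two One-Sided T-tests on paired differences: equivalence is
    claimed when both one-sided tests reject at the chosen margin.
    Returns the overall TOST p-value (the larger of the two)."""
    d = np.asarray(a) - np.asarray(b)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() + margin) / se    # H0: mean difference <= -margin
    t_high = (d.mean() - margin) / se   # H0: mean difference >= +margin
    p_low = stats.t.sf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return max(p_low, p_high)

# Hypothetical manual vs automatic EUD values (Gy) for ten patients.
manual = np.array([18.2, 21.5, 16.8, 24.1, 19.9, 22.3, 17.5, 20.8, 23.0, 18.9])
auto = manual + np.array([0.1, -0.2, 0.15, -0.1, 0.05, 0.2, -0.15, 0.1, -0.05, 0.0])

p = tost_paired(manual, auto, margin=1.0)   # +/- 1 Gy equivalence margin
print(f"TOST p-value: {p:.4g}")
```

A small TOST p-value here supports equivalence, the opposite logic of an ordinary difference test.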
ELSA: An integrated, semi-automated nebular abundance package
NASA Astrophysics Data System (ADS)
Johnson, Matthew D.; Levitt, Jesse S.; Henry, Richard B. C.; Kwitter, Karen B.
We present ELSA, a new modular software package, written in C, to analyze and manage spectroscopic data from emission-line objects. In addition to calculating plasma diagnostics and abundances from nebular emission lines, the software provides a number of convenient features including the ability to ingest logs produced by IRAF's splot task, to semi-automatically merge spectra in different wavelength ranges, and to automatically generate various data tables in machine-readable or LaTeX format. ELSA features a highly sophisticated interstellar reddening correction scheme that takes into account temperature and density effects as well as He II contamination of the hydrogen Balmer lines. Abundance calculations are performed using a 5-level atom approximation with recent atomic data, based on R. Henry's ABUN program. Downloading and detailed documentation for all aspects of ELSA are available at the following URL:
NASA Astrophysics Data System (ADS)
Powell, C. J.; Jablonski, A.; Werner, W. S. M.; Smekal, W.
2005-01-01
We describe two NIST databases that can be used to characterize thin films from Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS) measurements. First, the NIST Electron Effective-Attenuation-Length Database provides values of effective attenuation lengths (EALs) for user-specified materials and measurement conditions. The EALs differ from the corresponding inelastic mean free paths on account of elastic-scattering of the signal electrons. The database supplies "practical" EALs that can be used to determine overlayer-film thicknesses. Practical EALs are plotted as a function of film thickness, and an average value is shown for a user-selected thickness. The average practical EAL can be utilized as the "lambda parameter" to obtain film thicknesses from simple equations in which the effects of elastic-scattering are neglected. A single average practical EAL can generally be employed for a useful range of film thicknesses and for electron emission angles of up to about 60°. For larger emission angles, the practical EAL should be found for the particular conditions. Second, we describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA) to be released in 2004. This database provides data for many parameters needed in quantitative AES and XPS (e.g., excitation cross-sections, electron-scattering cross-sections, lineshapes, fluorescence yields, and backscattering factors). Relevant data for a user-specified experiment are automatically retrieved by a small expert system. In addition, Auger electron and photoelectron spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra, and thus, provide more detailed characterizations of multilayer thin-film materials. 
SESSA can also provide practical EALs, and we compare values provided by the NIST EAL database and SESSA for hafnium dioxide. Differences of up to 10% were found for film thicknesses less than 20 Å due to the use of different physical models in each database.
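The "simple equations in which the effects of elastic-scattering are neglected" reduce, for an overlayer film's own signal, to exponential attenuation with the practical EAL as the lambda parameter; a round-trip sketch with assumed numbers (EAL and angle are illustrative, not database values):

```python
import numpy as np

def overlayer_thickness(i_film, i_inf, eal, theta_deg=0.0):
    """Thickness d of a uniform overlayer from the growth of its own
    XPS signal: I_film = I_inf * (1 - exp(-d / (EAL * cos(theta)))),
    so d = -EAL * cos(theta) * ln(1 - I_film / I_inf)."""
    cos_t = np.cos(np.radians(theta_deg))
    return -eal * cos_t * np.log(1.0 - i_film / i_inf)

# Round-trip check with assumed numbers: EAL = 18 A, emission angle 45 deg.
d_true = 25.0
eal, theta = 18.0, 45.0
i_ratio = 1.0 - np.exp(-d_true / (eal * np.cos(np.radians(theta))))
d = overlayer_thickness(i_ratio, 1.0, eal, theta)
print(f"recovered thickness: {d:.2f} A")
```

Using a single average practical EAL in this formula is exactly the simplification the database supports for emission angles up to about 60°.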
Military Role in Countering Terrorist Use of Weapons of Mass Destruction
1999-04-01
...chemical and biological mobile point detection. "The M21 Remote Sensing Chemical Agent Alarm (RSCAAL) is an automatic scanning, passive infrared sensor... The M21 detects nerve and blister agent clouds based on changes in the background infrared spectra caused by the presence of the agent vapor." Fragment of a vaccine-availability table: ...required if greater than 3 years since last vaccine; VEE: yes, multiple vaccines required; VHF: no; Botulism: yes; SEB: no; Ricin: no; Mycotoxins: no.
NASA Astrophysics Data System (ADS)
Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa
We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach, we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.
Chen, Gengbo; Walmsley, Scott; Cheung, Gemmy C M; Chen, Liyan; Cheng, Ching-Yu; Beuerman, Roger W; Wong, Tien Yin; Zhou, Lei; Choi, Hyungwon
2017-05-02
Data independent acquisition-mass spectrometry (DIA-MS) coupled with liquid chromatography is a promising approach for rapid, automatic sampling of MS/MS data in untargeted metabolomics. However, wide isolation windows in DIA-MS generate MS/MS spectra containing a mixed population of fragment ions together with their precursor ions. This precursor-fragment ion map in a comprehensive MS/MS spectral library is crucial for relative quantification of fragment ions uniquely representative of each precursor ion. However, existing reference libraries are not sufficient for this purpose since the fragmentation patterns of small molecules can vary in different instrument setups. Here we developed a bioinformatics workflow called MetaboDIA to build customized MS/MS spectral libraries using a user's own data dependent acquisition (DDA) data and to perform MS/MS-based quantification with DIA data, thus complementing conventional MS1-based quantification. MetaboDIA also allows users to build a spectral library directly from DIA data in studies of a large sample size. Using a marine algae data set, we show that quantification of fragment ions extracted with a customized MS/MS library can provide as reliable quantitative data as the direct quantification of precursor ions based on MS1 data. To test its applicability in complex samples, we applied MetaboDIA to a clinical serum metabolomics data set, where we built a DDA-based spectral library containing consensus spectra for 1829 compounds. We performed fragment ion quantification using DIA data using this library, yielding sensitive differential expression analysis.
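Spectral-library matching of the kind MetaboDIA relies on is commonly scored with a cosine similarity between matched fragment peaks; the sketch below is a generic implementation with invented spectra, not MetaboDIA's actual scoring code:

```python
import numpy as np

def cosine_score(spec_a, spec_b, tol=0.01):
    """Cosine similarity between two centroided MS/MS spectra given as
    (m/z array, intensity array), matching peaks within an m/z tolerance."""
    mz_a, ia = spec_a
    mz_b, ib = spec_b
    va, vb = [], []
    used = set()
    for m, i in zip(mz_a, ia):
        j = int(np.argmin(np.abs(mz_b - m)))
        if abs(mz_b[j] - m) <= tol and j not in used:
            va.append(i); vb.append(ib[j]); used.add(j)
        else:
            va.append(i); vb.append(0.0)      # unmatched query peak
    for j, i in enumerate(ib):
        if j not in used:
            va.append(0.0); vb.append(i)      # unmatched library peak
    va, vb = np.array(va), np.array(vb)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Hypothetical query vs library spectrum of the same compound.
library = (np.array([89.06, 117.05, 145.05, 163.06]),
           np.array([100.0, 45.0, 30.0, 12.0]))
query = (np.array([89.061, 117.049, 145.052]),
         np.array([95.0, 50.0, 28.0]))

score = cosine_score(query, library)
print(f"cosine score: {score:.3f}")
```

A high score despite one missing fragment illustrates why consensus library spectra tolerate instrument-to-instrument variation in fragmentation.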
Automatic detection of spiculation of pulmonary nodules in computed tomography images
NASA Astrophysics Data System (ADS)
Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.
2015-03-01
We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules mainly differ from non-spiculated ones in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal, to which the Fourier transform is applied, obtaining a spectrum. A library of spectra is created by clustering data via unsupervised learning. The centroids of the clusters are used to label back each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels along all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation.
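The descriptor pipeline (intensity profiles, Fourier spectra, clustering, label histogram) can be sketched end-to-end on synthetic profiles; k-means stands in for the unspecified unsupervised learner, and the smooth vs oscillating profiles are invented:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def morphology_descriptor(profiles, n_words=4, seed=0):
    """Bag-of-spectra descriptor: take the FFT magnitude of each intensity
    profile, cluster the spectra, and histogram the cluster labels."""
    spectra = np.abs(np.fft.rfft(profiles, axis=1))
    _, labels = kmeans2(spectra, n_words, seed=seed, minit="++")
    hist = np.bincount(labels, minlength=n_words).astype(float)
    return hist / hist.sum()

# Synthetic profiles: smooth ones (round nodule) vs oscillating ones (spikes).
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
rng = np.random.default_rng(0)
smooth = 1.0 + 0.05 * rng.standard_normal((30, theta.size))
spiky = 1.0 + 0.6 * np.sin(8 * theta) + 0.05 * rng.standard_normal((30, theta.size))

desc = morphology_descriptor(np.vstack([smooth, spiky]))
print(np.round(desc, 2))
```

The normalized histogram is the compact feature vector a supervised classifier would then consume.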
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
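For contrast, the nonlinear least-squares baseline that the Fourier method is compared against can be sketched directly; simulated two-component data with a base line, fitted with SciPy's `curve_fit` (which, unlike Provencher's method, does require initial guesses):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, k1, a2, k2, base):
    """Sum of two exponential decays plus a constant base line."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + base

t = np.linspace(0, 10, 300)
rng = np.random.default_rng(3)
y = two_exp(t, 3.0, 2.0, 1.0, 0.3, 0.5) + 0.01 * rng.standard_normal(t.size)

# Initial guesses are mandatory here; the Fourier method needs none.
popt, _ = curve_fit(two_exp, t, y, p0=[2.0, 1.0, 1.0, 0.1, 0.0])
rates = sorted([popt[1], popt[3]])
print("recovered rate constants:", np.round(rates, 3))
```

With well-separated time constants and low noise the fit recovers the components; closely spaced decays are where such fits degrade and the Fourier approach is argued to be more robust.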
Retinal vasculature classification using novel multifractal features
NASA Astrophysics Data System (ADS)
Ding, Y.; Ward, W. O. C.; Duan, Jinming; Auer, D. P.; Gowland, Penny; Bai, L.
2015-11-01
Retinal blood vessels have been implicated in a large number of diseases, including diabetic retinopathy and cardiovascular diseases, which damage the retinal vasculature. The availability of retinal vessel imaging provides an excellent opportunity for monitoring and diagnosis of retinal diseases, and automatic analysis of retinal vessels will help with these processes. However, state-of-the-art vascular analysis methods, such as counting the number of branches or measuring the curvature and diameter of individual vessels, are unsuitable for the microvasculature. There has been published research using fractal analysis to calculate fractal dimensions of retinal blood vessels, but so far there has been no systematic research on extracting discriminant features from retinal vessels for classification. This paper introduces new methods for feature extraction from multifractal spectra of retinal vessels for classification. Two publicly available retinal vascular image databases are used for the experiments, and the proposed methods have produced accuracies of 85.5% and 77% for classification of healthy and diabetic retinal vasculatures. Experiments show that classification with multiple fractal features produces better rates than methods using a single fractal dimension value. In addition, experiments also show that classification accuracy can be affected by the accuracy of vessel segmentation algorithms.
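A standard building block for such fractal features is the box-counting dimension; a minimal sketch (the single-dimension baseline the paper improves on, not its multifractal spectrum code), checked on shapes of known dimension:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    the slope of log(occupied boxes) vs log(1/box size)."""
    counts = []
    for s in sizes:
        h = img.shape[0] // s * s
        w = img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square (dimension 2) and a straight line (dimension 1).
filled = np.ones((256, 256), dtype=bool)
line = np.zeros((256, 256), dtype=bool)
line[128, :] = True

d_square = box_counting_dimension(filled)
d_line = box_counting_dimension(line)
print(f"filled square: {d_square:.2f}, line: {d_line:.2f}")
```

A segmented vessel tree typically lands between these two extremes, and multifractal analysis generalizes this single slope to a whole spectrum of exponents.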
VizieR Online Data Catalog: Photometry and spectroscopy of V501 Mon (Torres+, 2015)
NASA Astrophysics Data System (ADS)
Torres, G.; Lacy, C. H. S.; Pavlovski, K.; Fekel, F. C.; Muterspaugh, M. W.
2016-06-01
Spectroscopic observations of V501 Mon were carried out with three different instruments. They began at the Harvard-Smithsonian Center for Astrophysics (CfA) in 2005 November, using the now decommissioned Digital Speedometer (DS) mounted on the 1.5m Tillinghast reflector at the Fred L. Whipple Observatory on Mount Hopkins (AZ). Seven spectra were recorded through 2009 March with an intensified photon-counting Reticon detector, and cover a narrow span of 45Å centered at 5190Å (MgIb triplet). The resolving power of this instrument was R~35000, and the signal-to-noise ratios of the spectra range from 13 to 22 per resolution element of 8.5km/s. Thirty-seven additional spectra were gathered from 2009 November to 2015 February with the Tillinghast Reflector Echelle Spectrograph (TRES) on the same telescope. This bench-mounted, fiber-fed instrument provides a resolving power of R~44000 in 51 orders over the wavelength span 3900-9100Å. The signal-to-noise ratios of the 37 spectra range from 8 to 56 per resolution element of 6.8km/s. The heliocentric velocities we obtained from the DS and TRES spectra are listed in Table2. Between 2011 October and 2015 February we also obtained 57 usable spectra of V501 Mon with the Tennessee State University 2m Automatic Spectroscopic Telescope (AST) and a fiber-fed echelle spectrograph at Fairborn Observatory in southeast Arizona. The detector for these observations was a Fairchild 486 CCD, with 15μm pixels in a 4096×4096 format. The spectrograms have 48 orders ranging from 3800 to 8260Å. Because of the faintness of V501 Mon (V=12.32), we used a fiber that produced a spectral resolution of 0.4Å, corresponding to a resolving power of 15000 at 6000Å. Our spectra have typical signal-to-noise ratios per resolution element of 40 at 6000Å. We list the final values in Table3.
An extensive program of CCD photometry was carried out using the NFO WebScope near Silver City, New Mexico, for the purpose of gathering an accurate V-band light curve of V501 Mon for analysis. A total of 6729 images were obtained over 281 nights between 2005 January and February. Table5 lists these differential photometric measurements. Measurements of the times of eclipse for V501 Mon collected from the literature are listed in Table1, and cover approximately seven decades. Abundances for individual elements are listed in Table7. (5 data files).
Yavuzer, Yasemin; Karataş, Zeynep
2013-01-01
This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations among the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is very important for the prevention of physical aggression. Such training programs should include components related to developing an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.
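The Sobel test reported above combines the two regression paths a (thoughts → anger) and b (anger → aggression) with their standard errors as z = ab / sqrt(b²·SEa² + a²·SEb²). A minimal sketch; the coefficients in the usage line are made-up illustrations, not the study's values:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect effect a*b in a mediation model."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# hypothetical paths: a=0.5 (SE 0.1), b=0.4 (SE 0.1)
z = sobel_z(0.5, 0.1, 0.4, 0.1)
```

Values of |z| above about 1.96 indicate a significant indirect effect at the 5% level.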
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take as input MS/MS spectra and a protein sequence database are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of them have excellent user documentation. The aim here is therefore to outline the algorithmic strategies behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if its software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
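At its simplest, peptide scoring in a database-dependent search is a shared-peak count between observed peaks and the theoretical b- and y-ion series of a candidate peptide. The toy sketch below illustrates that idea only; it is not the scoring function of any particular search engine, and only a handful of residue masses are included:

```python
# Monoisotopic residue masses (Da) for a few amino acids, for illustration.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841}
PROTON, WATER = 1.00728, 18.01056

def theoretical_ions(peptide):
    """Singly charged b- and y-ion m/z values for a peptide."""
    masses = [RESIDUE[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b + y

def count_matches(observed, theoretical, tol=0.02):
    """Naive shared-peak-count score between observed and theoretical m/z."""
    return sum(any(abs(o - t) <= tol for o in observed) for t in theoretical)
```

Real engines refine this with intensity weighting, probabilistic models of random matching, and modification-aware candidate generation.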
NASA Astrophysics Data System (ADS)
Rizzi, R.; Arosio, C.; Maestri, T.; Palchetti, L.; Bianchini, G.; Del Guasta, M.
2016-09-01
The present work examines downwelling radiance spectra measured at the ground during 2013 by a Far Infrared Fourier Transform Spectrometer at Dome C, Antarctica. A tropospheric backscatter and depolarization lidar is also deployed at the same site, and a radiosonde system is routinely operative. The measurements allow characterization of the infrared properties of water vapor and clouds in Antarctica under all sky conditions. In this paper we specifically discuss cloud detection and the clear-sky analysis, which is required for the discussion of the results obtained in cloudy conditions. First, the paper discusses the procedures adopted for the quality control of spectra acquired automatically. Then it describes the classification procedure used to discriminate spectra measured in clear sky from those measured in cloudy conditions. Finally, a selection is performed and 66 clear cases, spanning the whole year, are compared to simulations. The computation of layer molecular optical depth is performed with line-by-line techniques, followed by a convolution to simulate the Radiation Explorer in the Far InfraRed-Prototype for Applications and Development (REFIR-PAD) measurements; the downwelling radiance for selected clear cases is computed with a state-of-the-art adding-doubling code. The mean difference over all selected cases between simulated and measured radiance is within experimental error for all the selected microwindows except for the negative residuals found for all microwindows in the range 200 to 400 cm-1, with largest values around 295.1 cm-1. The paper discusses possible reasons for the discrepancy and identifies an incorrect magnitude of the water vapor total absorption coefficient as the cause of the large negative radiance bias below 400 cm-1.
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein based on NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which large numbers of fillets (9063, 6330, and 10483) were efficiently measured. WB prevalences of 0.1%, 6.6%, and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories, so that laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and this information can be used to understand and identify the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB.
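Approach 1, linear discriminant analysis on spectra, can be sketched with a plain two-class Fisher discriminant in numpy. The synthetic data below stand in for NIR spectra of normal and WB fillets and are not from the study; the function names are illustrative:

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0), midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = 0.5 * (m0 + m1) @ w
    return w, thresh

def predict(X, w, thresh):
    """Project spectra onto w; label 1 (e.g. WB) above the threshold."""
    return (X @ w > thresh).astype(int)
```

In practice spectra are first preprocessed (e.g. scatter correction) and dimensionality is reduced before the discriminant is fitted.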
A Toolkit for Eye Recognition of LAMOST Spectroscopy
NASA Astrophysics Data System (ADS)
Yuan, H.; Zhang, H.; Zhang, Y.; Lei, Y.; Dong, Y.; Zhao, Y.
2014-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, also named the Guo Shou Jing Telescope) finished the pilot survey and began the normal survey at the end of September 2012. Millions of targets have already been observed, including thousands of quasar candidates. Because of the difficulty of automatic identification of quasar spectra, eye recognition remains necessary and efficient; however, identifying massive numbers of spectra by eye is a huge job. In order to improve the efficiency and effectiveness of spectral identification, a toolkit for eye recognition of LAMOST spectroscopy was developed. Spectral cross-correlation templates from the Sloan Digital Sky Survey (SDSS) are applied as references, including O star, O/B transition star, B star, A star, F/A transition star, F star, G star, K star, M1 star, M3 star, M5 star, M8 star, L1 star, magnetic white dwarf, carbon star, white dwarf, B white dwarf, low-metallicity K sub-dwarf, "early-type" galaxy, galaxy, "later-type" galaxy, Luminous Red Galaxy, QSO, QSO with some BAL activity, and high-luminosity QSO. By adjusting the redshift and flux ratio of the template spectra in an interactive graphical interface, the spectral type of the target can be discriminated in an easy and feasible way, and the redshift is estimated at the same time with a precision of about 0.001. The tool is particularly advantageous in dealing with low-quality spectra. Spectra from the Pilot Survey of LAMOST are applied as examples, and spectra from SDSS are also tested for comparison. Target spectra in both image and FITS formats are supported. For convenience, several spectra-accessing methods are provided. All the spectra from the LAMOST pilot survey can be located and acquired via VOTable files on the internet, as suggested by the International Virtual Observatory Alliance (IVOA).
After the construction of the Simple Spectral Access Protocol (SSAP) service by the Chinese Astronomical Data Center (CAsDC), spectra can be obtained and analyzed in a more efficient way.
Gordon, S H; Schudy, R B; Wheeler, B C; Wicklow, D T; Greene, R V
1997-04-01
Aspergillus flavus and other pathogenic fungi display typical infrared spectra which differ significantly from spectra of substrate materials such as corn. On this basis, specific spectral features have been identified which permit detection of fungal infection on the surface of corn kernels by photoacoustic infrared spectroscopy. In a blind study, ten corn kernels showing bright greenish yellow fluorescence (BGYF) in the germ or endosperm and ten BGYF-negative kernels were correctly classified as infected or not infected by Fourier transform infrared photoacoustic spectroscopy. Earlier studies have shown that BGYF-positive kernels contain the bulk of the aflatoxin contaminating grain at harvest. Ten major spectral features, identified by visual inspection of the photoacoustic spectra of A. flavus mycelium grown in culture versus uninfected corn, were interpreted and assigned by theoretical comparisons of the relative chemical compositions of fungi and corn. The spectral features can be built into either empirical or knowledge-based computer models (expert systems) for automatic infrared detection and segregation of grains or kernels containing aflatoxin from the food and feed supply.
VizieR Online Data Catalog: O5 through L3 empirical stellar spectra from SDSS (Kesseli+, 2017)
NASA Astrophysics Data System (ADS)
Kesseli, A. Y.; West, A. A.; Veyette, M.; Harrison, B.; Feldman, D.; Bochanski, J. J.
2017-08-01
We present a library of empirical stellar spectra created using spectra from the Sloan Digital Sky Survey's Baryon Oscillation Spectroscopic Survey. The templates cover spectral types O5 through L3, are binned by metallicity from -2.0dex through +1.0dex, and are separated into main-sequence (dwarf) stars and giant stars. With recently developed M dwarf metallicity indicators, we are able to extend the metallicity bins down through the spectral subtype M8, making this the first empirical library with this degree of temperature and metallicity coverage. The wavelength coverage for the templates is from 3650 to 10200Å at a resolution of better than R~2000. Using the templates, we identify trends in color space with metallicity and surface gravity, which will be useful for analyzing large data sets from upcoming missions like the Large Synoptic Survey Telescope. Along with the templates, we are releasing a code for automatically (and/or visually) identifying the spectral type and metallicity of a star. (3 data files).
FlavonoidSearch: A system for comprehensive flavonoid annotation by mass spectrometry.
Akimoto, Nayumi; Ara, Takeshi; Nakajima, Daisuke; Suda, Kunihiro; Ikeda, Chiaki; Takahashi, Shingo; Muneto, Reiko; Yamada, Manabu; Suzuki, Hideyuki; Shibata, Daisuke; Sakurai, Nozomu
2017-04-28
Currently, in mass spectrometry-based metabolomics, limited reference mass spectra are available for flavonoid identification. In the present study, a database of probable mass fragments for 6,867 known flavonoids (FsDatabase) was manually constructed based on new structure- and fragmentation-related rules using new heuristics to overcome flavonoid complexity. We developed the FlavonoidSearch system for flavonoid annotation, which consists of the FsDatabase and a computational tool (FsTool) to automatically search the FsDatabase using the mass spectra of metabolite peaks as queries. This system showed the highest identification accuracy for the flavonoid aglycone when compared to existing tools and revealed accurate discrimination between the flavonoid aglycone and other compounds. Sixteen new flavonoids were found from parsley, and the diversity of the flavonoid aglycone among different fruits and vegetables was investigated.
OKCARS : Oklahoma Collision Analysis and Response System.
DOT National Transportation Integrated Search
2012-10-01
By continuously monitoring traffic intersections to automatically detect that a collision or near-collision has occurred, automatically call for assistance, and automatically forewarn oncoming traffic, our OKCARS has the capability to effectively ...
Automatic imitation: A meta-analysis.
Cracco, Emiel; Bardi, Lara; Desmet, Charlotte; Genschow, Oliver; Rigoni, Davide; De Coster, Lize; Radkova, Ina; Deschrijver, Eliane; Brass, Marcel
2018-05-01
Automatic imitation is the finding that movement execution is facilitated by compatible and impeded by incompatible observed movements. In the past 15 years, automatic imitation has been studied to understand the relation between perception and action in social interaction. Although research on this topic started in cognitive science, interest quickly spread to related disciplines such as social psychology, clinical psychology, and neuroscience. However, important theoretical questions have remained unanswered. Therefore, in the present meta-analysis, we evaluated seven key questions on automatic imitation. The results, based on 161 studies containing 226 experiments, revealed an overall effect size of gz = 0.95, 95% CI [0.88, 1.02]. Moderator analyses identified automatic imitation as a flexible, largely automatic process that is driven by movement and effector compatibility, but is also influenced by spatial compatibility. Automatic imitation was found to be stronger for forced choice tasks than for simple response tasks, for human agents than for nonhuman agents, and for goalless actions than for goal-directed actions. However, it was not modulated by more subtle factors such as animacy beliefs, motion profiles, or visual perspective. Finally, there was no evidence for a relation between automatic imitation and either empathy or autism. Among other things, these findings point toward actor-imitator similarity as a crucial modulator of automatic imitation and challenge the view that imitative tendencies are an indicator of social functioning. The current meta-analysis has important theoretical implications and sheds light on longstanding controversies in the literature on automatic imitation and related domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin
2014-12-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature Teff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for Teff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses.
An additional test was performed with a subsample of 138 stars from the ELODIE stellar library, and the literature atmospheric parameters were recovered within 125 K for Teff, 0.10 dex for [Fe/H], and 0.29 dex for log g. These precisions are consistent with or better than those provided by the pipelines of surveys operating with similar resolutions. These results show that the spectral indices are a competitive tool to characterize stars with intermediate resolution spectra. Based on observations obtained with the 2.2 m MPG telescope at the European Southern Observatory (La Silla, Chile), under the agreement ESO-Observatório Nacional/MCT, and the Sloan Digital Sky Survey, which is owned and operated by the Astrophysical Research Consortium.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on the singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and marzipan data sets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator can achieve better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
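The Savitzky-Golay baseline against which the SPSE is compared estimates derivatives by fitting a low-order polynomial in a sliding window and reading off the linear coefficient at the window center. A minimal sketch of that idea (window and order chosen arbitrarily; edge points are left undefined):

```python
import numpy as np

def sg_derivative(y, dx, window=7, polyorder=2):
    """First derivative via local least-squares polynomial fits."""
    half = window // 2
    x_win = np.arange(-half, half + 1) * dx  # window coordinates centered at 0
    deriv = np.full_like(y, np.nan, dtype=float)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x_win, y[i - half:i + half + 1], polyorder)
        # derivative at the center (u = 0) is the linear coefficient
        deriv[i] = coeffs[-2]
    return deriv
```

Because the fit is exact for polynomials up to the chosen order, the derivative of a noiseless quadratic is recovered exactly at interior points.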
Automatic pickup of arrival time of channel wave based on multi-channel constraints
NASA Astrophysics Data System (ADS)
Wang, Bao-Li
2018-03-01
Accurately detecting the arrival time of a channel wave in a coal seam is very important for in-seam seismic data processing. The arrival time greatly affects the accuracy of the channel wave inversion and the computed tomography (CT) result. However, because the signal-to-noise ratio of in-seam seismic data is reduced by the long wavelength and strong frequency dispersion, accurately timing the arrival of channel waves is extremely difficult. For this purpose, we propose a method that automatically picks up the arrival time of channel waves based on multi-channel constraints. We first estimate the Jaccard similarity coefficient of two ray paths, then apply it as a weight coefficient for stacking the multichannel dispersion spectra. The reasonableness and effectiveness of the proposed method is verified in an actual data application. Most importantly, the method increases the degree of automation and the pickup precision of the channel-wave arrival time.
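The weighting step can be illustrated as follows: the Jaccard coefficient of two ray paths, represented here (an assumption for illustration) as sets of traversed grid cells, supplies the weight with which each channel's dispersion spectrum enters the stack:

```python
def jaccard(cells_a, cells_b):
    """Jaccard similarity between two ray paths given as sets of grid cells."""
    a, b = set(cells_a), set(cells_b)
    return len(a & b) / len(a | b)

def weighted_stack(spectra, weights):
    """Weighted sum of equal-length dispersion spectra."""
    return [sum(w * s[i] for w, s in zip(weights, spectra))
            for i in range(len(spectra[0]))]
```

Paths sharing most of their cells get weights near 1, so neighboring channels reinforce the stacked spectrum while dissimilar paths contribute little.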
Jiang, Yu; Li, Changying; Takeda, Fumiomi
2016-01-01
Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R² = 0.78-0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising. PMID:27767050
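Once the hyperspectral classification has produced binary masks, the bruise ratio index itself is a simple area ratio. A sketch with hypothetical masks standing in for segmentation output:

```python
import numpy as np

def bruise_ratio(bruise_mask, fruit_mask):
    """Bruise ratio index: bruised pixel area over whole-fruit pixel area."""
    return bruise_mask.sum() / fruit_mask.sum()

# hypothetical 4x4 fruit in which a 2x2 corner is classified as bruised
fruit = np.ones((4, 4), dtype=bool)
bruise = np.zeros((4, 4), dtype=bool)
bruise[:2, :2] = True
```

Here the index is 4/16 = 0.25, i.e. a quarter of the fruit area is bruised.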
NASA Astrophysics Data System (ADS)
Köhler, P.; Guanter, L.; Joiner, J.
2015-06-01
Global retrievals of near-infrared sun-induced chlorophyll fluorescence (SIF) have been achieved in the last few years by means of a number of space-borne atmospheric spectrometers. Here, we present a new retrieval method for medium spectral resolution instruments such as the Global Ozone Monitoring Experiment-2 (GOME-2) and the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY). Building upon the previous work by Guanter et al. (2013) and Joiner et al. (2013), our approach provides a solution for the selection of the number of free parameters. In particular, a backward elimination algorithm is applied to optimize the number of coefficients to fit, which also reduces the retrieval noise and selects the number of state vector elements automatically. A sensitivity analysis with simulated spectra has been utilized to evaluate the performance of our retrieval approach. The method has also been applied to estimate SIF at 740 nm from real spectra from GOME-2 and, for the first time, from SCIAMACHY. We find a good correspondence of the absolute SIF values and the spatial patterns from the two sensors, which suggests the robustness of the proposed retrieval method. In addition, we compare our results to existing SIF data sets, examine uncertainties, and use our GOME-2 retrievals to show empirically the relatively low sensitivity of the SIF retrieval to cloud contamination.
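Backward elimination in this spirit repeatedly drops the fit coefficient whose removal costs least. The sketch below uses a plain residual-sum-of-squares tolerance as the stopping rule, an illustrative simplification of the paper's noise-based criterion:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of a least-squares fit of y on columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def backward_eliminate(X, y, tol=1e-6):
    """Drop columns whose removal barely increases the residual sum of squares."""
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        base = rss(X[:, cols], y)
        increases = [rss(X[:, [c for c in cols if c != j]], y) - base
                     for j in cols]
        k = int(np.argmin(increases))
        if increases[k] > tol:  # every remaining column matters; stop
            break
        cols.pop(k)
    return cols
```

Columns that carry no signal are pruned first, which is how the procedure both shrinks the state vector and suppresses noise fitted by superfluous parameters.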
NASA Astrophysics Data System (ADS)
Köhler, P.; Guanter, L.; Joiner, J.
2014-12-01
Global retrievals of near-infrared sun-induced chlorophyll fluorescence (SIF) have been achieved in recent years by means of a number of space-borne atmospheric spectrometers. Here, we present a new retrieval method for medium spectral resolution instruments such as the Global Ozone Monitoring Experiment-2 (GOME-2) and the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY). Building upon the previous work by Joiner et al. (2013), our approach solves existing issues in the retrieval such as the non-linearity of the forward model and the arbitrary selection of the number of free parameters. In particular, we use a backward elimination algorithm to optimize the number of coefficients to fit, which also reduces the retrieval noise and selects the number of state vector elements automatically. A sensitivity analysis with simulated spectra has been utilized to evaluate the performance of our retrieval approach. The method has also been applied to estimate SIF from real spectra from GOME-2 and, for the first time, from SCIAMACHY. We find a good correspondence of the absolute SIF values and the spatial patterns from the two sensors, which suggests the robustness of the proposed retrieval method. In addition, we examine uncertainties and use our GOME-2 retrievals to show empirically the low sensitivity of the SIF retrieval to cloud contamination.
The volatile compound BinBase mass spectral database.
Skogerson, Kirsten; Wohlgemuth, Gert; Barupal, Dinesh K; Fiehn, Oliver
2011-08-04
Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu). 
The BinBase database algorithms have been successfully modified to allow for tracking and identification of volatile compounds in complex mixtures. The database is capable of annotating large datasets (hundreds to thousands of samples) and is well-suited for between-study comparisons such as chemotaxonomy investigations. This novel volatile compound database tool is applicable to research fields spanning chemical ecology to human health. The BinBase source code is freely available at http://binbase.sourceforge.net/ under the LGPL 2.0 license agreement.
The volatile compound BinBase mass spectral database
2011-01-01
Background Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. Description The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). 
Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu). Conclusions The BinBase database algorithms have been successfully modified to allow for tracking and identification of volatile compounds in complex mixtures. The database is capable of annotating large datasets (hundreds to thousands of samples) and is well-suited for between-study comparisons such as chemotaxonomy investigations. This novel volatile compound database tool is applicable to research fields spanning chemical ecology to human health. The BinBase source code is freely available at http://binbase.sourceforge.net/ under the LGPL 2.0 license agreement. PMID:21816034
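The multi-tiered filtering described in the abstract above can be sketched roughly as follows. The bin schema, threshold values, and the use of cosine similarity are illustrative assumptions for the sketch, not the actual vocBinBase implementation or schema.

```python
import math

# Hypothetical sketch of BinBase-style multi-tiered peak annotation:
# a peak is annotated against a library "bin" only if it passes a
# signal-to-noise filter, its retention index falls within a tolerance
# window, and its spectrum passes a similarity threshold. All field
# names and thresholds here are illustrative.

def cosine_similarity(spec_a, spec_b):
    """Cosine similarity between two sparse m/z -> intensity spectra."""
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in mzs)
    na = math.sqrt(sum(v * v for v in spec_a.values()))
    nb = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def annotate(peak, bins, ri_window=5.0, sim_threshold=0.8, snr_threshold=5.0):
    """Return the name of the best-matching bin, or None if the peak
    fails the filters or matches nothing."""
    if peak["snr"] < snr_threshold:                  # tier 1: signal-to-noise
        return None
    best_name, best_sim = None, sim_threshold
    for b in bins:
        if abs(peak["ri"] - b["ri"]) > ri_window:    # tier 2: retention index
            continue
        sim = cosine_similarity(peak["spectrum"], b["spectrum"])
        if sim >= best_sim:                          # tier 3: spectral similarity
            best_name, best_sim = b["name"], sim
    return best_name

bins = [{"name": "limonene", "ri": 1030.0,
         "spectrum": {68: 100.0, 93: 80.0, 136: 20.0}}]
peak = {"ri": 1032.0, "snr": 12.0,
        "spectrum": {68: 95.0, 93: 85.0, 136: 18.0}}
print(annotate(peak, bins))  # limonene
```

A peak failing any tier would fall through to the "novel compound" path, where vocBinBase adds a new bin under its strict mass spectral and experimental criteria.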
Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis
NASA Astrophysics Data System (ADS)
Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.
2013-11-01
Nowadays, during international negotiations on separating disputed areas, manual adjustment is the only method applied to match the delimitation line to the real terrain, which not only consumes much time and labor but also cannot ensure high precision. To address this, the paper explores automatic matching between the two and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is derived through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Model Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from several aspects, including delimitation laws and benefits to both sides. The conclusion is that the automatic match method is feasible and effective.
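The core of least-cost path analysis is a shortest-path search over a cost raster. A minimal sketch, assuming a toy 4-connected grid in which each cell's value is the traversal cost (in practice derived from terrain and delimitation-law constraints):

```python
import heapq

# Dijkstra's algorithm over a cost raster: find the cheapest
# 4-connected path between two cells. Grid values and endpoints are
# illustrative; a real delimitation cost layer would encode terrain
# features and legal constraints.

def least_cost_path(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # reconstruct the path from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

cost = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
path, total = least_cost_path(cost, (0, 0), (2, 2))
print(path, total)
```

The path threads through the low-cost corridor, which is exactly how the new delimitation line is made to follow favorable terrain.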
NASA Astrophysics Data System (ADS)
Morton, Daniel R.
Modern image guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patient's setup, but the radiation dose received by patients from imaging must be managed to ensure it introduces no additional risk. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be selected manually. Without patient-specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software-based automatic exposure control system has been developed to predict optimal, patient-specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and the subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was generally calculated to be within 5 to 10% of the measured values. Calculations performed on patient data yielded results similar to those for the phantom images, with the calculated images able to determine detector saturation in close agreement with images acquired during treatment.
Overall, it was shown that the system model and calculation method could potentially be used to predict patients' exposure factors before their treatment begins, thus preventing the need for multiple exposures.
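The underlying idea of predicting exposure factors can be sketched with a one-ray toy model: attenuate the beam through a water-equivalent patient thickness (Beer-Lambert), then cap the tube output so the predicted detector signal stays below saturation. The attenuation coefficient, detector gain, and saturation level below are made-up constants for illustration, not the actual OBI model, which uses full spectra and a measured detector response.

```python
import math

# Toy sketch of software-based automatic exposure control: predict the
# detector signal behind a water slab via exponential attenuation, then
# choose the largest mAs that avoids saturation. All constants are
# illustrative assumptions.

MU_WATER = 0.02      # effective attenuation coefficient, 1/mm (assumed)
GAIN = 100.0         # detector counts per mAs in air (assumed)
SATURATION = 4095.0  # detector saturation level in counts (assumed)

def detector_signal(mas, thickness_mm):
    """Predicted counts behind a water slab of given thickness."""
    return GAIN * mas * math.exp(-MU_WATER * thickness_mm)

def max_unsaturated_mas(thickness_mm):
    """Largest mAs for which the signal stays at or below saturation."""
    return SATURATION / (GAIN * math.exp(-MU_WATER * thickness_mm))

thick = 200.0        # water-equivalent patient thickness in mm
mas = max_unsaturated_mas(thick)
assert detector_signal(mas, thick) <= SATURATION + 1e-6
print(round(mas, 1))
```

A thicker patient permits (and requires) a larger mAs before the open field behind the anatomy saturates, which is the behavior the real system exploits per patient.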
Expert System for Analysis of Spectra in Nuclear Metrology
NASA Astrophysics Data System (ADS)
Petrović, Ivan; Petrović, V.; Krstić, D.; Nikezić, D.; Bočvarski, V.
This paper describes an expert system (ES) developed to enable the analysis of emission spectra obtained from measurements of the activities of radioactive isotopes, specifically cesium. The structure of these spectra has two parts: the first, at lower energies, originates from the Compton effect; the second, at higher energies, contains the photopeak. The ES is designed to analyze spectra over the whole energy range. Analysis of such spectra is of particular interest because of the problem of environmental contamination by radionuclides.
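The two-part structure mentioned above (Compton continuum plus photopeak) is commonly handled by estimating a background under the peak region and integrating what remains. A minimal sketch with synthetic channel data, assuming a simple linear background interpolated between the region-of-interest edges (the expert system itself is not described at this level of detail):

```python
# Estimate the photopeak's net area and centroid in a region of
# interest [lo, hi] by subtracting a linear Compton background
# interpolated between the ROI edge channels. Synthetic data;
# calibration and peak-shape fitting are omitted.

def photopeak_net(counts, lo, hi):
    """Net counts and centroid of the peak in channels [lo, hi]."""
    n = hi - lo
    net_area, weighted = 0.0, 0.0
    for i in range(lo, hi + 1):
        bkg = counts[lo] + (counts[hi] - counts[lo]) * (i - lo) / n
        net = max(counts[i] - bkg, 0.0)
        net_area += net
        weighted += net * i
    centroid = weighted / net_area if net_area else None
    return net_area, centroid

# synthetic spectrum: flat Compton continuum of 10 counts plus a
# triangular "photopeak" centred on channel 50
counts = [10.0] * 100
for i, extra in zip(range(47, 54), [5, 20, 60, 100, 60, 20, 5]):
    counts[i] += extra
area, centroid = photopeak_net(counts, 45, 55)
print(area, centroid)  # 270.0 50.0
```

The centroid maps to energy via the detector's calibration, and the net area to activity via efficiency and branching ratio.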
NASA Technical Reports Server (NTRS)
Goldman, A.; Murcray, F. J.; Rinsland, C. P.; Blatherwick, R. D.; Murcray, F. H.; Murcray, D. G.
1991-01-01
Recent results and ongoing studies of high resolution solar absorption spectra will be presented. The analysis of these spectra is aimed at the identification and quantification of trace constituents important in the atmospheric chemistry of the stratosphere and upper troposphere. Analysis of balloon-borne and ground-based spectra obtained at 0.0025 cm-1 covering the 700-2200 cm-1 interval will be presented. Results from ground-based 0.02 cm-1 solar spectra from several locations, such as Denver, the South Pole, Mauna Loa, and New Zealand, will also be shown. The 0.0025 cm-1 spectra show many new spectroscopic features. The analysis of these spectra, along with corresponding laboratory spectra, improves the spectral line parameters, and thus the accuracy of trace constituent quantification. Combining the recent balloon flights with earlier flight data acquired since 1978 at 0.02 cm-1 resolution provides trend analyses of several stratospheric trace species. Results for COF2, F22, SF6, and other species will be presented. Analysis of several ground-based solar spectra provides trends for HCl, HF and other species. The retrieval methods used for total column density and altitude distribution for both ground-based and balloon-borne spectra will be presented. These are extended to the analysis of the ground-based spectra to be obtained by the high resolution interferometers of the Network for Detection of Stratospheric Change (NDSC). Progress on the University of Denver studies for the NDSC will be presented, including an intercomparison of solar spectra and trace gas retrievals obtained from simultaneous scans by the high resolution (0.0025 cm-1) interferometers of BRUKER and BOMEM.
[Galaxy/quasar classification based on nearest neighbor method].
Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun
2011-09-01
With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), Two-degree-Field Galaxy Redshift Survey (2dF), Spectroscopic Survey Telescope (SST), Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectra classification. Furthermore, the method used, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. In terms of practical applicability, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and an advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in massive spectral data processing. In conclusion, the results of this work are helpful for the classification of galaxy and quasar spectra.
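The nearest-neighbor rule the abstract relies on reduces to one line: assign the query the label of the closest training spectrum. A minimal sketch with toy flux vectors (real SDSS spectra have thousands of flux bins and a calibrated distance measure):

```python
# Nearest-neighbor (NN) classification of spectra: a query spectrum is
# assigned the label of its closest training spectrum under Euclidean
# distance. No training step is required, matching the NN property
# noted in the abstract. Toy flux vectors, not real survey data.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nn_classify(query, training):
    """training: list of (flux_vector, label); returns nearest label."""
    return min(training, key=lambda t: euclid(query, t[0]))[1]

training = [
    ([1.0, 0.8, 0.6, 0.5], "galaxy"),
    ([1.0, 0.9, 0.7, 0.6], "galaxy"),
    ([0.2, 1.0, 0.3, 1.0], "quasar"),
]
print(nn_classify([0.25, 0.95, 0.35, 0.9], training))  # quasar
```

Because each query only needs distances to the stored spectra, the method parallelizes trivially and new labeled spectra can be appended incrementally, the two advantages highlighted above.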
Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong
2017-01-01
Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor intensive and time consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by previous software. We developed software to track hyoid bone motion semi-automatically. It works even when the hyoid bone is masked by the mandible, and it has been validated in dysphagia patients with stroke. In addition, we added functions for semi-automatic smoothing and segmentation. Data from 30 patients were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value < 0.0001). Relative errors between automatic and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared with the conventional manual tracking method. In addition, the automatic indication to switch from automatic to manual mode in extreme cases, and calibration without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which benefits further statistical analyses such as functional classification and prognostication for dysphagia.
Therefore, this software could provide the researchers in the field of dysphagia with a convenient, useful, and all-in-one platform for analyzing the hyoid bone motion. Further development of our method to track the other swallowing related structures or objects such as epiglottis and bolus and to carry out the 2D curve registration may be needed for a more comprehensive functional data analysis for dysphagia with big data.
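The smoothing and four-phase segmentation steps can be sketched in a highly simplified form: smooth the tracked (x, y) trajectory with a moving average, then split it at the peak of vertical excursion and the subsequent peak of anterior excursion. This is an illustrative simplification under assumed axis conventions, not the authors' actual algorithm.

```python
# Hypothetical sketch of trajectory smoothing and four-phase
# segmentation (elevation, anterior movement, descending, returning).
# Coordinates are toy values; y is elevation, x is anterior excursion.

def moving_average(vals, w=3):
    half = w // 2
    return [sum(vals[max(0, i - half):i + half + 1]) /
            len(vals[max(0, i - half):i + half + 1])
            for i in range(len(vals))]

def segment(xs, ys):
    """Return index boundaries (start, y-peak, x-peak, end) delimiting
    the four phases of the trajectory."""
    i_y = max(range(len(ys)), key=lambda i: ys[i])        # peak elevation
    i_x = max(range(i_y, len(xs)), key=lambda i: xs[i])   # peak anterior after it
    return 0, i_y, i_x, len(xs) - 1

xs = [0, 0, 1, 2, 4, 5, 5, 3, 1, 0]
ys = [0, 2, 4, 5, 5, 4, 3, 2, 1, 0]
sm_y = moving_average(ys)
print(segment(xs, sm_y))  # (0, 3, 5, 9)
```

Smoothing before segmentation matters because frame-to-frame tracking jitter would otherwise create spurious local extrema and split phases incorrectly.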
NASA Astrophysics Data System (ADS)
Malavolta, Luca
2013-10-01
Large astronomical facilities usually provide data reduction pipelines designed to deliver ready-to-use scientific data, and too often astronomers rely on these to avoid the most difficult part of an astronomer's job. Standard data reduction pipelines, however, are usually designed and tested to perform well on data with average Signal to Noise Ratio (SNR), and the issues related to the reduction of data in the very low SNR domain are not properly taken into account. As a result, the information in low-SNR data is not optimally exploited. During the last decade our group has collected thousands of spectra using the GIRAFFE spectrograph at the Very Large Telescope (Chile) of the European Southern Observatory (ESO) to determine the geometrical distance and dynamical state of several Galactic Globular Clusters, but ultimately the analysis has been hampered by systematics in data reduction, calibration and radial velocity measurements. Moreover, these data have never been exploited to derive other information, such as the temperature and metallicity of the stars, because they were considered too noisy for such analyses. In this thesis we focus our attention on the reduction and analysis of spectra with very low SNR. The dataset we analyze comprises 7250 spectra of 2771 stars of the Globular Cluster M 4 (NGC 6121) in the wavelength region 5145-5360 Å obtained with GIRAFFE. Stars from the upper Red Giant Branch down to the Main Sequence have been observed under very different conditions, including nights close to full moon, with SNR reaching ~10 for many spectra in the dataset. We first review the basic steps of data reduction and spectral extraction, adapting techniques well tested in other fields (like photometry) but still under-developed in spectroscopy.
We improve the wavelength dispersion solution and the correction of the radial velocity shift between day-time calibrations and science observations by following a completely different approach from the ESO pipeline. We then analyze in depth the best way to perform sky subtraction and continuum normalization, the most important sources of noise and systematics, respectively, in radial velocity determination and chemical analysis of spectra. The huge number of spectra in our dataset requires an automatic yet robust approach, which we provide. We finally determine radial velocities for the stars in the sample with unprecedented precision with respect to previous works based on similar data, and we recover the same stellar atmosphere parameters as other studies performed on the same cluster but on brighter stars, with higher spectral resolution and a wavelength range ten times larger than ours. In the final chapter of the thesis we face a similar problem from a completely different perspective. High resolution, high SNR data from the High Accuracy Radial Velocity Planet Searcher spectrograph (HARPS) in La Silla (Chile) have been used to calibrate the atmospheric stellar parameters as functions of the main characteristics of Cross-Correlation Functions, built specifically by including spectral lines with different sensitivity to the stellar atmosphere parameters. These tools have been designed to be quick and easy to implement in an instrument pipeline for real-time determination; nevertheless, they provide accurate parameters even for lower SNR spectra.
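The cross-correlation machinery underlying both the radial velocity measurements and the CCF-based calibration can be sketched in miniature: slide a template across an observed spectrum and take the shift that maximizes the correlation. Real CCFs work in log-wavelength/velocity space with weighted line masks; this toy version uses integer pixel lags as an illustrative assumption.

```python
# Toy cross-correlation: find the integer pixel lag that maximises the
# correlation between an observed spectrum and a template. In practice
# the lag axis is velocity and the peak is fit with a model, but the
# principle is the same.

def best_lag(obs, template, max_lag=5):
    def corr(lag):
        pairs = [(obs[i + lag], template[i])
                 for i in range(len(template))
                 if 0 <= i + lag < len(obs)]
        return sum(o * t for o, t in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

template = [0.0, 0.0, 1.0, 0.0, 0.0]
obs = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # line shifted by +2 pixels
print(best_lag(obs, template))  # 2
```

At low SNR the CCF peak degrades gracefully because the correlation sums over many pixels, which is why CCF shape parameters remain usable for parameter estimation even on noisy spectra.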
Zu, Qin; Zhang, Shui-fa; Cao, Yang; Zhao, Hui-yi; Dang, Chang-qing
2015-02-01
Automatic weed identification is the key technique, and also the bottleneck, for implementing variable-rate spraying and precision pesticide application. Accurate, rapid and non-destructive automatic identification of weeds has therefore become an important research direction for precision agriculture. A hyperspectral imaging system was used to capture hyperspectral images of cabbage seedlings and five kinds of weeds (pigweed, barnyard grass, goosegrass, crabgrass and setaria) over the wavelength range 1000 to 2500 nm. In ENVI, the MNF rotation was used to reduce noise and de-correlate the hyperspectral data, reducing the band dimensions from 256 to 11, and regions of interest were extracted to build a spectral library of standard spectra; finally, SAM classification was used to identify cabbages and weeds, and the classification result was good when the spectral angle threshold was set to 0.1 radians. In HSI Analyzer, after selecting training pixels to obtain the standard spectrum, SAM classification was used to distinguish weeds from cabbages. Furthermore, in order to measure the recognition accuracy of weeds quantitatively, statistics for weeds and non-weeds were obtained by comparing the SAM classification image with the best classification effect to the manual classification image. The experimental results demonstrated that, with 5-point smoothing, 0-order derivative and a 7-degree spectral angle, the best classification result was acquired, and the recognition rates for weeds, non-weeds and overall samples were 80%, 97.3% and 96.8%, respectively. The method, combining spectral imaging technology with SAM classification, took full advantage of the fused spectral and image information.
By applying spatial classification algorithms to establish training sets for spectral identification, checking the similarity among spectral vectors at the pixel level, integrating the advantages of spectra and images while considering both accuracy and speed, and extending weed detection to the full field (both between and within crop rows), the method provides relevant analysis tools and means for applications requiring accurate plant information in precision agricultural management.
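The Spectral Angle Mapper used above compares each pixel spectrum to reference spectra by the angle between them as vectors, assigning the pixel to the closest reference if the angle is under a threshold (e.g. 0.1 rad). A minimal sketch with toy spectra:

```python
import math

# Spectral Angle Mapper (SAM): classify a pixel spectrum by the angle
# between it and each reference spectrum. Angles are insensitive to
# overall brightness, only to spectral shape. Toy reference vectors.

def spectral_angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references, threshold=0.1):
    """references: dict name -> spectrum; best name or 'unclassified'."""
    name, angle = min(((n, spectral_angle(pixel, s))
                       for n, s in references.items()),
                      key=lambda t: t[1])
    return name if angle <= threshold else "unclassified"

refs = {"cabbage": [0.2, 0.5, 0.9], "pigweed": [0.4, 0.9, 0.3]}
print(sam_classify([0.21, 0.52, 0.88], refs))  # cabbage
```

Because the angle ignores vector magnitude, SAM tolerates illumination differences between pixels, one reason it pairs well with field imagery.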
Use of mutation spectra analysis software.
Rogozin, I; Kondrashov, F; Glazko, G
2001-02-01
The study and comparison of mutation(al) spectra is an important problem in molecular biology, because these spectra often reflect important features of mutations and their fixation. Such features include the interaction of DNA with various mutagens, the function of repair/replication enzymes, and properties of target proteins. It is known that mutability varies significantly along nucleotide sequences, such that mutations often concentrate at certain positions, called "hotspots," in a sequence. In this paper, we discuss in detail two approaches for mutation spectra analysis: the comparison of mutation spectra with the HG-PUBL program (FTP: sunsite.unc.edu/pub/academic/biology/dna-mutations/hyperg) and hotspot prediction with the CLUSTERM program (www.itba.mi.cnr.it/webmutation; ftp.bionet.nsc.ru/pub/biology/dbms/clusterm.zip). Several other approaches to mutation spectra analysis, such as analysis of target protein structure, revealing hotspot contexts, and multiple spectra comparisons, as well as a number of mutation databases, are briefly described. Mutation spectra in the lacI gene of E. coli and the human p53 gene are used to illustrate various difficulties of such analyses. Copyright 2001 Wiley-Liss, Inc.
Review of automatic detection of pig behaviours by using image analysis
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao
2017-06-01
Automatic detection of lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation workload of staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes progress in pig behaviour detection based on image analysis, and advances in segmentation of the pig body, separation of touching (adhering) pigs, and extraction of characteristic parameters of pig behaviour. Challenges in achieving automatic detection of pig behaviours are summarized.
National Institute of Standards and Technology Data Gateway
SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.
Automatic detection of aflatoxin contaminated corn kernels using dual-band imagery
NASA Astrophysics Data System (ADS)
Ononye, Ambrose E.; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert L.; Cleveland, Thomas E.
2009-05-01
Aflatoxin is a mycotoxin predominantly produced by the Aspergillus flavus and Aspergillus parasiticus fungi that grow naturally in corn, peanuts and a wide variety of other grain products. Corn, like other grains, is used as food for humans and feed for animals. Aflatoxin is known to be carcinogenic; therefore, ingestion of corn infected with the toxin can lead to very serious health problems, such as liver damage, if the level of contamination is high. The US Food and Drug Administration (FDA) has strict guidelines for permissible levels in grain products for both humans and animals. The conventional approach to determining these contamination levels is destructive and invasive, requiring corn kernels to be ground and then chemically analyzed. Unfortunately, each of the analytical methods can take several hours, depending on the quantity, to yield a result. The development of high spectral and spatial resolution imaging sensors has created an opportunity for hyperspectral image analysis to be employed for aflatoxin detection; however, this brings with it the setback of high dimensionality. In this paper, we propose a technique that automatically detects aflatoxin-contaminated corn kernels using dual-band imagery. The method exploits the fluorescence emission spectra of corn kernels captured under 365 nm ultraviolet excitation. Our approach could lead to a non-destructive and non-invasive way of quantifying levels of aflatoxin contamination. The preliminary results shown here demonstrate the potential of our technique for aflatoxin detection.
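A dual-band approach typically reduces each pixel to the ratio of intensities in two emission bands and thresholds that ratio. The sketch below illustrates the idea only: the band choices, ratio direction, and threshold are assumptions, not the paper's values.

```python
# Hypothetical dual-band detection: flag pixels whose fluorescence
# intensity ratio between two bands exceeds a threshold. The threshold
# and band semantics are illustrative, not the authors' parameters.

def flag_contaminated(band_a, band_b, threshold=1.5, eps=1e-9):
    """Per-pixel boolean map: True where band_a / band_b > threshold."""
    return [[(a / (b + eps)) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]

band_a = [[10.0, 40.0],
          [12.0, 45.0]]
band_b = [[20.0, 20.0],
          [18.0, 22.0]]
flags = flag_contaminated(band_a, band_b)
print(flags)  # [[False, True], [False, True]]
```

Collapsing the hyperspectral cube to two informative bands is exactly what sidesteps the high-dimensionality problem noted in the abstract.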
Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.
Denecke, Kerstin
2016-01-01
Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains unexploited, since retrieval and analysis are difficult and time-consuming, and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. Realising such automated analysis requires natural language processing; we therefore analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for the automatic analysis of incident reports, but challenges remain to be solved.
Automatic analysis of microscopic images of red blood cell aggregates
NASA Astrophysics Data System (ADS)
Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.
2015-06-01
Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically characterizes and quantifies the different types of RBC aggregates. The technique could be attractive for adoption as a routine in hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (ensuring repeatability of the analysis).
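A core step in quantifying aggregates is labeling connected components in a binarised image and measuring each object's size, so single cells and rouleaux of different lengths can be distinguished. A minimal sketch on a synthetic binary image (real processing would add thresholding and shape analysis):

```python
from collections import deque

# Label 4-connected components in a 0/1 image and return their sizes,
# a proxy for distinguishing single cells from rouleaux of different
# lengths. Synthetic binary image for illustration.

def component_sizes(img):
    """Sorted sizes of 4-connected components in a 0/1 image."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                size, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:  # breadth-first flood fill
                    cr, cc = q.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                sizes.append(size)
    return sorted(sizes)

img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [1, 0, 0, 1],
       [1, 0, 0, 1]]
print(component_sizes(img))  # [2, 2, 3]
```

The size distribution (and, with shape descriptors, elongation) then feeds the classification of aggregate types described above.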
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.
2017-01-01
Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344
Monitoring Telluric Absorption with CAMAL
NASA Astrophysics Data System (ADS)
Baker, Ashley D.; Blake, Cullen H.; Sliski, David H.
2017-08-01
Ground-based astronomical observations may be limited by telluric water vapor absorption, which is highly variable in time and significantly complicates both spectroscopy and photometry in the near-infrared (NIR). To achieve the sensitivity required to detect Earth-sized exoplanets in the NIR, simultaneous monitoring of precipitable water vapor (PWV) becomes necessary to mitigate the impact of variable telluric lines on radial velocity measurements and transit light curves. To address this issue, we present the Camera for the Automatic Monitoring of Atmospheric Lines (CAMAL), a stand-alone, inexpensive six-inch aperture telescope dedicated to measuring PWV at the Fred Lawrence Whipple Observatory on Mount Hopkins. CAMAL utilizes three narrowband NIR filters to trace the amount of atmospheric water vapor affecting simultaneous observations with the MINiature Exoplanet Radial Velocity Array (MINERVA) and MINERVA-Red telescopes. Here, we present the current design of CAMAL, discuss our data analysis methods, and show results from 11 nights of PWV measurements taken with CAMAL. For seven nights of data we have independent PWV measurements extracted from high-resolution stellar spectra taken with the Tillinghast Reflector Echelle Spectrometer (TRES) also located on Mount Hopkins. We use the TRES spectra to calibrate the CAMAL absolute PWV scale. Comparisons between CAMAL and TRES PWV estimates show excellent agreement, matching to within 1 mm over a 10 mm range in PWV. Analysis of CAMAL’s photometric precision propagates to PWV measurements precise to better than 0.5 mm in dry (PWV < 4 mm) conditions. We also find that CAMAL-derived PWVs are highly correlated with those from a GPS-based water vapor monitor located approximately 90 km away at Kitt Peak National Observatory, with a root mean square PWV difference of 0.8 mm.
PELAN: a pulsed neutron portable probe for UXO and land mine identification
NASA Astrophysics Data System (ADS)
Vourvopoulos, George; Womble, Phillip C.; Paschal, Jonathon
2000-12-01
There has been much work on increasing the sensitivity of detecting metallic objects in soils and other environments. This has led to a problem in discriminating unexploded ordnance (UXO) and landmines from other metallic clutter. PELAN is a small portable system for the detection of explosives. PELAN weighs less than 45 kg and is man-portable. It is based on the principle that explosives and other contraband contain various chemical elements, such as H, C, N, O, etc., in quantities and ratios that differentiate them from innocuous substances. The pulsed neutrons are produced with a 14 MeV neutron generator. Separate gamma-ray spectra from fast-neutron, thermal-neutron and activation reactions are accumulated and analyzed to determine elemental content. The data analysis is performed automatically, and a result indicating whether a threat is present is returned to the operator. PELAN has successfully undergone field demonstrations for explosive detection. In this paper, we discuss the application of PELAN to the problem of differentiating threats from metallic clutter.
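The decision principle stated above (elemental quantities and ratios distinguish explosives from innocuous material) can be sketched as a lookup against a signature table. The signature fractions and tolerance below are rough illustrative values, not PELAN's actual library or algorithm.

```python
# Toy threat classification by elemental composition: compare measured
# atomic fractions (as would be derived from gamma-ray spectra) against
# signatures of known explosives. Values and tolerance are assumptions.

SIGNATURES = {
    # approximate atomic fractions of C, N, O (H omitted for brevity)
    "TNT": {"C": 0.33, "N": 0.14, "O": 0.29},
    "RDX": {"C": 0.14, "N": 0.29, "O": 0.29},
}

def classify(measured, tolerance=0.05):
    """Return the matching threat name, or None for innocuous material."""
    for name, sig in SIGNATURES.items():
        if all(abs(measured.get(el, 0.0) - frac) <= tolerance
               for el, frac in sig.items()):
            return name
    return None

print(classify({"C": 0.32, "N": 0.15, "O": 0.30}))  # TNT
print(classify({"C": 0.90, "N": 0.01, "O": 0.05}))  # None
```

A metallic clutter item yields elemental fractions far from any explosive signature, which is how the automatic analysis returns a simple threat/no-threat result to the operator.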
A general algorithm for peak-tracking in multi-dimensional NMR experiments.
Ravel, P; Kister, G; Malliavin, T E; Delsuc, M A
2007-04-01
We present an algorithmic method allowing automatic tracking of NMR peaks in a series of spectra. It consists of a two-phase analysis. The first phase is a local modeling of the peak displacement between two consecutive experiments using distance matrices. Then, from the coefficients of these matrices, a value graph containing the a priori set of possible paths used by these peaks is generated. On this set, minimization of the target function under constraints by a heuristic approach provides a solution to the peak-tracking problem. This approach has been named GAPT, standing for General Algorithm for NMR Peak Tracking. It has been validated in numerous simulations resembling situations encountered in NMR spectroscopy. We show the robustness and limits of the method in situations with many peak-picking errors and a high local density of peaks. It is then applied to a temperature study of the NMR spectrum of the Lipid Transfer Protein (LTP).
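The first phase, linking peaks between two consecutive spectra via a distance matrix, can be sketched with a greedy matcher that always takes the globally closest remaining pair. The full GAPT method then optimises whole paths across the series; that step is omitted here, and the greedy rule is a simplification for illustration.

```python
# Simplified distance-matrix peak linking between two consecutive
# spectra: enumerate candidate pairs within a displacement bound, then
# greedily accept the closest pairs first. 1D peak positions for
# simplicity; real spectra are multi-dimensional.

def link_peaks(peaks_a, peaks_b, max_dist=1.0):
    """Greedy matching of 1D peak positions; list of (i, j) pairs."""
    candidates = sorted(
        (abs(a - b), i, j)
        for i, a in enumerate(peaks_a)
        for j, b in enumerate(peaks_b)
        if abs(a - b) <= max_dist)
    used_a, used_b, pairs = set(), set(), []
    for d, i, j in candidates:
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            pairs.append((i, j))
    return sorted(pairs)

spec1 = [1.00, 3.50, 7.20]       # peak positions (e.g. ppm)
spec2 = [1.05, 3.40, 7.90]       # same peaks, slightly shifted
print(link_peaks(spec1, spec2))  # [(0, 0), (1, 1), (2, 2)]
```

Chaining such pairwise links across the whole series yields candidate paths; GAPT's second phase scores and selects among them under constraints, which is what makes it robust to peak-picking errors.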
The geophysical processor system: Automated analysis of ERS-1 SAR imagery
NASA Technical Reports Server (NTRS)
Stern, Harry L.; Rothrock, D. Andrew; Kwok, Ronald; Holt, Benjamin
1994-01-01
The Geophysical Processor System (GPS) at the Alaska (U.S.) SAR (Synthetic Aperture Radar) Facility (ASF) uses ERS-1 SAR images as input to generate three types of products: sea ice motion, sea ice type, and ocean wave spectra. The GPS, operating automatically with minimal human intervention, delivers its output to the Archive and Catalog System (ACS) where scientists can search and order the products on line. The GPS has generated more than 10,000 products since it became operational in Feb. 1992, and continues to deliver 500 new products per month to the ACS. These products cover the Beaufort and Chukchi Seas and the western portion of the central Arctic Ocean. More geophysical processing systems are needed to handle the large volumes of data from current and future satellites. Images must be routinely and consistently analyzed to yield useful information for scientists. The current GPS is a good, working prototype on the way to more sophisticated systems.
SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder.
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus, direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user-friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions for arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectrum of an individual pixel or the average spectrum over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder.
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
Trust, control strategies and allocation of function in human-machine systems.
Lee, J; Moray, N
1992-10-01
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
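The notion of trust evolving dynamically with plant performance and faults can be illustrated with a toy first-order update rule. This is a hypothetical stand-in, not the regression model or transfer function fitted in the paper; the coefficients and the linear form are assumptions for illustration only.

```python
# Toy first-order "trust dynamic": trust carries over from the previous
# step (inertia a), is pulled toward current plant performance (gain b),
# and drops sharply when a fault is observed (penalty c). All
# coefficients are hypothetical.

def update_trust(trust, performance, fault, a=0.8, b=0.2, c=0.3):
    """One time step of a simple trust dynamic.

    trust and performance lie in [0, 1]; fault is True/False.
    Returns the updated trust, clamped to [0, 1].
    """
    t = a * trust + b * performance - (c if fault else 0.0)
    return min(1.0, max(0.0, t))
```

An allocation rule in the paper's spirit would then switch to manual control whenever trust falls below the operator's self-confidence.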
Comparison of automatic control systems
NASA Technical Reports Server (NTRS)
Oppelt, W
1941-01-01
This report deals with a reciprocal comparison of an automatic pressure control, an automatic rpm control, an automatic temperature control, and an automatic directional control. It shows the difference between the "faultproof" regulator and the actual regulator which is subject to faults, and develops this difference as far as possible in a parallel manner with regard to the control systems under consideration. Such an analysis affords, particularly in its extension to the faults of the actual regulator, a deep insight into the mechanism of the regulator process.
Spectra-first feature analysis in clinical proteomics - A case study in renal cancer.
Goh, Wilson Wen Bin; Wong, Limsoon
2016-10-01
In proteomics, useful signal may be unobserved or lost due to the lack of confident peptide-spectral matches. Selection of differential spectra, followed by associative peptide/protein mapping, may be a complementary strategy for improving the sensitivity and comprehensiveness of analysis (spectra-first paradigm). This approach complements the standard approach in which functional analysis is performed only on the finalized protein list assembled from peptides identified from the spectra (protein-first paradigm). Based on a case study of renal cancer, we introduce a simple spectra-binning approach, MZ-bin. We demonstrate that differential spectra feature selection using MZ-bin is class-discriminative and can trace relevant proteins via spectra associative mapping. Moreover, proteins identified in this manner are more biologically coherent than those selected directly from the finalized protein list. Analysis of constituent peptides per protein reveals high expression inconsistency, suggesting that the measured protein expressions are, in fact, poor approximations of true protein levels. Furthermore, analysis at the level of constituent peptides may provide higher-resolution insight into the underlying biology: via MZ-bin, we identified for the first time differential splice forms for the known renal cancer marker MAPT. We conclude that the spectra-first analysis paradigm is a complementary strategy to the traditional protein-first paradigm and can provide deeper insight.
Using a fast-neutron spectrometer system to candle luggage for hidden explosives
NASA Astrophysics Data System (ADS)
Lefevre, Harlan W.; Rasmussen, R. J.; Chmelik, Michael S.; Schofield, R. M. S.; Sieger, G. E.; Overley, Jack C.
1997-02-01
A continuous spectrum of neutron energies up to 8.2 MeV is produced by a 4.2-MeV nanosecond-pulsed deuteron beam slowing down in a thick beryllium target. The spectrum from the locally shielded target is collimated into a horizontal fan beam and delivered to a row of 16 plastic scintillators, 6 cm square, located 4 m from the neutron source. The scintillators are coupled to 12-stage photomultiplier tubes, constant-fraction discriminators, time-to-amplitude converters, analog-to-digital converters, and digital memories. Unattenuated neutron-source spectra and background spectra are recorded. Luggage is stepped through the fan beam by an automated lift located 2 m from the neutron source. Transmission spectra are measured and transferred to a computer while the luggage is advanced one pixel width. As the next set of spectra is being measured, the computer calculates neutron attenuations for the previous set, deconvolutes attenuations into projected elemental number densities, and determines the explosive likelihood for each pixel. With a time-averaged deuteron beam current of 1 μA, a suitcase 60 cm long can be automatically imaged in 1600 s. We suggest that this time can be reduced to 8 s or less with straightforward improvements. The following paper describes the explosives recognition algorithm and presents the results of tests with explosives.
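The deconvolution step, converting measured transmissions into projected elemental number densities, follows the Beer-Lambert law: the attenuation line integral at each energy is a linear combination of elemental densities weighted by known cross sections. A minimal two-element, two-energy sketch (cross-section values and the 2x2 restriction are illustrative; the real system solves a larger, overdetermined problem):

```python
# Beer-Lambert deconvolution sketch: T = exp(-sigma @ n), so
# -ln(T) = sigma @ n, solved here by Cramer's rule for the 2x2 case.
import math

def projected_densities(trans, cross_sections):
    """Solve sigma @ n = -ln(T) for two elements at two energies.

    trans: [T1, T2] measured transmissions;
    cross_sections: 2x2 matrix sigma[energy][element].
    Returns the projected number densities (n1, n2).
    """
    y = [-math.log(t) for t in trans]
    (a, b), (c, d) = cross_sections
    det = a * d - b * c
    n1 = (d * y[0] - b * y[1]) / det
    n2 = (a * y[1] - c * y[0]) / det
    return n1, n2
```

With 16 detectors and many time-of-flight energy bins, the corresponding full-size system would be solved per pixel by least squares.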
View_SPECPR: Software for Plotting Spectra (Installation Manual and User's Guide, Version 1.2)
Kokaly, Raymond F.
2008-01-01
This document describes procedures for installing and using the 'View_SPECPR' software system to plot spectra stored in SPECPR (SPECtrum Processing Routines) files. The View_SPECPR software is comprised of programs written in IDL (Interactive Data Language) that run within the ENVI (ENvironment for Visualizing Images) image processing system. SPECPR files are used by earth-remote-sensing scientists and planetary scientists for storing spectra collected by laboratory, field, and remote sensing instruments. A widely distributed SPECPR file is the U.S. Geological Survey (USGS) spectral library that contains thousands of spectra of minerals, vegetation, and man-made materials (Clark and others, 2007). SPECPR files contain reflectance data and associated wavelength and spectral resolution data, as well as meta-data on the time and date of collection and spectrometer settings. Furthermore, the SPECPR file automatically tracks changes to data records through its 'history' fields. For more details on the format and content of SPECPR files, see Clark (1993). For more details on ENVI, see ITT (2008). This program has been updated using an ENVI 4.5/IDL7.0 full license operating on a Windows XP operating system and requires the installation of the iTools components of IDL7.0; however, this program should work with full licenses on UNIX/LINUX systems. This software has not been tested with ENVI licenses on Windows Vista or Apple Operating Systems.
Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.
Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata
2016-10-07
Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. Moreover, many technical problems occur in everyday histological material, which increases the complexity of the task. Thus, a full context-based analysis of histological specimens is also needed in the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained specimens of meningiomas for automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm, to allow for the proper detection of a set of hot-spot fields. The designed whole slide image processing scheme eliminates such artifacts as hemorrhages, folds or stained vessels from the region of interest. To validate the automatic results, a set of 104 meningioma specimens was selected and twenty hot-spots inside them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results, which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were examined properly automatically, with at most two fields of view presenting a technical problem. Next, 13 had three such fields, and only seven specimens did not meet the requirement for automatic examination. Overall, the automatic system identifies hot-spot areas, and especially their maximum points, more reliably.
Analysis of the results confirms the very high concordance between an automatic Ki-67 examination and the expert's results, with a Spearman rho higher than 0.95. The proposed hot-spot selection algorithm with an extended context-based analysis of whole slide images and hot-spot gradual extinction algorithm provides an efficient tool for simulation of a manual examination. The presented results have confirmed that the automatic examination of Ki-67 in meningiomas could be introduced in the near future.
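A "gradual extinction" style of hot-spot selection can be sketched as follows. This is a hedged reconstruction of the general idea only: the suppression radius, the damping factor, and the greedy loop are assumptions, not the published algorithm's parameters.

```python
# Greedy hot-spot picking with gradual extinction: the top-scoring field
# is selected, then nearby fields are down-weighted (rather than removed)
# so subsequent picks spread across the slide instead of piling up in
# one region. The picked field itself is zeroed so it cannot recur.

def select_hot_spots(fields, k, radius=1.0, damp=0.5):
    """fields: list of (x, y, score). Returns k chosen (x, y, score)."""
    scores = [s for _, _, s in fields]
    chosen = []
    for _ in range(k):
        i = max(range(len(fields)), key=lambda j: scores[j])
        x, y, s = fields[i]
        chosen.append((x, y, s))
        for j, (xj, yj, _) in enumerate(fields):
            d2 = (xj - x) ** 2 + (yj - y) ** 2
            if d2 <= radius ** 2:
                scores[j] *= 0 if j == i else damp
    return chosen
```

In the paper's pipeline the scores would come from the artifact-cleaned Ki-67 index map rather than raw counts.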
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a type of drug that can bind to dopamine transporters in living organisms and is often used in SPECT imaging for observation of changes in the activity uptake of dopamine in the striatum. Therefore, it is currently widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of ROIs for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which leads to human error, and they have shortcomings such as long duration, increased effort, and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of the striatal phantom and clinical images. At the same time, the results of automatic analysis with the three templates were compared with results from a conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on automatic semiquantitative analysis results. After comparison, it was found that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
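The conventional semiquantitative measure underlying such analyses is a specific uptake ratio of the striatal ROI against a reference region. The sketch below assumes the common (striatum - reference) / reference form with a non-specific reference region such as the occipital cortex; the exact ROI definitions used in the study are not specified here.

```python
# Specific uptake ratio: excess striatal binding relative to a
# non-specific reference region, from ROI mean counts.

def specific_uptake_ratio(striatum_mean, reference_mean):
    """Return (striatum - reference) / reference for ROI mean counts."""
    return (striatum_mean - reference_mean) / reference_mean
```

Template-based analysis automates the placement of these ROIs by spatially normalizing each scan to the template before the ratio is computed.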
A sensitive continuum analysis method for gamma ray spectra
NASA Technical Reports Server (NTRS)
Thakur, Alakh N.; Arnold, James R.
1993-01-01
In this work we examine ways to improve the sensitivity of the analysis procedure for gamma ray spectra with respect to small differences in the continuum (Compton) spectra. The method developed is applied to analyze gamma ray spectra obtained from planetary mapping by the Mars Observer spacecraft launched in September 1992. Calculated Mars simulation spectra and actual thick target bombardment spectra have been taken as test cases. The principle of the method rests on the extraction of continuum information from Fourier transforms of the spectra. We study how a better estimate of the spectrum from larger regions of the Mars surface will improve the analysis for smaller regions with poorer statistics. Estimation of signal within the continuum is done in the frequency domain which enables efficient and sensitive discrimination of subtle differences between two spectra. The process is compared to other methods for the extraction of information from the continuum. Finally we explore briefly the possible uses of this technique in other applications of continuum spectra.
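Continuum extraction in the frequency domain can be illustrated with a toy low-pass filter: the smooth Compton continuum lives in the lowest Fourier frequencies, while sharp peaks and noise occupy higher ones. The direct O(n^2) DFT and the cutoff value below are illustrative choices, not the paper's calibrated procedure.

```python
# Toy frequency-domain continuum estimate: transform the spectrum,
# zero all but the lowest few frequency pairs, transform back.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)).real / n
            for t in range(n)]

def continuum(spectrum, keep=3):
    """Low-pass the spectrum: keep only the `keep` lowest frequency pairs."""
    X = dft(spectrum)
    n = len(X)
    for f in range(n):
        if min(f, n - f) >= keep:
            X[f] = 0
    return idft(X)
```

Comparing two spectra on these low-frequency components discriminates subtle continuum differences without being swamped by peak statistics.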
Chamrad, Daniel C; Körting, Gerhard; Schäfer, Heike; Stephan, Christian; Thiele, Herbert; Apweiler, Rolf; Meyer, Helmut E; Marcus, Katrin; Blüggel, Martin
2006-09-01
A novel software tool named PTM-Explorer has been applied to LC-MS/MS datasets acquired within the Human Proteome Organisation (HUPO) Brain Proteome Project (BPP). PTM-Explorer enables automatic identification of peptide MS/MS spectra that were not explained in typical sequence database searches. The main focus was detection of PTMs, but PTM-Explorer detects also unspecific peptide cleavage, mass measurement errors, experimental modifications, amino acid substitutions, transpeptidation products and unknown mass shifts. To avoid a combinatorial problem the search is restricted to a set of selected protein sequences, which stem from previous protein identifications using a common sequence database search. Prior to application to the HUPO BPP data, PTM-Explorer was evaluated on excellently manually characterized and evaluated LC-MS/MS data sets from Alpha-A-Crystallin gel spots obtained from mouse eye lens. Besides various PTMs including phosphorylation, a wealth of experimental modifications and unspecific cleavage products were successfully detected, completing the primary structure information of the measured proteins. Our results indicate that a large amount of MS/MS spectra that currently remain unidentified in standard database searches contain valuable information that can only be elucidated using suitable software tools.
Headway Deviation Effects on Bus Passenger Loads : Analysis of Tri-Met's Archived AVL-APC Data
DOT National Transportation Integrated Search
2003-01-01
In this paper we empirically analyze the relationship between transit service headway deviations and passenger loads, using archived data from Tri-Met's automatic vehicle location and automatic passenger counter systems. The analysis employs a two-stage...
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
Lacson, Ronilda C; Barzilay, Regina; Long, William J
2006-10-01
Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
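The aggregation step described above, compiling per-step data and summing it into process metrics, can be sketched minimally. The field names and the derived value-added ratio below are illustrative assumptions, not the patent's actual parameter set.

```python
# Sketch of a manufacturing-technique evaluation engine's aggregation:
# sum per-step cycle and value-added times, then derive a simple lean
# metric for presentation to the user.

def process_metrics(steps):
    """steps: list of dicts with 'cycle_time' and 'value_added_time'.

    Returns total cycle time and the value-added ratio (a common lean
    indicator: fraction of cycle time that adds value).
    """
    total = sum(s["cycle_time"] for s in steps)
    value_added = sum(s["value_added_time"] for s in steps)
    return {
        "total_cycle_time": total,
        "value_added_ratio": value_added / total if total else 0.0,
    }
```

Comparing this ratio before and after a simulated switch of techniques is one way such an engine could support the batch-to-lean transition decision.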
FITspec: A New Algorithm for the Automated Fit of Synthetic Stellar Spectra for OB Stars
NASA Astrophysics Data System (ADS)
Fierro-Santillán, Celia R.; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago A.; Arrieta, Anabel; Arias, Lorena; Sigalotti, Leonardo Di G.
2018-06-01
In this paper we describe the FITspec code, a data mining tool for the automatic fitting of synthetic stellar spectra. The program uses a database of 27,000 CMFGEN models of stellar atmospheres arranged in a six-dimensional (6D) space, where each dimension corresponds to one model parameter. From these models a library of 2,835,000 synthetic spectra was generated covering the ultraviolet, optical, and infrared regions of the electromagnetic spectrum. Using FITspec we adjust the effective temperature and the surface gravity. From the 6D array we also obtain the luminosity, the metallicity, and three parameters for the stellar wind: the terminal velocity (v∞), the β exponent of the velocity law, and the clumping filling factor (Fcl). Finally, the projected rotational velocity (v sin i) can be obtained from the library of stellar spectra. Validation of the algorithm was performed by analyzing the spectra of a sample of eight O-type stars taken from the IACOB spectroscopic survey of Northern Galactic OB stars. The spectral lines used for the adjustment of the analyzed stars are reproduced with good accuracy. In particular, the effective temperatures calculated with FITspec are in good agreement with those derived from spectral type and other calibrations for the same stars. The stellar luminosities and projected rotational velocities are also in good agreement with previous quantitative spectroscopic analyses in the literature. An important advantage of FITspec over traditional codes is that the time required for spectral analyses is reduced from months to a few hours.
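The core of fitting against a precomputed model library can be sketched as a nearest-model search. This is a simplified stand-in for FITspec (the real code adjusts parameters over a 6D grid with line-by-line criteria); the chi-square metric, library layout, and parameter names here are illustrative.

```python
# Library-fit sketch: pick the synthetic spectrum minimizing a
# chi-square against the observation, and report its model parameters.

def chi_square(obs, model):
    """Unweighted chi-square between two sampled spectra."""
    return sum((o - m) ** 2 for o, m in zip(obs, model))

def best_fit(observed, library):
    """library: list of (params_dict, synthetic_spectrum) entries.

    Returns the parameter dict of the best-matching synthetic spectrum.
    """
    params, _ = min(library, key=lambda entry: chi_square(observed, entry[1]))
    return params
```

The speed advantage over iterative model atmosphere fitting comes precisely from this precomputation: the expensive CMFGEN runs are done once, and each star costs only a library search.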
NASA Astrophysics Data System (ADS)
Jamróz, M. H.; Dobrowolski, J. Cz.
2001-05-01
For the most stable Li, Na, and Cu(I) diformates we present the vibrational spectra, supported by potential energy distribution (PED) analysis, and the interaction energies between formic acid and metal formate by the DFT (B3PW91) method. PED analysis of the theoretical spectra forms the basis for the elucidation of the future matrix isolation IR spectra.
Wang, Zhibin; Cao, Yanzhong; Ge, Na; Liu, Xiaomao; Chang, Qiaoying; Fan, Chunlin; Pang, Guo-Fang
2016-11-01
This paper presents an application of ultra-high performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) for simultaneous screening and identification of 427 pesticides in fresh fruit and vegetable samples. Both a full MS scan mode for quantification and an artificial-intelligence-based product-ion scan mode with information-dependent acquisition (IDA), providing automatic MS to MS/MS switching for identification, were conducted in one injection. An in-house collision-induced-dissociation accurate-mass product-ion spectra library containing more than 1700 spectra was developed prior to actual application. Both qualitative and quantitative validations of the method were carried out. The results showed that 97.4 % of the pesticides had a screening detection limit (SDL) of less than 50 μg kg-1, and more than 86.7 % could be confirmed by the accurate MS/MS spectra embodied in the in-house library. Meanwhile, calibration curves covering two orders of magnitude were performed and were linear over the concentration range studied for the selected matrices (from 5 to 500 μg kg-1 for most of the pesticides). The fraction of pesticides with recoveries between 80 and 110 % in four matrices (apple, orange, tomato, and spinach) was 88.7 % at the 10 μg kg-1 spiked level and 86.8 % at 100 μg kg-1. Furthermore, the overall relative standard deviation (RSD, n = 12) was less than 20 % for 94.3 % of the pesticides at the 10 μg kg-1 level and for 98.1 % at the 100 μg kg-1 level. In order to validate its suitability for routine analysis, the method was applied to 448 fruit and vegetable samples purchased in different local markets. The results show that 83.3 % of the analyzed samples had positive findings (higher than the limits of identification and quantification), and 412 commodity-pesticide combinations were identified within our scope. The approach proved to be a cost-effective, time-saving and powerful strategy for routine large-scope screening of pesticides.
NASA Astrophysics Data System (ADS)
Sakho, I.
2014-01-01
Energy positions and quantum defects of the 4s24p4 (1D2, 1S0) ns, nd Rydberg series originating from the 4s24p5 2P3/2∘ ground state and from the 4s24p5 2P1/2∘ metastable state of Kr+ are reported. Calculations are performed using the Screening Constant by Unit Nuclear Charge (SCUNC) method. The results obtained are in good agreement with recent experimental data from the combined ASTRID merged-beam setup and Fourier Transform Ion Cyclotron Resonance device (Bizau et al., 2011), ALS measurements (Hinojosa et al., 2012), and multi-channel R-matrix eigenphase derivative calculations (McLaughlin and Ballance, 2012). In addition, an analysis of the 4s24p4(1D2)nd and the 4s24p4(1S0)nd resonances is given via the SCUNC procedure. The results obtained in our work indicate that the SCUNC formalism may be used to confirm the results of analyses based on the standard quantum-defect expansion formulas. Errors occurring in such analyses can then be automatically detected and corrected via the SCUNC procedure.
Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin
2015-03-01
MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker-MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.
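The automatic outlier removal step (Auto.PCA.R) can be illustrated with a simplified stand-in. The real MetMSLine step works in PCA score space; the sketch below instead iteratively drops samples whose distance from the feature-space centroid exceeds a standard-deviation threshold, which is an assumption made for brevity.

```python
# Simplified iterative outlier removal: recompute the centroid, drop
# samples beyond mean + n_sd standard deviations of centroid distance,
# repeat until stable (a crude analogue of PCA-score outlier pruning).
import math

def remove_outliers(samples, n_sd=2.0):
    """samples: list of equal-length feature lists. Returns kept samples."""
    kept = list(samples)
    while len(kept) > 2:
        dims = len(kept[0])
        centroid = [sum(s[d] for s in kept) / len(kept) for d in range(dims)]
        dists = [math.dist(s, centroid) for s in kept]
        mu = sum(dists) / len(dists)
        sd = math.sqrt(sum((x - mu) ** 2 for x in dists) / len(dists))
        new = [s for s, x in zip(kept, dists) if x <= mu + n_sd * sd]
        if len(new) == len(kept):
            break
        kept = new
    return kept
```

Downstream steps (regression, clustering, MS/MS matching) then run on the pruned sample set, exactly as in the pipeline's subdirectory-per-step layout.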
Edmands, William M. B.; Barupal, Dinesh K.; Scalbert, Augustin
2015-01-01
Summary: MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker—MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). Availability and implementation: All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC–MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. Contact: ScalbertA@iarc.fr PMID:25348215
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Finch, N.; Clegg, S.; Graff, T.; Morris, R. V.; Laura, J.
2017-06-01
We present a Python-based library and graphical interface for the analysis of point spectra. The tool is being developed with a focus on methods used for ChemCam data, but is flexible enough to handle spectra from other instruments.
Earthquake Intensity and Strong Motion Analysis Within SEISCOMP3
NASA Astrophysics Data System (ADS)
Becker, J.; Weber, B.; Ghasemi, H.; Cummins, P. R.; Murjaya, J.; Rudyanto, A.; Rößler, D.
2017-12-01
Measuring and predicting ground-motion parameters, including seismic intensities, for earthquakes is crucial and is the subject of ongoing research in engineering seismology. gempa has developed the new SIGMA module for Seismic Intensity and Ground Motion Analysis. The module is based on the SeisComP3 framework, extending it in the field of seismic hazard assessment and engineering seismology. SIGMA may work with or independently of SeisComP3 by supporting FDSN Web services for importing earthquake or station information and waveforms. It provides a user-friendly and modern graphical interface for semi-automatic and interactive strong-motion data processing. SIGMA provides intensity and (P)SA maps based on GMPEs or recorded data. It calculates the most common strong-motion parameters, e.g. PGA/PGV/PGD, Arias intensity and duration, Tp, Tm, CAV, SED and Fourier, power and response spectra. GMPEs are configurable. Supporting C++ and Python plug-ins, standard and customized GMPEs, including the OpenQuake Hazard Library, can be easily integrated and compared. Originally tailored to specifications by Geoscience Australia and BMKG (Indonesia), SIGMA has become a popular tool among SeisComP3 users concerned with seismic hazard and strong-motion seismology.
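Two of the listed strong-motion parameters have simple standard definitions and can be computed directly from a uniformly sampled acceleration record. This sketch is generic, not SIGMA's implementation: PGA is the peak absolute acceleration, and Arias intensity is (π / 2g) times the time integral of squared acceleration (trapezoidal rule here).

```python
# PGA and Arias intensity from a uniformly sampled acceleration record
# (acc in m/s^2, dt in seconds).
import math

G = 9.81  # gravitational acceleration, m/s^2

def pga(acc):
    """Peak ground acceleration: maximum absolute sample."""
    return max(abs(a) for a in acc)

def arias_intensity(acc, dt):
    """Arias intensity (m/s): (pi / 2g) * integral of a(t)^2 dt."""
    integral = sum((acc[i] ** 2 + acc[i + 1] ** 2) / 2 * dt
                   for i in range(len(acc) - 1))
    return math.pi / (2 * G) * integral
```

Significant duration (also listed above) is conventionally read off the same integral, as the interval containing 5-95 % of the cumulative Arias intensity.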
NASA Astrophysics Data System (ADS)
Chen, Jianbo; Wang, Yue; Liu, Aoxue; Rong, Lixin; Wang, Jingjuan
2018-03-01
Fritillariae Bulbus, the dried bulbs of several species of the genus Fritillaria, is often used in traditional Chinese medicine for the treatment of cough and pulmonary diseases. However, the similar appearances make it difficult to identify different kinds of Fritillariae Bulbus. In this research, Fourier transform near-infrared (FT-NIR) spectroscopy with a reflection fiber probe is employed for the direct testing and automatic identification of different kinds of Fritillariae Bulbus to ensure the authenticity, efficacy and safety. The bulbs can be measured directly without pulverizing. According to the two-dimensional (2D) correlation analysis and statistical analysis, the height ratio of the two peaks near 4860 cm-1 and 4750 cm-1 in the second derivative spectra is specific to the species of Fritillariae Bulbus. This indicates that the relative amount of protein and carbohydrate may be critical to identify Fritillariae Bulbus. With the help of the SIMCA model, the four kinds of Fritillariae Bulbus can be identified correctly by FT-NIR spectroscopy. The results show the potential of FT-NIR spectroscopy with a reflection fiber probe in the rapid testing and identification of Fritillariae Bulbus.
Yang, Guang; Nawaz, Tahir; Barrick, Thomas R; Howe, Franklyn A; Slabaugh, Greg
2015-12-01
Many approaches have been considered for automatic grading of brain tumors by means of pattern recognition with magnetic resonance spectroscopy (MRS). Providing an improved technique which can assist clinicians in accurately identifying brain tumor grades is our main objective. The proposed technique, which is based on the discrete wavelet transform (DWT) of whole-spectral or subspectral information of key metabolites, combined with unsupervised learning, inspects the separability of the extracted wavelet features from the MRS signal to aid the clustering. In total, we included 134 short echo time single voxel MRS spectra (SV MRS) in our study that cover normal controls, low grade and high grade tumors. The combination of DWT-based whole-spectral or subspectral analysis and unsupervised clustering achieved an overall clustering accuracy of 94.8% and a balanced error rate of 7.8%. To the best of our knowledge, it is the first study using DWT combined with unsupervised learning to cluster brain SV MRS. Instead of dimensionality reduction on SV MRS or feature selection using model fitting, our study provides an alternative method of extracting features to obtain promising clustering results.
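A stripped-down version of the pipeline described above, wavelet features followed by unsupervised clustering, can be sketched as follows. It uses a one-level Haar transform and a minimal Lloyd's k-means on synthetic two-class "spectra"; the wavelet, data, and clustering settings of the actual study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_level1(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    pairs = x.reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))

def lloyd_kmeans(X, init_idx, iters=25):
    """Minimal Lloyd's algorithm with fixed initial centres."""
    centers = X[init_idx].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d, axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(init_idx))])
    return labels

# two synthetic "spectrum" classes that differ in peak amplitude
n, length = 40, 64
t = np.linspace(0.0, 1.0, length)
amps = [1.0] * n + [3.0] * n
specs = np.vstack([a * np.exp(-((t - 0.5) / 0.1) ** 2)
                   + 0.05 * rng.standard_normal(length) for a in amps])

feats = np.stack([haar_level1(s)[0] for s in specs])  # approximation coefficients
labels = lloyd_kmeans(feats, [0, len(specs) - 1])
purity = max((labels[:n] == 0).mean() + (labels[n:] == 1).mean(),
             (labels[:n] == 1).mean() + (labels[n:] == 0).mean()) / 2
```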
Methods for automatically analyzing humpback song units.
Rickwood, Peter; Taylor, Andrew
2008-03-01
This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis not only of humpback song but also of other bioacoustic signals where automated analysis is desirable.
NASA Astrophysics Data System (ADS)
Montes, D.; González-Peinado, R.; Tabernero, H. M.; Caballero, J. A.; Marfil, E.; Alonso-Floriano, F. J.; Cortés-Contreras, M.; González Hernández, J. I.; Klutsch, A.; Moreno-Jódar, C.
2018-05-01
We investigated almost 500 stars distributed among 193 binary or multiple systems made of late-F, G-, or early-K primaries and late-K or M dwarf companion candidates. For all of them, we compiled or measured coordinates, J-band magnitudes, spectral types, distances, and proper motions. With these data, we established a sample of 192 physically bound systems. In parallel, we carried out observations with HERMES/Mercator and obtained high-resolution spectra for the 192 primaries and five secondaries. We used these spectra and the automatic STEPAR code for deriving precise stellar atmospheric parameters: Teff, log g, ξ, and chemical abundances for 13 atomic species, including [Fe/H]. After computing Galactocentric space velocities for all the primary stars, we performed a kinematic analysis and classified them in different Galactic populations and stellar kinematic groups of very different ages, which match our own metallicity determinations and isochronal age estimations. In particular, we identified three systems in the halo and 33 systems in the young Local Association, Ursa Major and Castor moving groups, and IC 2391 and Hyades Superclusters. We finally studied the exoplanet-metallicity relation in our 193 primaries and made a list of 13 M-dwarf companions with very high metallicity that can be the targets of new dedicated exoplanet surveys. All in all, our dataset will be of great help for future works on the accurate determination of metallicity of M dwarfs.
Fast Raman single bacteria identification: toward a routine in-vitro diagnostic
NASA Astrophysics Data System (ADS)
Douet, Alice; Josso, Quentin; Marchant, Adrien; Dutertre, Bertrand; Filiputti, Delphine; Novelli-Rousseau, Armelle; Espagnon, Isabelle; Kloster-Landsberg, Meike; Mallard, Frédéric; Perraut, Francois
2016-04-01
Timely microbiological results are essential to allow clinicians to optimize the prescribed treatment, ideally at the initial stage of the therapeutic process. Several approaches, such as molecular biology, have been proposed to solve this issue and to provide the microbiological result in a few hours directly from the sample. However fast and sensitive, those methods are not based on single-cell phenotypic information, which presents several drawbacks and limitations. Optical methods have the advantage of allowing single-cell sensitivity and of probing the phenotype of measured cells. Here we present a process and a prototype that allow automated single-bacteria phenotypic analysis. This prototype is based on the use of Digital In-line Holography techniques combined with a specially designed Raman spectrometer using a dedicated device to capture bacteria. The localization of single cells is finely determined by using holograms and a proper propagation kernel. Holographic images are also used to analyze bacteria in the sample to sort potential pathogens from flora-dwelling species or other biological particles. This accurate localization enables the use of a small confocal volume adapted to the measurement of single cells. Along with the confocal volume adaptation, we have also modified every component of the spectrometer to optimize single-bacteria Raman measurements. This optimization allowed us to acquire informative single-cell spectra using an integration time of only 0.5 s. Identification results obtained with this prototype are presented, based on a database of 65144 Raman spectra acquired automatically from 48 bacterial strains belonging to 8 species.
Spectacle and SpecViz: New Spectral Analysis and Visualization Tools
NASA Astrophysics Data System (ADS)
Earl, Nicholas; Peeples, Molly; JDADF Developers
2018-01-01
A new era of spectroscopic exploration of our universe is being ushered in with advances in instrumentation and next-generation space telescopes. The advent of new spectroscopic instruments has highlighted a pressing need for tools scientists can use to analyze and explore these new data. We have developed Spectacle, a software package for analyzing both synthetic spectra from hydrodynamic simulations and real COS data, with the aim of characterizing the behavior of the circumgalactic medium. It allows easy reduction of spectral data and provides analytic line generation capabilities. Currently, the package is focused on automatic determination of absorption regions and line identification with custom line list support, simultaneous line fitting using Voigt profiles via least-squares or MCMC methods, and multi-component modeling of blended features. Non-parametric measurements, such as equivalent widths, delta v90, and full-width at half-maximum, are available. Spectacle also provides the ability to compose compound models used to generate synthetic spectra, allowing the user to define various LSF kernels and uncertainties and to specify sampling. We also present updates to the visualization tool SpecViz, developed in conjunction with the JWST data analysis tools development team, to aid in the exploration of spectral data. SpecViz is an open source, Python-based interactive 1-D spectral visualization and analysis application built around high-performance interactive plotting. It supports handling general and instrument-specific data and includes advanced tool-sets for filtering and detrending one-dimensional data, along with the ability to isolate absorption regions using slicing and manipulate spectral features via spectral arithmetic. Multi-component modeling is also possible using a flexible model fitting tool-set that supports custom models to be used with various fitting routines.
It also features robust user extensions such as custom data loaders and support for user-created plugins that add new functionality. This work was supported in part by HST AR #13919, HST GO #14268, and HST AR #14560.
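As a small example of one of the non-parametric measurements mentioned above, the sketch below evaluates the equivalent width EW = ∫(1 − F/F_c) dλ for a synthetic normalized Gaussian absorption line. The wavelength grid and line parameters are invented; this is not Spectacle's API.

```python
import numpy as np

def equivalent_width(wave, flux, cont=1.0):
    """EW = integral of (1 - F/Fc) over wavelength (rectangle rule)."""
    dw = wave[1] - wave[0]
    return np.sum(1.0 - flux / cont) * dw

# synthetic normalized spectrum with one Gaussian absorption line
wave = np.linspace(1210.0, 1222.0, 2001)       # hypothetical grid, Angstroms
depth, center, sigma = 0.6, 1216.0, 0.5
flux = 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

ew = equivalent_width(wave, flux)              # analytic: depth*sigma*sqrt(2*pi)
```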
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Four papers are included in Part One of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper: "Content Analysis in Information Retrieval" by S. F. Weiss presents the results of experiments aimed at determining the conditions under which content analysis improves retrieval results as well…
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
Automatic Thesaurus Generation for an Electronic Community System.
ERIC Educational Resources Information Center
Chen, Hsinchun; And Others
1995-01-01
This research reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used include term filtering, automatic indexing, and cluster analysis. The Worm Community System, used by molecular biologists studying the nematode worm C. elegans, was used as the testbed for this research.…
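The three techniques named above (term filtering, automatic indexing, and cluster-style similarity analysis) can be illustrated with a toy term-document matrix. The four mini-documents and the word-length filter are invented for the example and are unrelated to the Worm Community System corpus.

```python
import numpy as np

docs = ["gene expression in c elegans neurons",
        "neurons and gene regulation in worm development",
        "microscopy imaging of worm neurons",
        "imaging gene expression with microscopy"]

# term filtering: drop very short words; automatic indexing: term-document counts
vocab = sorted({w for d in docs for w in d.split() if len(w) > 3})
tdm = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

# similarity analysis: cosine between term vectors; strongly co-occurring
# terms become thesaurus candidates for each other
unit = tdm / np.linalg.norm(tdm, axis=1, keepdims=True)
sim = unit @ unit.T
i = vocab.index("gene")
related = [vocab[j] for j in np.argsort(sim[i])[::-1] if vocab[j] != "gene"][:3]
```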
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
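Interval arithmetic of the kind described can be sketched in a few lines: each operation returns bounds guaranteed to contain the true result, so uncertainty propagates through a formula automatically. The class below is a minimal illustration only (a real package such as INTLAB additionally performs outward rounding).

```python
class Interval:
    """Minimal interval arithmetic for tracking measurement/rounding error bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# propagate a +/-0.01 uncertainty through f(x, y) = x*y - x
x = Interval(1.99, 2.01)
y = Interval(2.99, 3.01)
f = x * y - x
```

The width of `f` bounds the worst-case propagated error with no derivative bookkeeping, which is the appeal of the interval approach for complicated formulas.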
A highly versatile automatized setup for quantitative measurements of PHIP enhancements
NASA Astrophysics Data System (ADS)
Kiryutin, Alexey S.; Sauer, Grit; Hadjiali, Sara; Yurkovskaya, Alexandra V.; Breitzke, Hergen; Buntkowsky, Gerd
2017-12-01
The design and application of a versatile and inexpensive experimental extension to NMR spectrometers is described that allows one to carry out highly reproducible PHIP experiments directly in the NMR sample tube, i.e. under PASADENA conditions, followed by the detection of the NMR spectra of hyperpolarized products with high spectral resolution. Employing this high resolution, it is feasible to study kinetic processes in solution with high accuracy. As a practical example, the dissolution of hydrogen gas in the liquid and the PHIP kinetics during the hydrogenation reaction of Fmoc-O-propargyl-L-tyrosine in acetone-d6 are monitored. The timing of the setup is fully controlled by the pulse-programmer of the NMR spectrometer. By flushing with an inert gas it is possible to efficiently quench the hydrogenation reaction in a controlled fashion and to detect the relaxation of hyperpolarization without a background reaction. The proposed design makes it possible to carry out PHIP experiments in an automatic mode and reliably determine the enhancement of polarized signals.
EMPCA and Cluster Analysis of Quasar Spectra: Construction and Application to Simulated Spectra
NASA Astrophysics Data System (ADS)
Marrs, Adam; Leighly, Karen; Wagner, Cassidy; Macinnis, Francis
2017-01-01
Quasars have complex spectra with emission lines influenced by many factors. Therefore, fully describing a spectrum requires specification of a large number of parameters, such as line equivalent width, blueshift, and ratios. Principal Component Analysis (PCA) aims to construct eigenvectors, or principal components, from the data with the goal of finding a few key parameters that can be used to predict the rest of the spectrum fairly well. Analysis of simulated quasar spectra was used to verify and justify our modified application of PCA. We used a variant of PCA called Weighted Expectation Maximization PCA (EMPCA; Bailey 2012) along with k-means cluster analysis to analyze simulated quasar spectra. Our approach combines both analytical methods to address two known problems with classical PCA. EMPCA uses weights to account for uncertainty and missing points in the spectra. K-means groups similar spectra together to address the nonlinearity of quasar spectra, specifically variance in blueshifts and widths of the emission lines. In producing and analyzing simulations, we first tested the effects of varying equivalent widths and blueshifts on the derived principal components, and explored the differences between standard PCA and EMPCA. We also tested the effects of varying signal-to-noise ratio. Next we used the results of fits to composite quasar spectra (see accompanying poster by Wagner et al.) to construct a set of realistic simulated spectra, and subjected those spectra to the EMPCA/k-means analysis. We concluded that our approach was validated when we found that the mean spectra from our k-means clusters derived from PCA projection coefficients reproduced the trends observed in the composite spectra. Furthermore, our method needed only two eigenvectors to identify both sets of correlations used to construct the simulations, as well as indicating the linear and nonlinear segments.
Comparing this to regular PCA, which can require a dozen or more components, or to direct spectral analysis that may need measurement of 20 fit parameters, shows why the dual application of these two techniques is such a powerful tool.
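The core of the approach, projecting spectra onto a few eigenvectors and working with the projection coefficients, can be sketched with plain unweighted SVD-based PCA. This toy uses one simulated emission-line component and is not the EMPCA algorithm of Bailey (2012), which additionally handles per-pixel weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated spectra: one emission-line component with varying amplitude, plus noise
n, npix = 100, 200
x = np.linspace(0.0, 1.0, npix)
comp = np.exp(-((x - 0.3) / 0.05) ** 2)
amps = rng.uniform(0.5, 2.0, n)
spectra = amps[:, None] * comp[None, :] + 0.01 * rng.standard_normal((n, npix))

mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pc1 = Vt[0]                              # first eigenvector ("principal component")
coeffs = (spectra - mean) @ pc1          # projection coefficient per spectrum
recon = mean + np.outer(coeffs, pc1)     # rank-1 reconstruction
err = np.abs(recon - spectra).max()
```

Here a single coefficient per spectrum recovers the amplitude variation almost exactly; real quasar spectra need the k-means step described above because blueshift and width variations are not captured by a linear component.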
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed to be impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the Matlab technical computing language is provided online.
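A polynomial-phase chirp model of the kind described can be written directly. The sketch below generates one and recovers its sweep-rate term by a brute-force grid search standing in for the paper's Particle Swarm Optimization; all waveform parameters are invented, not SEP data.

```python
import numpy as np

def chirp(t, amp, phi):
    """Polynomial-phase chirp: amp * cos(phi0 + phi1*t + phi2*t**2 + ...)."""
    return amp * np.cos(np.polyval(phi[::-1], t))

t = np.linspace(0.0, 0.05, 500)                       # 50 ms epoch
phi_true = np.array([0.0, 2 * np.pi * 80, 2 * np.pi * 600])
target = chirp(t, 1.0, phi_true)

# brute-force search over the sweep-rate (t**2) phase term
rates = np.linspace(0.0, 2 * np.pi * 1200, 241)
errs = [np.sum((chirp(t, 1.0, np.array([0.0, 2 * np.pi * 80, r])) - target) ** 2)
        for r in rates]
best = rates[int(np.argmin(errs))]
```

PSO replaces the grid with a population of candidate parameter vectors searching all phase coefficients and the amplitude jointly, but the objective (squared error against the recorded SEP) has the same form.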
Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster
NASA Astrophysics Data System (ADS)
Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song
2015-02-01
The failure of the rectangular clutch spring of the automatic slack adjuster directly affects the operation of the automatic slack adjuster. We establish the structural mechanics model of the automatic slack adjuster rectangular clutch spring based on its working principle and mechanical structure. In addition, we load this structural mechanics model into the ANSYS Workbench FEA system to predict the fatigue life of the rectangular clutch spring. FEA results show that the fatigue life of the rectangular clutch spring is 2.0403×10⁵ cycles under the effect of braking loads. In addition, fatigue tests of 20 automatic slack adjusters are carried out on the fatigue test bench to verify the conclusion of the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101×10⁵ cycles, which agrees with the results based on the finite element analysis using the ANSYS Workbench FEA system.
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
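Registration of the misaligned frames is the first step above. As a simplified stand-in for the ICA-based method the paper actually uses, the sketch below estimates a pure integer translation between two frames from the peak of their FFT cross-correlation, on synthetic data.

```python
import numpy as np

def register_shift(ref, img):
    """Integer (dy, dx) translation that maps img back onto ref, found as the
    peak of the circular FFT cross-correlation (toy stand-in for the
    ICA-based registration described in the abstract)."""
    c = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = ref.shape
    # wrap raw peak indices into signed shifts
    return ((dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
img = np.roll(np.roll(ref, -3, axis=0), 5, axis=1)   # frame moved by a known amount
shift = register_shift(ref, img)
```

Applying `np.roll(img, shift, axis=(0, 1))` realigns the frame; the real perfusion problem additionally involves intensity changes from contrast passage, which is what motivates the ICA formulation.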
Arnemann, Philip-Helge; Hessler, Michael; Kampmeier, Tim; Morelli, Andrea; Van Aken, Hugo Karel; Westphal, Martin; Rehberg, Sebastian; Ertmer, Christian
2016-12-01
Life-threatening diseases of critically ill patients are known to derange microcirculation. Automatic analysis of microcirculation would provide a bedside diagnostic tool for microcirculatory disorders and allow immediate therapeutic decisions based upon microcirculation analysis. After induction of general anaesthesia and instrumentation for haemodynamic monitoring, haemorrhagic shock was induced in ten female sheep by stepwise blood withdrawal of 3 × 10 mL per kilogram body weight. Before and after the induction of haemorrhagic shock, haemodynamic variables, samples for blood gas analysis, and videos of conjunctival microcirculation were obtained by incident dark field illumination microscopy. Microcirculatory videos were analysed (1) manually with AVA software version 3.2 by an experienced user and (2) automatically by AVA software version 4.2 for total vessel density (TVD), perfused vessel density (PVD) and proportion of perfused vessels (PPV). Correlation between the two analysis methods was examined by intraclass correlation coefficient and Bland-Altman analysis. The induction of haemorrhagic shock decreased the mean arterial pressure (from 87 ± 11 to 40 ± 7 mmHg; p < 0.001), stroke volume index (from 38 ± 14 to 20 ± 5 mL·m⁻²; p = 0.001) and cardiac index (from 2.9 ± 0.9 to 1.8 ± 0.5 L·min⁻¹·m⁻²; p < 0.001) and increased the heart rate (from 72 ± 9 to 87 ± 11 bpm; p < 0.001) and lactate concentration (from 0.9 ± 0.3 to 2.0 ± 0.6 mmol·L⁻¹; p = 0.001). Manual analysis showed no change in TVD (17.8 ± 4.2 to 17.8 ± 3.8 mm·mm⁻²; p = 0.993), whereas PVD (from 15.6 ± 4.6 to 11.5 ± 6.5 mm·mm⁻²; p = 0.041) and PPV (from 85.9 ± 11.8 to 62.7 ± 29.6%; p = 0.017) decreased significantly. Automatic analysis was not able to identify these changes. Correlation analysis showed a poor correlation between the analysis methods and a wide spread of values in Bland-Altman analysis.
As characteristic changes in microcirculation during ovine haemorrhagic shock were not detected by automatic analysis and correlation between automatic and manual analyses (current gold standard) was poor, the use of the investigated software for automatic analysis of microcirculation cannot be recommended in its current version at least in the investigated model. Further improvements in automatic vessel detection are needed before its routine use.
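The agreement statistic used above can be sketched briefly: a Bland-Altman analysis reports the mean difference (bias) between two measurement methods and its 95% limits of agreement. The data below are simulated with an assumed bias and spread, not the sheep measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(3)
manual = rng.normal(15.0, 3.0, 50)            # e.g. manually measured PVD values
auto = manual + rng.normal(1.0, 2.0, 50)      # automatic method: bias 1, spread 2
bias, lo_loa, hi_loa = bland_altman(auto, manual)
```

Wide limits of agreement relative to the clinically relevant change, as reported in the study, indicate that the two methods cannot be used interchangeably even if their mean bias is small.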
NASA Astrophysics Data System (ADS)
Holgado, G.; Simón-Díaz, S.; Barbá, R. H.; Puls, J.; Herrero, A.; Castro, N.; Garcia, M.; Maíz Apellániz, J.; Negueruela, I.; Sabín-Sanjulián, C.
2018-06-01
Context. The IACOB and OWN surveys are two ambitious, complementary observational projects which have made available a large multi-epoch spectroscopic database of optical high resolution spectra of Galactic massive O-type stars. Aims: Our aim is to study the full sample of (more than 350) O stars surveyed by the IACOB and OWN projects. As a first step towards this aim, we have performed the quantitative spectroscopic analysis of a subsample of 128 stars included in the modern grid of O-type standards for spectral classification. The sample comprises stars with spectral types in the range O3-O9.7 and covers all luminosity classes. Methods: We used the semi-automatized IACOB-BROAD and IACOB-GBAT/FASTWIND tools to determine the complete set of spectroscopic parameters that can be obtained from the optical spectrum of O-type stars. A quality flag was assigned to the outcome of the IACOB-GBAT/FASTWIND analysis for each star, based on a visual evaluation of how the synthetic spectrum of the best fitting FASTWIND model reproduces the observed spectrum. We also benefitted from the multi-epoch character of the IACOB and OWN surveys to perform a spectroscopic variability study of the complete sample, providing two different flags for each star accounting for spectroscopic binarity as well as variability of the main wind diagnostic lines. Results: We obtain - for the first time in a homogeneous and complete manner - the full set of spectroscopic parameters of the "anchors" of the spectral classification system in the O star domain. We provide a general overview of the stellar and wind parameters of this reference sample, as well as updated recipes for the SpT-Teff and SpT-log g calibrations for Galactic O-type stars. We also propose a distance-independent test for the wind-momentum luminosity relationship. 
We evaluate the reliability of our semi-automatized analysis strategy using a subsample of 40 stars extensively studied in the literature, and find a fairly good agreement between our derived effective temperatures and gravities and those obtained by means of more traditional "by-eye" techniques and different stellar atmosphere codes. The overall agreement between the synthetic spectra associated with the IACOB-GBAT/FASTWIND best fitting models and the observed spectra is good for most of the analyzed targets, but 46 stars out of the 128 present a particular behavior of the wind diagnostic lines that cannot be reproduced by our grid of spherically symmetric unclumped models. These are potential targets of interest for more detailed investigations of clumpy winds and/or the existence of additional circumstellar emitting components contaminating the wind diagnostic lines (e.g., disks, magnetospheres). Last, our spectroscopic variability study has led to the detection of clear or likely signatures of spectroscopic binarity in 27% of the stars and small amplitude radial velocity variations in the photospheric lines of another 30%. Additionally, 31% of the investigated stars show variability in the wind diagnostic lines. Tables D.1 and D.2 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/613/A65
Analysis of AIS data of the Bonanza Creek Experimental Forest, Alaska
NASA Technical Reports Server (NTRS)
Spanner, M. A.; Peterson, D. L.
1986-01-01
Airborne Imaging Spectrometer (AIS) data were acquired in 1985 over the Bonanza Creek Experimental Forest, Alaska for the analysis of canopy characteristics including biochemistry. Concurrent with AIS overflights, foliage from fifteen coniferous and deciduous forest stands was analyzed for a variety of biochemical constituents including nitrogen, lignin, protein, and chlorophyll. Preliminary analysis of AIS spectra indicates that the wavelength region between 1450 and 1800 nanometers (nm) displays distinct differences in spectral response for some of the forest stands. A flat field subtraction (forest stand spectra - flat field spectra) of the AIS spectra assisted in the interpretation of features of the spectra that are related to biology.
Effectiveness of an automatic tracking software in underwater motion analysis.
Magalhaes, Fabrício A; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia
2013-01-01
Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker's coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% less manual interventions for DVP than for COM. In conclusion, based on these results, the developed automatic tracking software presented can be used as a valid and useful tool for underwater motion analysis. 
Key points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is to require limited human intervention and supervision, thus allowing short processing times. When tracking underwater movements, the degree of automation of the tracking procedure is influenced by the capability of the algorithm to overcome difficulties linked to the small target size, the low image quality and the presence of background clutter. The newly developed feature-tracking algorithm has shown good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions compared to a commercial software package.
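The Kanade-Lucas-Tomasi idea underlying the tracker can be sketched in its simplest single-window form: solve the 2x2 normal equations built from image gradients for a sub-pixel displacement. The frames below are synthetic Gaussians with a known 0.3-pixel shift; this is an illustration of the principle, not the DVP implementation.

```python
import numpy as np

def lucas_kanade_shift(prev, curr):
    """Single-window Lucas-Kanade step: least-squares solution of
    Ix*dx + Iy*dy = -It accumulated over the whole window."""
    Iy, Ix = np.gradient(prev)                  # per-pixel spatial gradients
    It = curr - prev                            # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                # (dx, dy) estimate in pixels

# synthetic frame pair: a smooth blob moved right by 0.3 pixel
x, y = np.meshgrid(np.linspace(-3.0, 3.0, 64), np.linspace(-3.0, 3.0, 64))
h = 6.0 / 63.0                                  # grid spacing (units per pixel)
frame0 = np.exp(-(x ** 2 + y ** 2))
frame1 = np.exp(-((x - 0.3 * h) ** 2 + y ** 2))

dx_est, dy_est = lucas_kanade_shift(frame0, frame1)
```

The full KLT tracker applies this step iteratively in small windows around each marker, which is where the noise, clutter and target-size difficulties discussed above enter.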
Automatic emotional expression analysis from eye area
NASA Astrophysics Data System (ADS)
Akkoç, Betül; Arslan, Ahmet
2015-02-01
Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, six universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified with a success rate of 84% using artificial neural networks.
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Hoynck, Michael
2005-01-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm is discussed for automatically generating, from solid models of mechanical parts, finite element meshes organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work). Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized, and results are presented from an experimental closed-loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively. The implementation for 3-D work is briefly discussed.
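The 2-D quaternary-tree meshes described above can be sketched as a recursive subdivision driven by a refinement predicate. The criterion below (refine cells touching one corner, mimicking refinement around a stress concentration) is purely hypothetical; the paper's actual criteria come from solid-model geometry and error estimates.

```python
def build_quadtree(x, y, size, refine, depth=0, max_depth=5):
    """Recursively subdivide the square (x, y, size) while `refine` asks
    for it; returns the leaf cells, i.e. a spatially addressable mesh."""
    if depth < max_depth and refine(x, y, size):
        h = size / 2.0
        leaves = []
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            leaves += build_quadtree(x + dx, y + dy, h, refine,
                                     depth + 1, max_depth)
        return leaves
    return [(x, y, size)]

# Hypothetical criterion: keep refining the cell that touches the origin
# until its edge length drops below 0.1.
def near_origin(x, y, size):
    return x == 0.0 and y == 0.0 and size > 0.1

leaves = build_quadtree(0.0, 0.0, 1.0, near_origin)
```

The hierarchy is what makes incremental remeshing cheap: changing the predicate locally only rebuilds the affected subtree, leaving the rest of the mesh (and its analysis results) intact.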
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
Chen, Ping; Harrington, Peter B
2008-02-01
A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times, reflected with respect to the ordinate, and juxtaposed with the positive ion mobility spectra). Temperature-constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that the classification of neural networks using the SIMPLISMA components was statistically better than that of neural network models trained with fused ion mobility spectra (IMS).
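SIMPLISMA selects "pure variables": channels whose intensity is driven by a single component, scored by the ratio of standard deviation to offset-corrected mean across the set of mixture spectra. A minimal sketch of that purity criterion, with hypothetical two-component data:

```python
import numpy as np

def purest_variable(D, alpha=0.01):
    """First SIMPLISMA pure variable: maximize std/(mean + offset).
    D: (n_mixtures, n_variables) spectra; the offset (a fraction alpha of
    the largest mean) guards against low-intensity noise channels."""
    mu = D.mean(axis=0)
    sigma = D.std(axis=0)
    purity = sigma / (mu + alpha * mu.max())
    return int(np.argmax(purity)), purity

# Toy data: variable 0 responds only to component 1, variable 2 only to
# component 2, and variable 1 to both (hence low purity).
c1 = np.array([1.0, 0.2, 0.8, 0.5, 0.1])   # component-1 amounts per mixture
c2 = np.array([0.1, 0.9, 0.3, 0.5, 1.0])   # component-2 amounts per mixture
D = np.column_stack([c1, c1 + c2, c2])     # 5 mixtures x 3 variables
idx, purity = purest_variable(D)
```

Subsequent pure variables are found the same way after deflating the contribution of those already chosen; the resulting component profiles are what the abstract's butterfly representation juxtaposes for the positive and negative ion modes.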
Klapsing, Philipp; Herrmann, Peter; Quintel, Michael; Moerer, Onnen
2017-12-01
Quantitative lung computed tomographic (CT) analysis yields objective data regarding lung aeration but is currently not used in clinical routine, primarily because of the labor-intensive process of manual CT segmentation. Automatic lung segmentation could help to shorten processing times significantly. In this study, we assessed bias and precision of lung CT analysis using automatic segmentation compared with manual segmentation. In this monocentric clinical study, 10 mechanically ventilated patients with mild to moderate acute respiratory distress syndrome were included who had received lung CT scans at 5- and 45-mbar airway pressure during a prior study. Lung segmentations were performed both automatically, using a computerized algorithm, and manually. Automatic segmentation yielded similar lung volumes compared with manual segmentation, with clinically minor differences at both 5 and 45 mbar. At 5 mbar, results were as follows: overdistended lung 49.58 mL (manual, SD 77.37 mL) and 50.41 mL (automatic, SD 77.3 mL), P = .028; normally aerated lung 2142.17 mL (manual, SD 1131.48 mL) and 2156.68 mL (automatic, SD 1134.53 mL), P = .1038; and poorly aerated lung 631.68 mL (manual, SD 196.76 mL) and 646.32 mL (automatic, SD 169.63 mL), P = .3794. At 45 mbar, values were as follows: overdistended lung 612.85 mL (manual, SD 449.55 mL) and 615.49 mL (automatic, SD 451.03 mL), P = .078; normally aerated lung 3890.12 mL (manual, SD 1134.14 mL) and 3907.65 mL (automatic, SD 1133.62 mL), P = .027; and poorly aerated lung 413.35 mL (manual, SD 57.66 mL) and 469.58 mL (automatic, SD 70.14 mL), P = .007. Bland-Altman analyses revealed the following mean biases and limits of agreement at 5 mbar for automatic vs manual segmentation: overdistended lung +0.848 mL (±2.062 mL), normally aerated +14.51 mL (±49.71 mL), and poorly aerated +14.64 mL (±98.16 mL). At 45 mbar, results were as follows: overdistended +2.639 mL (±8.231 mL), normally aerated +17.53 mL (±41.41 mL), and poorly aerated +56.23 mL (±100.67 mL).
Automatic single-CT-image and whole-lung segmentation were faster than manual segmentation (0.17 vs 125.35 seconds [P < .0001] and 10.46 vs 7739.45 seconds [P < .0001], respectively). Automatic lung CT segmentation allows fast analysis of aerated lung regions. A reduction in processing times by more than 99% allows the use of quantitative CT at the bedside. Copyright © 2016 Elsevier Inc. All rights reserved.
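The Bland-Altman statistics reported above (mean bias and limits of agreement) follow directly from the paired differences. A minimal sketch with hypothetical paired volumes, not the study's raw data:

```python
import numpy as np

def bland_altman(manual, automatic):
    """Mean bias and 95% limits of agreement (bias ± 1.96 SD of the
    paired differences, automatic minus manual)."""
    d = np.asarray(automatic, float) - np.asarray(manual, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired lung volumes (mL) from the two segmentation methods.
manual    = [2142.2, 631.7, 49.6, 3890.1, 413.4]
automatic = [2156.7, 646.3, 50.4, 3907.7, 469.6]
bias, (lo, hi) = bland_altman(manual, automatic)
```

A positive bias, as found in the study, means the automatic method systematically measures slightly larger volumes; the limits of agreement indicate how far an individual pair of measurements may plausibly diverge.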
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of these methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia, defined as more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination, with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. This performance is similar to that of the semi-automatic QPS software, which gives a sensitivity of 71% and a specificity of 77%.
Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
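The AUC used to evaluate the heterogeneity index above can be computed without fitting a curve at all: it equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical scores:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative
    one (ties count one half) -- the Mann-Whitney U formulation."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical heterogeneity indices: ischemic vs non-ischemic patients.
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

An AUC of 0.82, as reported, means a randomly chosen ischemic patient's heterogeneity index exceeds a non-ischemic patient's 82% of the time.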
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about 6-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.
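The five procedures listed above form a generic benchmark-to-predictor pipeline. The sketch below illustrates that shape only; it is not the Pse-Analysis API. It uses hypothetical nucleotide-composition features, a nearest-centroid model, and leave-one-out cross validation on a toy benchmark.

```python
import numpy as np

def extract_features(seq, alphabet="ACGT"):
    """(1) Feature extraction: nucleotide composition of a sequence."""
    return np.array([seq.count(a) / len(seq) for a in alphabet])

def nearest_centroid_fit(X, y):
    """(3) Model training: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

def loo_cv(X, y):
    """(4)/(5) Leave-one-out cross validation and prediction quality."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = nearest_centroid_fit(X[mask], y[mask])
        hits += predict(model, X[i]) == y[i]
    return hits / len(y)

seqs = ["AAAC", "AAAG", "AACA", "TTTG", "TTGT", "TTTC"]   # toy benchmark
y = np.array([0, 0, 0, 1, 1, 1])
X = np.vstack([extract_features(s) for s in seqs])
acc = loo_cv(X, y)
```

Step (2), parameter selection, would wrap `loo_cv` in a search over feature or model parameters; the real package additionally parallelizes that search with multiprocessing.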
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven, through many applications in fluid dynamics and structural mechanics, to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large memory space. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
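The principle behind automatic differentiation can be shown with dual numbers: carrying a (value, derivative) pair through every arithmetic operation yields exact derivatives, which is the same chain-rule propagation a source-transformation tool like ADIFOR builds into Fortran code. A minimal forward-mode sketch (not ADIFOR itself):

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def response(x):
    """A stand-in 'system response': f(x) = x^3 + 3x."""
    return x * x * x + 3 * x

x = Dual(2.0, 1.0)      # seed dx/dx = 1 at the design point x = 2
y = response(x)         # y.val = f(2), y.der = f'(2), both exact
```

Unlike finite differences, there is no truncation error; the cost trade-off mentioned in the abstract arises because every intermediate quantity must carry (and store) its derivative components, which multiplies memory and arithmetic with the number of design parameters.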
Comparison of histomorphometrical data obtained with two different image analysis methods.
Ballerini, Lucia; Franke-Stenport, Victoria; Borgefors, Gunilla; Johansson, Carina B
2007-08-01
A common way to determine tissue acceptance of biomaterials is to perform histomorphometrical analysis on histologically stained sections from retrieved samples with surrounding tissue, using various methods. The "time and money consuming" methods and techniques used are often "in-house standards". We address light microscopic investigations of bone tissue reactions on un-decalcified cut and ground sections of threaded implants. In order to screen sections and generate results faster, the aim of this pilot project was to compare results generated with the in-house standard visual image analysis tool (i.e., quantifications and judgements done by the naked eye) with a custom-made automatic image analysis program. The histomorphometrical bone area measurements revealed no significant differences between the methods, but the results for the bony contacts varied significantly. The raw results were in relative agreement, i.e., the values from the two methods were proportional to each other: low bony contact values in the visual method corresponded to low values with the automatic method. With similar-resolution images and further improvements of the automatic method, this difference should become insignificant. A great advantage of using the new automatic image analysis method is that it is time saving: analysis time can be significantly reduced.
Stewart, Brandon D; Payne, B Keith
2008-10-01
The evidence for whether intentional control strategies can reduce automatic stereotyping is mixed. Therefore, the authors tested the utility of implementation intentions--specific plans linking a behavioral opportunity to a specific response--in reducing automatic bias. In three experiments, automatic stereotyping was reduced when participants made an intention to think specific counterstereotypical thoughts whenever they encountered a Black individual. The authors used two implicit tasks and process dissociation analysis, which allowed them to separate contributions of automatic and controlled thinking to task performance. Of importance, the reduction in stereotyping was driven by a change in automatic stereotyping and not controlled thinking. This benefit was acquired with little practice and generalized to novel faces. Thus, implementation intentions may be an effective and efficient means for controlling automatic aspects of thought.
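The process dissociation analysis mentioned above rests on two equations: on congruent trials the stereotypic response is correct, with probability C + (1 - C)A, while on incongruent trials it is an error driven purely by automatic bias, with probability (1 - C)A. Solving these gives separate estimates of controlled (C) and automatic (A) contributions. A minimal sketch with hypothetical response rates:

```python
def process_dissociation(p_congruent, p_incongruent):
    """Estimate control C and automatic bias A from the rate of
    stereotypic responses on congruent vs incongruent trials:
        p_congruent   = C + (1 - C) * A
        p_incongruent = (1 - C) * A
    """
    control = p_congruent - p_incongruent
    automatic = p_incongruent / (1.0 - control)
    return control, automatic

# Hypothetical pattern: 90% stereotypic responses on congruent trials,
# 30% stereotype-consistent errors on incongruent trials.
C, A = process_dissociation(0.90, 0.30)
```

The experiments' key result can be read directly off these estimates: implementation intentions lowered A while leaving C essentially unchanged, indicating the intervention changed automatic stereotyping rather than deliberate control.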
Chu, Kuo-Jui; Chen, Po-Chun; You, Yun-Wen; Chang, Hsun-Yun; Kao, Wei-Lun; Chu, Yi-Hsuan; Wu, Chen-Yi; Shyue, Jing-Jong
2018-04-16
With its low-cost fabrication and ease of modification, paper-based analytical devices have developed rapidly in recent years. Microarrays allow automatic analysis of multiple samples or multiple reactions with minimal sample consumption. While cellulose paper is generally used, its high background in spectrometry outside of the visible range has largely limited its applications to colorimetric analysis. In this work, glass-microfiber paper is used as the substrate for a microarray. The glass microfiber is essentially chemically inert SiOx, and the lower background from this inorganic microfiber can avoid interference from organic analytes in various spectrometers. However, the generally used wax printing fails to wet glass microfibers to form hydrophobic barriers. Therefore, to prepare the hydrophobic-hydrophilic pattern, the glass-microfiber paper was first modified with an octadecyltrichlorosilane (OTS) self-assembled monolayer (SAM) to make the paper hydrophobic. A hydrophilic microarray was then prepared using a CO2 laser scriber that selectively removed the OTS layer in a designed pattern. One-microliter aqueous drops of peptides at various concentrations were then dispensed inside the round patterns where the OTS SAM was removed, while the surrounding area with the OTS layer served as a barrier to separate the drops. The resulting specimen of multiple spots was automatically analyzed with a time-of-flight secondary ion mass spectrometer (ToF-SIMS), and all of the secondary ions were collected. Among the various cluster ion sources that have been developed over the past decade, pulsed C60+ was selected as the primary ion because of its high secondary ion intensity in the high mass region, its minimal alteration of the surface when operating within the static limit, and its spatial resolution at the ~μm level. In the resulting spectra, parent ions of various peptides (in the forms [M+H]+ and [M+Na]+) were readily identified for parallel detection of molecules in a mixture. By normalizing the ion intensity of the peptides with respect to the glass-microfiber matrix ([SiOH]+), a linear calibration curve for each peptide was generated to quantify these components in a mixture. Copyright © 2017 Elsevier B.V. All rights reserved.
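The calibration step described above is an ordinary least-squares line through matrix-normalized intensities; normalizing by the [SiOH]+ reference peak cancels spot-to-spot variation in total ion yield. A minimal sketch with entirely hypothetical intensities:

```python
import numpy as np

# Hypothetical peptide peak intensities and a matrix reference peak
# (standing in for [SiOH]+) at four concentrations.
conc      = np.array([0.5, 1.0, 2.0, 4.0])          # arbitrary units
peptide_i = np.array([120.0, 230.0, 480.0, 950.0])  # raw peptide ion counts
matrix_i  = np.array([1000.0, 980.0, 1010.0, 990.0])
ratio = peptide_i / matrix_i

# Least-squares calibration line through the normalized intensities.
slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]
```

Given the fitted line, an unknown spot's concentration is recovered as `(ratio - intercept) / slope`; a correlation coefficient near 1 is what justifies calling the curve linear.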
A Theory of Term Importance in Automatic Text Analysis.
ERIC Educational Resources Information Center
Salton, G.; And Others
Most existing automatic content analysis and indexing techniques are based on word frequency characteristics applied largely in an ad hoc manner. Contradictory requirements arise in this connection, in that terms exhibiting high occurrence frequencies in individual documents are often useful for high recall performance (to retrieve many relevant…
Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.
Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen
2014-08-01
A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
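The abstract does not describe its detection algorithm in detail; a common minimal approach, sketched here under that assumption, is to reduce each video frame to a mean intensity over the heart region and read the heart rate off the dominant frequency of that trace.

```python
import numpy as np

def heart_rate_hz(signal, fps):
    """Dominant frequency of a mean-intensity trace (DC bin excluded
    by removing the mean before the FFT)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic 10 s trace at 30 fps: a 2.4 Hz "heartbeat" (144 bpm, a
# plausible embryonic rate) plus noise.
fps = 30
t = np.arange(300) / fps
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 2.4 * t) + 0.2 * rng.standard_normal(300)
f_est = heart_rate_hz(trace, fps)
```

Beat-to-beat intervals and arrhythmicity, also reported by the method, require time-domain peak detection on the same trace rather than a single spectral peak, since the FFT only yields the average rate.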
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.
The MATISSE analysis of large spectral datasets from the ESO Archive
NASA Astrophysics Data System (ADS)
Worley, C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Vernisse, Y.; Ordenovic, C.; Bijaoui, A.
2010-12-01
The automated stellar classification algorithm, MATISSE, has been developed at the Observatoire de la Côte d'Azur (OCA) in order to determine stellar temperatures, gravities and chemical abundances for large datasets of stellar spectra. The Gaia Data Processing and Analysis Consortium (DPAC) has selected MATISSE as one of the key programmes to be used in the analysis of the Gaia Radial Velocity Spectrometer (RVS) spectra. MATISSE is currently being used to analyse large datasets of spectra from the ESO archive with the primary goal of producing advanced data products to be made available in the ESO database via the Virtual Observatory. This is also an invaluable opportunity to identify and address issues that can be encountered with the analysis of large samples of real spectra prior to the launch of Gaia in 2012. The analysis of the archived spectra from the FEROS spectrograph is currently underway, and preliminary results are presented.
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
Automated Lipid A Structure Assignment from Hierarchical Tandem Mass Spectrometry Data
NASA Astrophysics Data System (ADS)
Ting, Ying S.; Shaffer, Scott A.; Jones, Jace W.; Ng, Wailap V.; Ernst, Robert K.; Goodlett, David R.
2011-05-01
Infusion-based electrospray ionization (ESI) coupled to multiple-stage tandem mass spectrometry (MSn) is a standard methodology for investigating lipid A structural diversity (Shaffer et al. J. Am. Soc. Mass. Spectrom. 18(6), 1080-1092, 2007). Annotation of these MSn spectra, however, has remained a manual, expert-driven process. In order to keep up with the data acquisition rates of modern instruments, we devised a computational method to annotate lipid A MSn spectra rapidly and automatically, which we refer to as the hierarchical tandem mass spectrometry (HiTMS) algorithm. As a first-pass tool, HiTMS aids expert interpretation of lipid A MSn data by providing the analyst with a set of candidate structures that may then be confirmed or rejected. HiTMS deciphers the signature ions (e.g., A-, Y-, and Z-type ions) and neutral losses of MSn spectra using a species-specific library based on general prior structural knowledge of the given lipid A species under investigation. Candidates are selected by calculating the correlation between theoretical and acquired MSn spectra. At a false discovery rate of less than 0.01, HiTMS correctly assigned 85% of the structures in a library of 133 manually annotated Francisella tularensis subspecies novicida lipid A structures. Additionally, HiTMS correctly assigned 85% of the structures in a smaller library of lipid A species from Yersinia pestis, demonstrating that it may be used across species.
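The candidate-selection step, correlating theoretical against acquired spectra, can be sketched as follows. The candidate names and binned intensities are entirely hypothetical; HiTMS additionally scores signature ions and neutral losses, which this sketch omits.

```python
import numpy as np

def best_candidate(acquired, library):
    """Rank candidate structures by Pearson correlation between the
    acquired intensity vector and each theoretical fragment pattern."""
    scores = {name: np.corrcoef(acquired, theo)[0, 1]
              for name, theo in library.items()}
    return max(scores, key=scores.get), scores

# Hypothetical binned fragment intensities for three candidate structures.
library = {
    "hexa-acylated":  np.array([10.0, 0.0, 5.0, 1.0, 0.0]),
    "penta-acylated": np.array([0.0, 8.0, 1.0, 6.0, 0.0]),
    "tetra-acylated": np.array([1.0, 1.0, 1.0, 1.0, 8.0]),
}
acquired = np.array([9.0, 0.5, 4.5, 1.5, 0.2])   # noisy "hexa-like" spectrum
best, scores = best_candidate(acquired, library)
```

Thresholding these correlation scores against a decoy distribution is one way to obtain the false discovery rate the abstract reports.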
Wang, Jian; Anania, Veronica G.; Knott, Jeff; Rush, John; Lill, Jennie R.; Bourne, Philip E.; Bandeira, Nuno
2014-01-01
The combination of chemical cross-linking and mass spectrometry has recently been shown to constitute a powerful tool for studying protein–protein interactions and elucidating the structure of large protein complexes. However, computational methods for interpreting the complex MS/MS spectra from linked peptides are still in their infancy, making the high-throughput application of this approach largely impractical. Because of the lack of large annotated datasets, most current approaches do not capture the specific fragmentation patterns of linked peptides and therefore are not optimal for the identification of cross-linked peptides. Here we propose a generic approach to address this problem and demonstrate it using disulfide-bridged peptide libraries to (i) efficiently generate large mass spectral reference data for linked peptides at a low cost and (ii) automatically train an algorithm that can efficiently and accurately identify linked peptides from MS/MS spectra. We show that using this approach we were able to identify thousands of MS/MS spectra from disulfide-bridged peptides through comparison with proteome-scale sequence databases and significantly improve the sensitivity of cross-linked peptide identification. This allowed us to identify 60% more direct pairwise interactions between the protein subunits in the 20S proteasome complex than existing tools on cross-linking studies of the proteasome complexes. The basic framework of this approach and the MS/MS reference dataset generated should be valuable resources for the future development of new tools for the identification of linked peptides. PMID:24493012
Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as detection method, and an industrial NIR scanner was applied and tested for large scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) Linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein was made based on NIR spectra and the estimated concentrations of protein were used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5–100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks where a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalence of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets to different product categories. Manual laborious grading can be avoided. Incidences of WB from different farms and flocks can be tracked and information can be used to understand and point out main causes for WB in the chicken production. This knowledge can be used to improve the production procedures and reduce today’s extensive occurrence of WB. PMID:28278170
A high-resolution oxygen A-band spectrometer (HABS) and its radiation closure
NASA Astrophysics Data System (ADS)
Min, Q.; Yin, B.; Li, S.; Berndt, J.; Harrison, L.; Joseph, E.; Duan, M.; Kiedron, P.
2014-02-01
The pressure dependence of oxygen A-band absorption enables the retrieval of vertical profiles of aerosol and cloud properties from oxygen A-band spectrometry. To improve the understanding of oxygen A-band inversions and their utility, we developed a high-resolution oxygen A-band spectrometer (HABS) and deployed it at the Howard University Beltsville site during the NASA Discover Air-Quality Field Campaign in July 2011. The HABS measures solar direct-beam and zenith diffuse radiation automatically through a telescope. It exhibits excellent performance: a stable spectral response ratio, high signal-to-noise ratio (SNR), high spectral resolution (0.16 nm), and high out-of-band rejection (10^-5). To evaluate the spectral performance of HABS, a HABS simulator was developed by combining the discrete ordinates radiative transfer (DISORT) code with the high-resolution transmission molecular absorption database HITRAN2008. The simulator uses a double-k approach to reduce the computational cost. The HABS-measured spectra are consistent with the corresponding simulated spectra. For direct-beam spectra, the confidence intervals (95%) of the relative difference between measurements and simulation are (-0.06, 0.05) and (-0.08, 0.09) for solar zenith angles of 27° and 72°, respectively. The main differences between them occur at or near the strong oxygen absorption line centers. They are mainly caused by the noise/spikes of the HABS-measured spectra, a result of the combined effects of weak signal, low SNR, and errors in wavelength registration and absorption line parameters. The high-resolution oxygen A-band measurements from HABS can constrain active radar retrievals for more accurate cloud optical properties, particularly for multi-layer clouds and for mixed-phase clouds.
ERIC Educational Resources Information Center
Wang, Lihua
2012-01-01
A new method is introduced for teaching group theory analysis of the infrared spectra of organometallic compounds using molecular modeling. The main focus of this method is to enhance student understanding of the symmetry properties of vibrational modes and of the group theory analysis of infrared (IR) spectra by using visual aids provided by…
Spectral analysis of the structure of ultradispersed diamonds
NASA Astrophysics Data System (ADS)
Uglov, V. V.; Shimanski, V. I.; Rusalsky, D. P.; Samtsov, M. P.
2008-07-01
The structure of ultradispersed diamonds (UDD) is studied by spectral methods. The presence of a diamond crystal phase in the UDD is found based on x-ray analysis and Raman spectra. The Raman spectra also show sp2- and sp3-hybridized carbon. Analysis of IR absorption spectra suggests that the composition of functional groups present in the particles changes during the treatment.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
Yu, Marcia M L; Sandercock, P Mark L
2012-01-01
During the forensic examination of textile fibers, fibers are usually mounted on glass slides for visual inspection and identification under the microscope. One method that has the capability to accurately identify single textile fibers without subsequent demounting is Raman microspectroscopy. The effect of the mountant Entellan New on the Raman spectra of fibers was investigated to determine if it is suitable for fiber analysis. Raman spectra of synthetic fibers mounted in three different ways were collected and subjected to multivariate analysis. Principal component analysis score plots revealed that while spectra from different fiber classes formed distinct groups, fibers of the same class formed a single group regardless of the mounting method. The spectra of bare fibers and those mounted in Entellan New were found to be statistically indistinguishable by analysis of variance calculations. These results demonstrate that fibers mounted in Entellan New may be identified directly by Raman microspectroscopy without further sample preparation. © 2011 American Academy of Forensic Sciences.
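The principal component score plots used above can be sketched with an SVD of mean-centered spectra. The toy "Raman spectra" below are hypothetical: two fiber classes with distinct band patterns, with a constant offset added to half the samples to mimic a mounting-medium background; class separation should survive that offset.

```python
import numpy as np

rng = np.random.default_rng(2)
base_a = np.array([5.0, 1.0, 0.0, 3.0])   # hypothetical class-A band pattern
base_b = np.array([0.0, 4.0, 5.0, 1.0])   # hypothetical class-B band pattern
X = np.vstack([base_a + 0.05 * rng.standard_normal(4) for _ in range(5)] +
              [base_b + 0.05 * rng.standard_normal(4) for _ in range(5)])
X[::2] += 0.3          # constant offset mimicking a mountant background

# PCA via SVD of the mean-centered data; scores are the coordinates
# plotted in a PC1/PC2 score plot.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T
sep = abs(scores[:5, 0].mean() - scores[5:, 0].mean())
```

Because the between-class spectral difference dominates the variance, PC1 aligns with it and the two classes form distinct clusters in the score plot, which is the pattern the study observed regardless of mounting method.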
ImatraNMR: Novel software for batch integration and analysis of quantitative NMR spectra
NASA Astrophysics Data System (ADS)
Mäkelä, A. V.; Heikkilä, O.; Kilpeläinen, I.; Heikkinen, S.
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to traditional quantitative 1D 1H and 13C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single spectra or a few at a time, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing large numbers of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; currently, however, it has only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request.
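The core batch operation, integrating the same signal regions across many spectra and emitting a table for downstream analysis, can be sketched as follows (ImatraNMR itself is Java; this is an illustrative stand-in with synthetic Gaussian peaks, using trapezoidal integration).

```python
import numpy as np

def integrate_regions(ppm, spectra, regions):
    """Trapezoidal integral of each (lo, hi) ppm region in every spectrum;
    returns an (n_spectra, n_regions) matrix ready for CSV export."""
    out = np.zeros((len(spectra), len(regions)))
    for i, spec in enumerate(spectra):
        for j, (lo, hi) in enumerate(regions):
            m = (ppm >= lo) & (ppm <= hi)
            s, p = spec[m], ppm[m]
            out[i, j] = np.sum((s[1:] + s[:-1]) * np.diff(p)) / 2.0
    return out

# Two synthetic 1D spectra on a shared ppm axis, one Gaussian peak each;
# the second has double the amplitude, so double the integral.
ppm = np.linspace(0, 10, 1001)
peak = lambda center, amp: amp * np.exp(-0.5 * ((ppm - center) / 0.05) ** 2)
spectra = [peak(2.0, 1.0), peak(2.0, 2.0)]
areas = integrate_regions(ppm, spectra, [(1.5, 2.5)])
csv_rows = [",".join(f"{v:.4f}" for v in row) for row in areas]
```

Because quantitative NMR integrals are proportional to the number of contributing nuclei, ratios of such region integrals across a batch of spectra are exactly the quantities a spreadsheet or Matlab script would consume next.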
ERIC Educational Resources Information Center
Ma, Dongmei; Yu, Xiaoru; Zhang, Haomin
2017-01-01
The present study aimed to investigate second language (L2) word-level and sentence-level automatic processing among English as a foreign language students through a comparative analysis of students with different proficiency levels. As a multidimensional and dynamic construct, automaticity is conceptualized as processing speed, stability, and…
NASA Astrophysics Data System (ADS)
Zhu, Ying; Tan, Tuck Lee
2016-04-01
An effective and simple analytical method using Fourier transform infrared (FTIR) spectroscopy to distinguish wild-grown, high-quality Ganoderma lucidum (G. lucidum) from cultivated specimens is of essential importance for its quality assurance and medicinal value estimation. Commonly used chemical and analytical methods based on the full spectrum are not very effective for detection and interpretation owing to the complexity of the herbal medicine. In this study, two penalized discriminant analysis models, penalized linear discriminant analysis (PLDA) and the elastic net (Elnet), using FTIR spectroscopy were explored for the purpose of discrimination and interpretation. The classification performance of the two penalized models was compared with two widely used multivariate methods, principal component discriminant analysis (PCDA) and partial least squares discriminant analysis (PLSDA). The Elnet model, combining L1 and L2 norm penalties, enabled automatic selection of a small number of informative spectral absorption bands and gave an excellent classification accuracy of 99% for discriminating between spectra of wild-grown and cultivated G. lucidum. Its classification performance was superior to that of the PLDA model in a pure L1 setting and outperformed the PCDA and PLSDA models using the full wavelength range. The well-performed selection of informative spectral features leads to a substantial reduction in model complexity and an improvement in classification accuracy, and is particularly helpful for quantitative interpretation of the major chemical constituents of G. lucidum with regard to its anti-cancer effects.
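The elastic net's combination of L1 (sparsity, i.e. band selection) and L2 (shrinkage, grouping) penalties can be illustrated with a minimal numpy-only proximal-gradient solver. This is a generic sketch of the penalty used in the Elnet model, not the authors' implementation; the data here are synthetic, with a sparse true coefficient vector standing in for a few informative absorption bands.

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, lr=None, n_iter=500):
    """Minimal proximal-gradient solver for
    (1/2n)||Xw - y||^2 + alpha*(l1_ratio*||w||_1 + (1-l1_ratio)/2*||w||_2^2)."""
    n, p = X.shape
    if lr is None:
        # step size = 1 / Lipschitz constant of the smooth part
        lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - lr * grad
        # soft-thresholding handles the L1 part, shrinkage the L2 part
        w = np.sign(z) * np.maximum(np.abs(z) - lr * alpha * l1_ratio, 0.0)
        w /= 1.0 + lr * alpha * (1.0 - l1_ratio)
    return w
```

On synthetic spectra where only a couple of channels carry signal, the L1 term drives the coefficients of uninformative channels to exactly zero, which is the band-selection behavior the abstract describes.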
Approaches to the automatic generation and control of finite element meshes
NASA Technical Reports Server (NTRS)
Shephard, Mark S.
1987-01-01
The algorithmic approaches being taken to develop finite element mesh generators capable of automatically discretizing general domains without user intervention are discussed. It is demonstrated that, because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators, as well as to their integration with geometric modeling and adaptive analysis procedures.
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro
2015-01-01
Nowadays, insight into human-machine interaction is a critical topic in the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors and may inform the practical use of automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in the mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768
AUTOBA: automation of backbone assignment from HN(C)N suite of experiments.
Borkar, Aditi; Kumar, Dinesh; Hosur, Ramakrishna V
2011-07-01
Development of efficient strategies and automation represent important milestones of progress in rapid structure determination efforts in proteomics research. In this context, we present an efficient algorithm named AUTOBA (Automatic Backbone Assignment), designed to automate the assignment protocol based on the HN(C)N suite of experiments. Depending upon the spectral dispersion, the user can record 2D or 3D versions of the experiments for assignment. The algorithm takes as inputs (i) the protein primary sequence and (ii) peak lists from the user-defined HN(C)N suite of experiments. In the end, one obtains H(N), (15)N, C(α), and C' assignments (in common BMRB format) for the individual residues along the polypeptide chain. The success of the algorithm has been demonstrated not only with experimental spectra recorded on two small globular proteins, ubiquitin (76 aa) and M-crystallin (85 aa), but also with simulated spectra of 27 other proteins using assignment data from the BMRB.
AUTONOMOUS GAUSSIAN DECOMPOSITION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, as well as their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
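The core idea of AGD — using derivative spectroscopy to seed a least-squares fit of Gaussian components — can be sketched as follows. This is a simplified stand-in, not the published AGD code: component locations are guessed from negative local minima of the smoothed second derivative, and the guesses are refined with SciPy's curve_fit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def gaussians(x, *params):
    """Sum of Gaussians; params = (amp, mean, sigma) triples."""
    y = np.zeros_like(x)
    for a, m, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return y

def initial_guesses(x, y, smooth=2.0):
    """Guess component count and locations from negative local minima of the
    smoothed second derivative -- derivative spectroscopy in spirit, not the
    actual AGD machine-learning step."""
    d2 = gaussian_filter1d(y, smooth, order=2)
    idx = [i for i in range(1, len(x) - 1)
           if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < 0]
    # one (amplitude, center, width) triple per detected component
    return [p for i in idx for p in (y[i], x[i], 1.0)]

x = np.linspace(-10.0, 10.0, 400)
truth = gaussians(x, 1.0, -3.0, 1.2, 0.8, 2.5, 0.9)  # two blended components
p0 = initial_guesses(x, truth)
popt, _ = curve_fit(gaussians, x, truth, p0=p0)
```

Because the derivative step also fixes the number of components, the nonlinear fit starts close to the answer, which is what makes the approach robust enough to run unsupervised.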
Trends of Science Education Research: An Automatic Content Analysis
ERIC Educational Resources Information Center
Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien
2010-01-01
This study used scientometric methods to conduct an automatic content analysis on the development trends of science education research from the published articles in the four journals of "International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education" from 1990 to 2007. The…
ERIC Educational Resources Information Center
Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.
1998-01-01
Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…
Automatic Online Lecture Highlighting Based on Multimedia Analysis
ERIC Educational Resources Information Center
Che, Xiaoyin; Yang, Haojin; Meinel, Christoph
2018-01-01
Textbook highlighting is widely considered to be beneficial for students. In this paper, we propose a comprehensive solution to highlight the online lecture videos in both sentence- and segment-level, just as is done with paper books. The solution is based on automatic analysis of multimedia lecture materials, such as speeches, transcripts, and…
Automatic Text Analysis Based on Transition Phenomena of Word Occurrences
ERIC Educational Resources Information Center
Pao, Miranda Lee
1978-01-01
Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
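The selection of index terms directly from a word frequency list can be sketched briefly. The transition point T = (-1 + sqrt(1 + 8·n1))/2, where n1 is the number of words occurring exactly once, follows Goffman's formulation; the upper bound of 2T and the tokenizer are illustrative assumptions, not Pao's exact procedure.

```python
import math
import re
from collections import Counter

def candidate_index_terms(text, stopwords=frozenset()):
    """Pick mid-frequency words as index-term candidates, approximating
    Goffman's transition region."""
    freqs = Counter(w for w in re.findall(r"[a-z]+", text.lower())
                    if w not in stopwords)
    n1 = sum(1 for c in freqs.values() if c == 1)
    # Goffman transition point: T = (-1 + sqrt(1 + 8*n1)) / 2
    t = (-1 + math.sqrt(1 + 8 * n1)) / 2
    # keep words whose frequency falls in the transition band [T, 2T]
    return sorted(w for w, c in freqs.items() if t <= c <= 2 * t)
```

Words that are too frequent (function words) or too rare (noise) fall outside the band, leaving the mid-frequency words most likely to be content-bearing.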
Automatic Coding of Dialogue Acts in Collaboration Protocols
ERIC Educational Resources Information Center
Erkens, Gijsbert; Janssen, Jeroen
2008-01-01
Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…
The Effect of Chain Length on Mid-Infrared and Near-Infrared Spectra of Aliphatic 1-Alcohols.
Kwaśniewicz, Michał; Czarnecki, Mirosław A
2018-02-01
The effect of the chain length on mid-infrared (MIR) and near-infrared (NIR) spectra of aliphatic 1-alcohols from methanol to 1-decanol was examined in detail. Of particular interest were the spectra-structure correlations in the NIR region and the correlation between MIR and NIR spectra of 1-alcohols. An application of two-dimensional correlation analysis (2D-COS) and chemometric methods provided comprehensive information on spectral changes in the data set. Principal component analysis (PCA) and cluster analysis showed that the spectra of methanol, ethanol, and 1-propanol are noticeably different from the spectra of higher 1-alcohols. The similarity between the spectra increases with increasing chain length; hence, the most similar are the spectra of 1-nonanol and 1-decanol. Two-dimensional hetero-correlation analysis is very helpful for identifying the origin of bands and may guide selection of the best spectral ranges for chemometric analysis. As shown, normalization of the spectra emphasizes the intensity changes in various spectral regions and provides information not accessible from the raw data. The spectra of alcohols cannot be represented as a sum of the CH3, CH2, and OH group spectra, since the OH group is involved in hydrogen bonding. As a result, the spectral changes of this group are nonlinear and its spectral profile cannot be properly resolved. Finally, this work provides substantial evidence that the degree of self-association of 1-alcohols decreases with increasing chain length because of the growing importance of hydrophobic interactions. For butyl alcohol and higher 1-alcohols the hydrophobic interactions are more important than the OH···OH interactions. Therefore, methanol, ethanol, and 1-propanol have unlimited miscibility with water, whereas 1-butanol and higher 1-alcohols have limited miscibility with water.
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree are important steps in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming, since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are therefore desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume, compared to several hours to a few days for manual analysis.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to perform the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads to generate and process parallel programs, taking the available hardware into account. Tests made with our system prototype show that the thread concept, combined with the agent paradigm, is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
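The data-parallel pattern described — running the same image-analysis subtask on different data in concurrent threads and reassembling the result — can be sketched with Python's standard thread pool. The band-splitting scheme and the thresholding subtask are illustrative choices, not the authors' system.

```python
from concurrent.futures import ThreadPoolExecutor

def split_rows(image, n_parts):
    """Split an image (list of pixel rows) into contiguous row bands."""
    step = (len(image) + n_parts - 1) // n_parts
    return [image[i:i + step] for i in range(0, len(image), step)]

def threshold_band(band, cutoff=128):
    """Per-band task: a simple pixelwise operation standing in for an
    image-analysis subtask."""
    return [[255 if px >= cutoff else 0 for px in row] for row in band]

def parallel_threshold(image, cutoff=128, workers=4):
    """Run the same subtask on each band in parallel and reassemble --
    the data-parallel pattern the thread concept describes."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(lambda b: threshold_band(b, cutoff),
                            split_rows(image, workers)))
    return [row for band in parts for row in band]
```

Because the bands share no mutable state, the parallel result is identical to the sequential one, which is the safety property an automatic parallelizer must guarantee.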
A phase and frequency alignment protocol for 1H MRSI data of the prostate.
Wright, Alan J; Buydens, Lutgarde M C; Heerschap, Arend
2012-05-01
(1)H MRSI of the prostate reveals relative metabolite levels that vary according to the presence or absence of tumour, providing a sensitive method for the identification of patients with cancer. Current interpretations of prostate data rely on quantification algorithms that fit model metabolite resonances to individual voxel spectra and calculate relative levels of metabolites, such as choline, creatine, citrate and polyamines. Statistical pattern recognition techniques can potentially improve the detection of prostate cancer, but these analyses are hampered by artefacts and sources of noise in the data, such as variations in phase and frequency of resonances. Phase and frequency variations may arise as a result of spatial field gradients or local physiological conditions affecting the frequency of resonances, in particular those of citrate. Thus, there are unique challenges in developing a peak alignment algorithm for these data. We have developed a frequency and phase correction algorithm for automatic alignment of the resonances in prostate MRSI spectra. We demonstrate, with a simulated dataset, that alignment can be achieved to a phase standard deviation of 0.095 rad and a frequency standard deviation of 0.68 Hz for the citrate resonances. Three parameters were used to assess the improvement in peak alignment in the MRSI data of five patients: the percentage of variance in all MRSI spectra explained by their first principal component; the signal-to-noise ratio of a spectrum formed by taking the median value of the entire set at each spectral point; and the mean cross-correlation between all pairs of spectra. These parameters showed a greater similarity between spectra in all five datasets and the simulated data, demonstrating improved alignment for phase and frequency in these spectra. This peak alignment program is expected to improve pattern recognition significantly, enabling accurate detection and localisation of prostate cancer with MRSI. 
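The two corrections the algorithm performs — a frequency shift and a zero-order phase rotation — can be sketched for a single pair of 1D complex spectra. This is a simplified stand-in for the published method: the shift is estimated by integer-point cross-correlation of magnitude spectra, and the phase by the angle of the inner product.

```python
import numpy as np

def align_frequency(ref, spec):
    """Estimate the integer-point frequency shift between two spectra by
    cross-correlating their magnitudes, then undo it."""
    xc = np.correlate(np.abs(spec), np.abs(ref), mode="full")
    shift = int(np.argmax(xc)) - (len(ref) - 1)
    return np.roll(spec, -shift), shift

def align_phase(ref, spec):
    """Zero-order phase correction: rotate spec so its phase matches ref."""
    phi = np.angle(np.vdot(ref, spec))  # phase of <ref, spec>
    return spec * np.exp(-1j * phi), phi
```

Applying the frequency correction first and the phase correction second, per voxel, mirrors the order in which the two artefact sources are described above.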
Copyright © 2011 John Wiley & Sons, Ltd.
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
Umari, P; Pasquarello, Alfredo
2005-09-23
We determine the fraction f of B atoms belonging to boroxol rings in vitreous boron oxide through a first-principles analysis. After generating a model structure of vitreous B2O3 by first-principles molecular dynamics, we address a large set of properties, including the neutron structure factor, the neutron density of vibrational states, the infrared spectra, the Raman spectra, and the 11B NMR spectra, and find overall good agreement with corresponding experimental data. From the analysis of the Raman and 11B NMR spectra, we consistently obtain for both probes a fraction f of approximately 0.75. This result indicates that the structure of vitreous boron oxide is largely dominated by boroxol rings.
Hara, Risa; Ishigaki, Mika; Kitahama, Yasutaka; Ozaki, Yukihiro; Genkawa, Takuma
2018-08-30
The difference in Raman spectra for different excitation wavelengths (532 nm, 785 nm, and 1064 nm) was investigated to identify an appropriate wavelength for the quantitative analysis of carotenoids in tomatoes. For the 532 nm-excited Raman spectra, the intensity of the peak assigned to the carotenoid has no correlation with carotenoid concentration, and the peak shift reflects carotenoid composition changing from lycopene to β-carotene and lutein. Thus, 532 nm-excited Raman spectra are useful for the qualitative analysis of carotenoids. For the 785 nm- and 1064 nm-excited Raman spectra, the peak intensity of the carotenoid showed good correlation with carotenoid concentration; thus, regression models for carotenoid concentration were developed using these Raman spectra and partial least squares regression. A regression model designed using the 785 nm-excited Raman spectra showed a better result than the 532 nm- and 1064 nm-excited Raman spectra. Therefore, it can be concluded that 785 nm is the most suitable excitation wavelength for the quantitative analysis of carotenoid concentration in tomatoes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Automatic Clustering Using FSDE-Forced Strategy Differential Evolution
NASA Astrophysics Data System (ADS)
Yasid, A.
2018-01-01
Clustering analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, known as automatic clustering. This study aims to acquire the cluster number automatically using forced strategy differential evolution (AC-FSDE). Two mutation parameters, a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm, and the result was compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than, or competitive with, other existing automatic clustering algorithms.
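The general recipe — using differential evolution to place cluster centers and a criterion to pick the number of clusters without user input — can be sketched with SciPy's stock differential evolution. The complexity penalty and the search over k are illustrative assumptions; this is not the FSDE variant or its forced strategy.

```python
import numpy as np
from scipy.optimize import differential_evolution

def sse(flat_centers, data, k):
    """Within-cluster sum of squared distances for k candidate centers."""
    centers = flat_centers.reshape(k, data.shape[1])
    d = np.linalg.norm(data[:, None, :] - centers[None], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def auto_cluster(data, k_max=5, seed=0):
    """Optimize centers with differential evolution for each k, then pick k
    by SSE plus a crude complexity penalty (an illustrative stand-in for the
    paper's automatic cluster-number selection)."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    best = None
    for k in range(2, k_max + 1):
        bounds = [(a, b) for a, b in zip(lo, hi)] * k
        res = differential_evolution(sse, bounds, args=(data, k),
                                     seed=seed, maxiter=150, tol=1e-6)
        score = res.fun + k * np.log(len(data))
        if best is None or score < best[0]:
            best = (score, k, res.x.reshape(k, data.shape[1]))
    return best[1], best[2]
```

On well-separated blobs the SSE stops improving much beyond the true k, so the penalized score bottoms out at the correct cluster number.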
Unassigned MS/MS Spectra: Who Am I?
Pathan, Mohashin; Samuel, Monisha; Keerthikumar, Shivakumar; Mathivanan, Suresh
2017-01-01
Recent advances in high-resolution tandem mass spectrometry (MS) have resulted in the accumulation of high-quality data. In parallel with these advances in instrumentation, bioinformatics software has been developed to analyze such datasets. In spite of these advances, data analysis in mass spectrometry remains critical for protein identification. In addition, the complexity of the generated MS/MS spectra, the unpredictable nature of peptide fragmentation, sequence annotation errors, and posttranslational modifications have impeded the protein identification process. In a typical MS data analysis, about 60% of the MS/MS spectra remain unassigned. While some of these can be attributed to low-quality MS/MS spectra, a proportion can be classified as high quality. Further analysis may reveal how much of the unassigned MS spectra can be attributed to search space, sequence annotation errors, mutations, and/or posttranslational modifications. In this chapter, the tools used to identify proteins and ways to assign unassigned tandem MS spectra are discussed.
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices), rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system demonstrated the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Later, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) according to dysphonia severity, validated by a preliminary statistical analysis.
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs, and heart are segmented by an auto-adapted threshold. Second, bone is automatically segmented and classified into spine, ribs, and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have potential utility in clinical use.
ERIC Educational Resources Information Center
Hamade, Rachel; Hewlett, Nigel; Scanlon, Emer
2006-01-01
This study aimed to evaluate a new automatic tracheostoma valve: the Provox FreeHands HME (manufactured by Atos Medical AB, Sweden). Data from four laryngectomee participants using automatic and also manual occlusion were subjected to acoustic and perceptual analysis. The main results were a significant decrease, from the manual to automatic…
Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I
2010-11-19
Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences, and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures, whereas SVD, in general, requires human assistance in selecting the components of interest.
The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label free microscopy techniques in real-time imaging of live immune cells.
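One plausible reading of the Z-LSR combination — z-score normalization of each pixel spectrum followed by a least-squares fit against a reference spectrum, with the fit residual providing image contrast — can be sketched as follows. This is an interpretation of the method's two named ingredients, not the authors' algorithm; using the image-mean spectrum as the reference is an assumption.

```python
import numpy as np

def z_lsr_contrast(hyperspectral):
    """Contrast map for an (h, w, bands) Raman cube: z-score each pixel
    spectrum, least-squares fit it to the image-mean spectrum, and keep the
    residual norm as contrast (a hedged sketch, not the published Z-LSR)."""
    h, w, b = hyperspectral.shape
    spectra = hyperspectral.reshape(-1, b).astype(float)
    # z-score normalization per pixel spectrum
    z = spectra - spectra.mean(axis=1, keepdims=True)
    z /= spectra.std(axis=1, keepdims=True) + 1e-12
    mean_spec = z.mean(axis=0)
    # least-squares scale of the reference spectrum for each pixel
    coef = z @ mean_spec / (mean_spec @ mean_spec + 1e-12)
    residual = z - np.outer(coef, mean_spec)
    return np.linalg.norm(residual, axis=1).reshape(h, w)
```

Pixels whose spectra deviate from the reference light up in the residual map, which is the kind of label-free, spectrum-driven contrast the passage describes, and every step is a vectorized linear operation, consistent with near-real-time operation.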
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic image analysis can replace the monotonous task of examining massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task uses MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend on genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The overlap of the automatic examination with the ground truth analysis given by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
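The sum of absolute differences (SAD) localization step mentioned above amounts to sliding a template over the image and keeping the position with the smallest absolute mismatch. A brute-force numpy sketch (the template and exhaustive search strategy here are illustrative, not the paper's):

```python
import numpy as np

def sad_localize(image, template):
    """Slide the template over the image; return the top-left position
    minimizing the sum of absolute differences (SAD), plus that SAD."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos, best
```

In practice the template would be an LV appearance patch and the search restricted to a region of interest; the exhaustive double loop is kept here for clarity.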
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Peng; Luo, Ali; Li, Yinbi
2014-05-01
The LAMOST spectral analysis pipeline, called the 1D pipeline, aims to classify and measure the spectra observed in the LAMOST survey. Through this pipeline, the observed stellar spectra are classified into different subclasses by matching with template spectra. Consequently, the performance of the stellar classification greatly depends on the quality of the template spectra. In this paper, we construct a new LAMOST stellar spectral classification template library, which is intended to improve the precision and credibility of the present LAMOST stellar classification. About one million spectra are selected from LAMOST Data Release One to construct the new stellar templates, and they are gathered in 233 groups by two criteria: (1) pseudo g - r colors obtained by convolving the LAMOST spectra with the Sloan Digital Sky Survey ugriz filter response curve, and (2) the stellar subclass given by the LAMOST pipeline. In each group, the template spectra are constructed in three steps. (1) Outliers are excluded using the Local Outlier Probabilities algorithm, and then the principal component analysis method is applied to the remaining spectra of each group; about 5% of the one million spectra are ruled out as outliers. (2) All remaining spectra are reconstructed using the first principal components of each group. (3) The weighted average spectrum is used as the template spectrum in each group. Using these three steps, we initially obtain 216 stellar template spectra. We visually inspect all template spectra, and 29 spectra are abandoned due to low spectral quality. Furthermore, the MK classification for the remaining 187 template spectra is manually determined by comparison with three template libraries. Meanwhile, 10 template spectra whose subclass is difficult to determine are abandoned. Finally, we obtain a new template library containing 183 LAMOST template spectra with 61 different MK classes by combining it with the current library.
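Steps (2) and (3) of the template construction — reconstructing each spectrum from the group's leading principal components and then averaging — can be sketched with a numpy SVD. Outlier removal (step 1) is omitted and the weighting scheme is an assumption; this is an illustration, not the LAMOST pipeline code.

```python
import numpy as np

def build_template(spectra, n_components=5, weights=None):
    """Reconstruct each spectrum in a group from the leading principal
    components, then return the (weighted) average as the group template."""
    X = np.asarray(spectra, dtype=float)
    mu = X.mean(axis=0)
    # PCA via SVD of the mean-centered matrix
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    pcs = Vt[:n_components]
    # project onto the leading components and map back
    recon = mu + (X - mu) @ pcs.T @ pcs
    w = np.ones(len(X)) if weights is None else np.asarray(weights, float)
    return (w[:, None] * recon).sum(axis=0) / w.sum()
```

Reconstructing through a few components suppresses per-spectrum noise before averaging, which is the point of doing PCA ahead of the weighted mean.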
Hofmann, Matthias J.; Koelsch, Patrick
2015-01-01
Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|2. A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297
Control of the TSU 2-m automatic telescope
NASA Astrophysics Data System (ADS)
Eaton, Joel A.; Williamson, Michael H.
2004-09-01
Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running linux and communicating over ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program to parse logfiles from the telescope and identify problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to get each year.
Automated Selection of Metal-Poor Stars in the Galaxy
NASA Astrophysics Data System (ADS)
Rhee, Jaehyon
2000-08-01
In this thesis I have developed algorithms for the efficient reduction and analysis of a large set of objective-prism data, and for the reliable selection of extremely metal-poor candidate stars in the Galaxy. Automated computer scans of the 308 photographic plates in the HK objective-prism / interference-filter survey of Beers and colleagues have been carried out with the Automatic Plate Measuring (APM) machine in Cambridge, England. Highly automated software tools have been developed in order to identify useful spectra and remove unusable spectra, to locate the positions of the Ca II H (3969 Å) and K (3933 Å) absorption lines, and to construct approximate continua. Equivalent widths of the Ca II H and K lines were then measured directly from these reduced spectra. A subset of 294,039 spectra from 87 of the HK survey plates (located within approximately 30 degrees of the South Galactic Pole) were extracted. Of these, 221,670 (75.4%) proved to be useful for subsequent analysis. I have explored new methodology, making use of an Artificial Neural Network (ANN) analysis approach, in order to select extremely metal-poor star candidates with high efficiency. The ANNs were trained to predict metallicity, [Fe/H], and to classify stars into 6 groups separated by temperature and metal abundance, based on two accurately measured parameters -- the de-reddened broadband (B-V)0 color for known HK survey stars with available photometric information, and the equivalent width of the Ca II K line in an 18 Å band, the K18 index, as measured from follow-up medium-resolution spectroscopy taken during the course of the HK survey. When provided with accurate input data, the trained networks were able to estimate [Fe/H] and to determine the class with high accuracy (with a robust estimated one-sigma scatter of SBI = 0.13 dex, and an overall correction rate of 91%). 
The ANN approach was then used in order to recover information on the K18 index and (B-V)0 color directly from the APM-extracted spectra. Trained networks fed with known colors, measured peak fluxes, and the raw fluxes of the low-resolution digital spectra were able to predict the K18 index with a one-sigma scatter in the range 1.2 < SBI < 1.4 Å, depending on the color and strength of the line. By feeding on calibrated, multiple-band, photographic measurements of apparent magnitudes, peak fluxes, and the fluxes of estimated continua of the extracted APM spectra, the trained networks were able to estimate (B-V)0 colors with a scatter in the range 0.13 < SBI < 0.16 magnitudes. From an application of the ANN approach, using the less accurate information obtained from the calibrated estimates of K18 and (B-V)0 colors, it still proved possible to obtain metal abundance estimates with a scatter of SBI = 0.78 dex, and to carry out classifications with an overall correction rate of 40%. By comparison with a large sample of known metal-poor stars, on the order of 60% of the candidates predicted to have a metallicity [Fe/H] < -2.0 indeed fell in this region of abundance (representing a three-fold improvement over the visual selection criteria previously employed in the HK survey). The recovery rate indicated that at least 30% of all such stars in our sample would be identified in a blind sampling, limited, for the most part, by the lack of accurate color information. Finally we report 481 extremely metal-poor star candidates in 10 plates of the HK survey, selected by our newly developed methodology.
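The calibration idea underlying the thesis — mapping a measured K18 equivalent width and de-reddened (B-V)0 color to a metallicity estimate — can be illustrated with a plain least-squares fit. This is a hedged sketch only: the thesis trains artificial neural networks, which capture nonlinearity this linear baseline ignores, and all data values below are synthetic.

```python
def fit_linear(X, y):
    # Ordinary least squares via the normal equations; each row of X is
    # (K18, (B-V)0) and y is [Fe/H]. An intercept column is prepended.
    A = [[1.0] + list(row) for row in X]
    n, d = len(A), len(A[0])
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(d)] for p in range(d)]
    b = [sum(A[i][p] * y[i] for i in range(n)) for p in range(d)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = M[r][col] / M[col][col]
            for k in range(col, d):
                M[r][k] -= f * M[col][k]
            b[r] -= f * b[col]
    coef = [0.0] * d
    for r in reversed(range(d)):
        coef[r] = (b[r] - sum(M[r][k] * coef[k] for k in range(r + 1, d))) / M[r][r]
    return coef  # [intercept, weight_K18, weight_color]

def predict_feh(coef, k18, bv0):
    return coef[0] + coef[1] * k18 + coef[2] * bv0
```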
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
ERIC Educational Resources Information Center
Okurut, Jeje Moses
2018-01-01
The impact of automatic promotion practice on students dropping out of Uganda's primary education was assessed using propensity score in difference in differences analysis technique. The analysis strategy was instrumental in addressing the selection bias problem, as well as biases arising from common trends over time, and permanent latent…
ImatraNMR: novel software for batch integration and analysis of quantitative NMR spectra.
Mäkelä, A V; Heikkilä, O; Kilpeläinen, I; Heikkinen, S
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to traditional quantitative 1D (1)H and (13)C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or few spectra, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing a large number of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs, such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; however, it has currently only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request. Copyright © 2011 Elsevier Inc. All rights reserved.
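The batch-integration workflow that ImatraNMR automates — integrating the same signal regions across many spectra and emitting CSV — can be sketched as follows. The region bounds, spectrum names, and data are invented for illustration; this is not the ImatraNMR implementation.

```python
import csv
import io

def integrate(ppm, intensity, lo, hi):
    # Trapezoidal integral of the signal over the ppm interval [lo, hi].
    total = 0.0
    for i in range(len(ppm) - 1):
        x0, x1 = ppm[i], ppm[i + 1]
        if lo <= min(x0, x1) and max(x0, x1) <= hi:
            total += 0.5 * (intensity[i] + intensity[i + 1]) * abs(x1 - x0)
    return total

def batch_report(spectra, regions):
    # spectra: {name: (ppm_axis, intensities)}; regions: {label: (lo, hi)}.
    # Returns a CSV table with one row per spectrum, one column per region.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["spectrum"] + list(regions))
    for name, (ppm, inten) in spectra.items():
        writer.writerow([name] + [integrate(ppm, inten, lo, hi)
                                  for lo, hi in regions.values()])
    return buf.getvalue()
```

The CSV output can then be loaded into a spreadsheet or Matlab for the downstream analysis the abstract mentions.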
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability level of different system topologies. We next propose a new methodology in which system topologies, constrained by a specified reliability level, are automatically generated; the path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
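The path-set analysis mentioned above can be illustrated with a small inclusion-exclusion computation over minimal path sets. The component names and reliabilities are hypothetical, and real MEA topologies are far larger; this only shows the basic estimator.

```python
from itertools import combinations

def system_reliability(path_sets, rel):
    # P(system works) = P(at least one minimal path set has all components up),
    # computed by inclusion-exclusion over the path sets.
    # path_sets: list of frozensets of component names; rel: {name: reliability}.
    total = 0.0
    n = len(path_sets)
    for k in range(1, n + 1):
        for combo in combinations(path_sets, k):
            comps = set().union(*combo)
            p = 1.0
            for c in comps:
                p *= rel[c]
            total += (-1) ** (k + 1) * p
    return total
```

For two redundant feeders "a" and "b", each with reliability 0.9, the path sets {a} and {b} give 0.9 + 0.9 - 0.81 = 0.99 — the familiar parallel-redundancy result that a topology generator would check against the required reliability level.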
NASA Astrophysics Data System (ADS)
Martin, T.; Drissen, L.; Joncas, G.
2015-09-01
SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12-arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software that has been designed entirely for this purpose. The data size (up to 68 GB for the larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 GB of raw data in less than 11 hours using 8 cores and 22.6 GB of RAM. It is based on a core framework (ORB) designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). These all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.
Automated biodosimetry using digital image analysis of fluorescence in situ hybridization specimens.
Castleman, K R; Schulze, M; Wu, Q
1997-11-01
Fluorescence in situ hybridization (FISH) of metaphase chromosome spreads is valuable for monitoring the radiation dose to circulating lymphocytes. At low dose levels, the number of cells that must be examined to estimate aberration frequencies is quite large. An automated microscope that can perform this analysis autonomously on suitably prepared specimens promises to make practical the large-scale studies that will be required for biodosimetry in the future. This paper describes such an instrument that is currently under development. We use metaphase specimens in which the five largest chromosomes have been hybridized with different-colored whole-chromosome painting probes. An automated multiband fluorescence microscope locates the spreads and counts the number of chromosome components of each color. Digital image analysis is used to locate and isolate the cells, count chromosome components, and estimate the proportions of abnormal cells. Cells exhibiting more than two chromosomal fragments in any color correspond to a clastogenic event. These automatically derived counts are corrected for statistical bias and used to estimate the overall rate of chromosome breakage. Overlap of fluorophore emission spectra prohibits isolation of the different chromosomes into separate color channels. Image processing effectively isolates each fluorophore to a single monochrome image, simplifying the task of counting chromosome fragments and reducing the error in the algorithm. Using proportion estimation, we remove the bias introduced by counting errors, leaving accuracy restricted by sample size considerations alone.
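The bias correction of automatically derived counts described above can be illustrated with the standard misclassification inversion: if the classifier's false-positive and false-negative rates are known, the observed aberrant-cell proportion can be inverted to an unbiased estimate of the true proportion. The rates below are invented, and this generic estimator is a sketch of the idea rather than the instrument's exact procedure.

```python
def corrected_proportion(p_obs, fp_rate, fn_rate):
    # The observed proportion mixes true positives and false positives:
    #   p_obs = p_true * (1 - fn_rate) + (1 - p_true) * fp_rate
    # Solving for p_true removes the counting-error bias.
    p_true = (p_obs - fp_rate) / (1.0 - fp_rate - fn_rate)
    return min(1.0, max(0.0, p_true))  # clamp to a valid proportion
```

With a true aberration rate of 2%, a 1% false-positive rate and a 10% false-negative rate, the raw observed rate is 2.78%; the inversion recovers 2%, leaving accuracy limited by sample size, as the abstract notes.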
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
SHARD - a SeisComP3 module for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Weber, B.; Becker, J.; Ellguth, E.; Henneberger, R.; Herrnkind, S.; Roessler, D.
2016-12-01
Monitoring building and structure response to strong earthquake ground shaking or human-induced vibrations in real time forms the backbone of modern structural health monitoring (SHM). Continuous data transmission, processing and analysis drastically reduces the time decision makers need to plan an appropriate response to possible damage of high-priority buildings and structures. SHARD is a web-browser-based module using the SeisComP3 framework to monitor the structural health of buildings and other structures by calculating standard engineering seismology parameters and checking their exceedance in real time. Thresholds can be defined, e.g. compliant with national building codes (IBC2000, DIN4149 or EC8), for PGA/PGV/PGD, response spectra and drift ratios. In case thresholds are exceeded, automatic or operator-driven reports are generated and sent to the decision makers. SHARD also determines waveform quality in terms of data delay and variance to report sensor status. SHARD is well suited for civil protection to simultaneously monitor multiple city-wide critical infrastructure such as hospitals, schools and governmental buildings, and structures such as bridges, dams and power substations.
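The core exceedance check can be sketched as follows: derive PGA directly from the acceleration trace, integrate once for PGV, and compare against configured thresholds. The threshold values and data layout are placeholders, not limits mandated by IBC2000/DIN4149/EC8, and SHARD itself computes many more parameters (PGD, response spectra, drift ratios).

```python
def peak_values(accel, dt):
    # PGA is the peak absolute acceleration; PGV is obtained by
    # trapezoidal integration of the acceleration trace.
    pga = max(abs(a) for a in accel)
    velocity, v = [0.0], 0.0
    for i in range(1, len(accel)):
        v += 0.5 * (accel[i - 1] + accel[i]) * dt
        velocity.append(v)
    pgv = max(abs(x) for x in velocity)
    return pga, pgv

def check_thresholds(accel, dt, limits):
    # limits: {"PGA": value, "PGV": value}; returns {param: (value, exceeded)}.
    pga, pgv = peak_values(accel, dt)
    report = {}
    if "PGA" in limits:
        report["PGA"] = (pga, pga > limits["PGA"])
    if "PGV" in limits:
        report["PGV"] = (pgv, pgv > limits["PGV"])
    return report
```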
Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi
2013-06-21
A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well.
NASA Astrophysics Data System (ADS)
Díaz-Ayil, Gilberto; Amouroux, Marine; Clanché, Fabien; Granjon, Yves; Blondel, Walter C. P. M.
2009-07-01
Spatially-resolved bimodal spectroscopy (multiple AutoFluorescence AF excitation and Diffuse Reflectance DR) was used in vivo to discriminate various healthy and precancerous skin stages in a pre-clinical model (UV-irradiated mouse): Compensatory Hyperplasia CH, Atypical Hyperplasia AH and Dysplasia D. A specific data preprocessing scheme was applied to intensity spectra (filtering, spectral correction and intensity normalization), and several sets of spectral characteristics were automatically extracted and selected based on their discrimination power, statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM) in order to compare the diagnostic performance of each method. Diagnostic performance was assessed in terms of Sensitivity (Se) and Specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fibre distances and of the number of principal components, such that: Se and Sp ~ 100% when discriminating CH vs. others; Sp ~ 100% and Se > 95% when discriminating Healthy vs. AH or D; Sp ~ 74% and Se ~ 63% for AH vs. D.
Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features.
Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate
2017-08-01
Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. We compared the performance of our classifiers with the visual classification results given by experts. The best result with an accuracy rate of 95% was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features
NASA Astrophysics Data System (ADS)
Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate
2017-08-01
Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result with an accuracy rate of 95% was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2015-01-01
The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of this concept, our study presents results obtained from eight bipolar patients during their normal daily activities and being elicited according to a specific emotional protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
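The Zernike-moment shape features used above can be computed directly from the standard definition on a square grayscale grid mapped onto the unit disk. This is a minimal sketch of the feature computation only; the paper's stepwise regression and quantitative models are not reproduced, and the grid mapping conventions are one common choice among several.

```python
import math

def radial_poly(n, m, rho):
    # Zernike radial polynomial R_nm(rho); requires n - |m| even and |m| <= n.
    m = abs(m)
    total = 0.0
    for k in range((n - m) // 2 + 1):
        total += ((-1) ** k * math.factorial(n - k)
                  / (math.factorial(k)
                     * math.factorial((n + m) // 2 - k)
                     * math.factorial((n - m) // 2 - k))) * rho ** (n - 2 * k)
    return total

def zernike_moment(img, n, m):
    # Discrete approximation of Z_nm = (n+1)/pi * integral over the unit disk
    # of f(rho, theta) * R_nm(rho) * exp(-i*m*theta).
    N = len(img)
    re = im = 0.0
    area = (2.0 / N) ** 2  # pixel area in unit-disk coordinates
    for yi in range(N):
        for xi in range(N):
            x = (2 * xi + 1) / N - 1.0
            y = (2 * yi + 1) / N - 1.0
            rho = math.hypot(x, y)
            if rho > 1.0:
                continue  # pixels outside the unit disk are ignored
            theta = math.atan2(y, x)
            r = radial_poly(n, m, rho) * img[yi][xi]
            re += r * math.cos(m * theta)
            im -= r * math.sin(m * theta)
    c = (n + 1) / math.pi * area
    return c * re, c * im  # real and imaginary parts of Z_nm
```

The magnitude |Z_nm| is rotation-invariant, which is the "inherent invariance property" that makes these moments usable as region-based shape features of the 3D HPLC-DAD grayscale image.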
Seamless presentation capture, indexing, and management
NASA Astrophysics Data System (ADS)
Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel
2005-10-01
Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
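The media-analysis step — segmenting the continuous RGB capture into slides — can be approximated with a simple frame-difference heuristic. The threshold and flattened-grayscale frame format are illustrative assumptions; ProjectorBox's actual content detection is more sophisticated than this sketch.

```python
def segment_slides(frames, threshold=0.1):
    # frames: list of equal-length flattened grayscale pixel lists.
    # A new slide is declared whenever the mean absolute pixel
    # difference between consecutive frames exceeds the threshold.
    changes = [0]  # the first frame always starts a segment
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            changes.append(i)
    return changes
```

Each returned index marks a captured slide boundary, which is what drives the browsing and search interface described above.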
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically to spiral, elliptical and edge-on galaxies with accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
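The classification rule just described — Fisher scores as feature weights inside a weighted nearest-neighbor classifier — can be sketched directly. The two-feature toy data below is invented; the actual algorithm first extracts a large set of image features.

```python
def fisher_scores(X, y):
    # Per-feature Fisher score: between-class scatter over within-class scatter.
    classes = sorted(set(y))
    d = len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mean = sum(col) / len(col)
        num = den = 0.0
        for c in classes:
            vals = [col[i] for i in range(len(y)) if y[i] == c]
            mc = sum(vals) / len(vals)
            num += len(vals) * (mc - mean) ** 2
            den += sum((v - mc) ** 2 for v in vals)
        scores.append(num / den if den > 0 else 0.0)
    return scores

def wnn_classify(X, y, weights, query):
    # Weighted nearest neighbor: Fisher scores weight the squared distance,
    # so informative features dominate the match.
    best, label = float("inf"), None
    for row, lab in zip(X, y):
        dist = sum(w * (a - b) ** 2 for w, a, b in zip(weights, row, query))
        if dist < best:
            best, label = dist, lab
    return label
```

In the toy example below, feature 0 separates the classes while feature 1 is noise; the Fisher weighting makes the noisy feature nearly irrelevant to the final distance.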
NASA Technical Reports Server (NTRS)
Shook, D. F.; Pierce, C. R.
1972-01-01
Proton recoil distributions were obtained by using organic liquid scintillators of different sizes. The measured distributions are converted to neutron spectra by differentiation analysis for comparison to the unfolded spectra of the largest scintillator. The approximations involved in the differentiation analysis are indicated to have small effects on the precision of neutron spectra measured with the smaller scintillators but introduce significant error for the largest scintillator. In the case of the smallest cylindrical scintillator, nominally 1.2 by 1.3 cm, the efficiency is shown to be insensitive to multiple scattering and to the angular distribution of the incident flux. These characteristics of the smaller scintillator make possible its use to measure scalar flux spectra within media where high efficiency is not required.
Sorg, Nadine; Poppe, Carolin; Bunos, Milica; Wingenfeld, Eva; Hümmer, Christiane; Krämer, Ariane; Stock, Belinda; Seifried, Erhard; Bonig, Halvard
2015-06-01
Red blood cell (RBC) depletion is a standard technique for preparation of ABO-incompatible bone marrow transplants (BMTs). Density centrifugation or apheresis are used successfully at clinical scale. The advent of a bone marrow (BM) processing module for the Spectra Optia (Terumo BCT) provided the initiative to formally compare our standard technology, the COBE2991 (Ficoll, manual, "C") with the Spectra Optia BMP (apheresis, semiautomatic, "O"), the Sepax II NeatCell (Ficoll, automatic, "S"), the Miltenyi CliniMACS Prodigy density gradient separation system (Ficoll, automatic, "P"), and manual Ficoll ("M"). C and O handle larger product volumes than S, P, and M. Technologies were assessed for RBC depletion, target cell (mononuclear cells [MNCs] for buffy coats [BCs], CD34+ cells for BM) recovery, and cost/labor. BC pools were simultaneously purged with C, O, S, and P; five to 18 BM samples were sequentially processed with C, O, S, and M. Mean RBC removal with C was 97% (BCs) or 92% (BM). From both products, O removed 97%, and P, S, and M removed 99% of RBCs. MNC recovery from BC (98% C, 97% O, 65% P, 74% S) or CD34+ cell recovery from BM (92% C, 90% O, 67% S, 70% M) were best with C and O. Polymorphonuclear cells (PMNs) were depleted from BCs by P, S, and C, while O recovered 50% of PMNs. Time savings compared to C or M for all tested technologies are considerable. All methods are in principle suitable and can be selected based on sample volume, available technology, and desired product specifications beyond RBC depletion and MNC and/or CD34+ cell recovery. © 2015 AABB.
Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong
2016-01-01
Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012; 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh postnatal depression scale, stresses of pregnancy were measured by the pregnancy pressure scale, negative automatic thoughts were measured by the automatic thoughts questionnaire, and negative life events were measured by the life events scale for pregnant women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic and health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the eventual direct and general effects of negative automatic thoughts were 0.39 and 0.51, which were larger than the effects of negative life events. This study suggested a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower negative automatic thought scores were less likely to be affected by negative life events that might lead to antenatal depression.
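The product-of-coefficients logic behind a mediation test like this can be sketched with two least-squares slopes: the effect of events on thoughts (path a) times the effect of thoughts on depression (path b) gives the indirect effect. This is a deliberately simplified, unadjusted sketch with synthetic data, not the covariate-adjusted path analysis used in the study.

```python
def slope(x, y):
    # Ordinary least-squares slope of y regressed on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def indirect_effect(events, thoughts, depression):
    # Path a: negative life events -> negative automatic thoughts.
    # Path b: negative automatic thoughts -> depressive symptoms.
    # The indirect (mediated) effect is the product a * b.
    a = slope(events, thoughts)
    b = slope(thoughts, depression)
    return a * b
```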
NASA Technical Reports Server (NTRS)
Goldman, A.; Murcray, F. J.; Rinsland, C. P.; Blatherwick, R. D.; Murcray, F. H.; Murcray, D. G.
1991-01-01
Results of ongoing studies of high-resolution solar absorption spectra aimed at the identification and quantification of trace constituents of importance in the chemistry of the stratosphere and upper troposphere are presented. An analysis of balloon-borne and ground-based spectra obtained at 0.0025/cm resolution covering the 700-2200/cm interval is presented. The 0.0025/cm spectra, along with corresponding laboratory spectra, improve the spectral line parameters, and thus the accuracy of quantifying trace constituents. Results for COF2, F22, SF6, and other species are presented. The retrieval methods used for total column density and altitude distribution for both ground-based and balloon-borne spectra are also discussed.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
A new fuzzy-set-based technique developed for decision making is discussed: a method to generate fuzzy decision rules automatically for image analysis. From training data, the method automatically generates rule-based approaches to problems such as autonomous navigation and image understanding. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-11-30
The PeakWorks software is designed to assist in the quantitative analysis of atom probe tomography (APT) generated mass spectra. Specifically, through an interactive user interface, mass peaks can be identified automatically (defined by a threshold) and/or manually. The software then provides a means to assign specific elemental isotopes (including more than one) to each peak. It also provides a means for the user to choose a background subtraction for each peak based on background fitting functions, the choice of which is left to the user's discretion. Peak ranging (the mass range over which peaks are integrated) is also automated, allowing the user to choose a quantitative range (e.g., full-width half-maximum). The software then integrates all identified peaks, providing a background-subtracted composition that includes the deconvolution of peaks (i.e., those peaks that happen to have overlapping isotopic masses). The software can also output a 'range file' usable in other software packages, such as IVAS. A range file lists the peak identities, the mass range of each identified peak, and a color code for the peak. The software can also generate 'dummy' peak ranges within an outputted range file that can be used within IVAS to provide a means for background-subtracted proximity histogram analysis.
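The core ranging-and-integration step, subtracting a fitted background and integrating over a chosen mass range, can be sketched as follows. The spectrum, peak position, and linear background model are all synthetic; this is not PeakWorks code:

```python
import numpy as np

# Synthetic mass spectrum: a Gaussian peak of known area (100) sitting on
# a linear background.
mz = np.linspace(27.0, 29.0, 401)
peak = 100.0 / (0.05 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((mz - 28.0) / 0.05) ** 2)
counts = peak + 50.0 + 10.0 * (mz - 27.0)  # peak + background

def integrate_peak(mz, counts, lo, hi, bg_pts=10):
    """Integrate counts in [lo, hi] after subtracting a linear background
    fitted to `bg_pts` samples on either side of the range."""
    in_range = (mz >= lo) & (mz <= hi)
    i0, i1 = np.flatnonzero(in_range)[[0, -1]]
    side = np.r_[np.arange(i0 - bg_pts, i0), np.arange(i1 + 1, i1 + 1 + bg_pts)]
    bg_fit = np.polyfit(mz[side], counts[side], 1)   # linear background model
    signal = counts[in_range] - np.polyval(bg_fit, mz[in_range])
    mzr = mz[in_range]
    # trapezoidal integration of the background-subtracted signal
    return float(np.sum(0.5 * (signal[1:] + signal[:-1]) * np.diff(mzr)))

area = integrate_peak(mz, counts, 27.8, 28.2)   # recovers the peak area
```

The recovered area is close to the known Gaussian area of 100, since the +/-0.2 Da range spans about four standard deviations of the synthetic peak.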
Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score
Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly
2013-01-01
Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
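Automatic SOFA calculation is ultimately a set of threshold rules applied to EMR values. As an illustration, here is one of the six SOFA components (coagulation), using the standard published platelet cut-offs; this is a sketch, not the authors' AWARE implementation:

```python
def sofa_coagulation(platelets_k_per_uL: float) -> int:
    """Coagulation sub-score of the SOFA score.

    Input is the platelet count in 10^3/uL; the cut-offs below are the
    standard published SOFA thresholds for this component.
    """
    if platelets_k_per_uL < 20:
        return 4
    if platelets_k_per_uL < 50:
        return 3
    if platelets_k_per_uL < 100:
        return 2
    if platelets_k_per_uL < 150:
        return 1
    return 0
```

The full score sums six such components (respiration, coagulation, liver, cardiovascular, central nervous system, renal), which is why an EMR-driven calculator can replace manual chart review once the source data elements are mapped.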
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
[Ultrastructure and Raman Spectral Characteristics of Two Kinds of Acute Myeloid Leukemia Cells].
Liang, Hao-Yue; Cheng, Xue-Lian; Dong, Shu-Xu; Zhao, Shi-Xuan; Wang, Ying; Ru, Yong-Xin
2018-02-01
To investigate the Raman spectral characteristics of leukemia cells from 4 patients with acute promyelocytic leukemia (M3) and 3 patients with acute monoblastic leukemia (M5), and to establish a novel label-free Raman method to distinguish the 2 kinds of acute myeloid leukemia cells so as to provide a basis for clinical research. Leukemia cells were collected from the bone marrow of the above-mentioned patients. Raman spectra were acquired with a Horiba Xplora Raman spectrometer, and spectra of 30-50 cells from each patient were recorded. A diagnostic model was established using principal component analysis (PCA), discriminant function analysis (DFA), and cluster analysis, and the spectra of leukemia cells from the 7 patients were analyzed and classified. Characteristics of the Raman spectra were analyzed in combination with the ultrastructure of the leukemia cells. There were significant differences between the Raman spectra of the 2 kinds of leukemia cells. Compared with acute monoblastic leukemia cells, the spectra of acute promyelocytic leukemia cells showed stronger peaks at 622, 643, 757, 852, 1003, 1033, 1117, 1157, 1173, 1208, 1340, 1551, and 1581 cm-1. The diagnostic models established by PCA-DFA and cluster analysis successfully classified the Raman spectra of the different samples with a high accuracy of 100% (233/233). The model was evaluated by "leave-one-out" cross-validation and reached a high accuracy of 97% (226/233). The level of macromolecules in M3 cells is higher than that in M5 cells. The diagnostic models established by PCA-DFA can classify the Raman spectra of different cells with high accuracy, and the Raman spectra show results consistent with the ultrastructure observed by TEM.
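The PCA-plus-discriminant pipeline can be sketched on synthetic "Raman spectra". The two classes below differ only in the height of a single band (placed at 1003 cm-1 purely for illustration), PCA is done by SVD, and a nearest-centroid rule stands in for DFA; none of this is the authors' code or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spectra: two classes differ in the height of one band.
wn = np.linspace(600, 1700, 300)                # wavenumber axis (cm^-1)
band = np.exp(-0.5 * ((wn - 1003) / 8) ** 2)    # a single Gaussian band

def make_class(height, n=30):
    return height * band + 0.05 * rng.normal(size=(n, wn.size))

X = np.vstack([make_class(1.0), make_class(0.4)])   # 30 spectra per class
y = np.array([0] * 30 + [1] * 30)

# PCA by SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                  # first 3 principal components

# Nearest-centroid rule in PC space (a simple stand-in for DFA).
centroids = np.array([scores[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Because the band-height difference dwarfs the channel noise, the first principal component separates the classes cleanly, loosely mirroring the 100% classification accuracy reported above.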
High-throughput microcoil NMR of compound libraries using zero-dispersion segmented flow analysis.
Kautz, Roger A; Goetzinger, Wolfgang K; Karger, Barry L
2005-01-01
An automated system for loading samples into a microcoil NMR probe has been developed using segmented flow analysis. This approach doubled the throughput of the published direct injection and flow injection methods, improved sample utilization 3-fold, and was applicable to high-field NMR facilities with long transfer lines between the sample handler and NMR magnet. Sample volumes of 2 microL (10-30 mM, approximately 10 microg) were drawn from a 96-well microtiter plate by a sample handler, then pumped to a 0.5-microL microcoil NMR probe as a queue of closely spaced "plugs" separated by an immiscible fluorocarbon fluid. Individual sample plugs were detected by their NMR signal and automatically positioned for stopped-flow data acquisition. The sample in the NMR coil could be changed within 35 s by advancing the queue. The fluorocarbon liquid wetted the wall of the Teflon transfer line, preventing the DMSO samples from contacting the capillary wall and thus reducing sample losses to below 5% after passage through the 3-m transfer line. With a wash plug of solvent between samples, sample-to-sample carryover was <1%. Significantly, the samples did not disperse into the carrier liquid during loading or during acquisitions of several days for trace analysis. For automated high-throughput analysis using a 16-second acquisition time, spectra were recorded at a rate of 1.5 min/sample and total deuterated solvent consumption was <0.5 mL (1 US dollar) per 96-well plate.
Analysis and Comparison of Some Automatic Vehicle Monitoring Systems
DOT National Transportation Integrated Search
1973-07-01
In 1970 UMTA solicited proposals and selected four companies to develop systems to demonstrate the feasibility of different automatic vehicle monitoring techniques. The demonstrations culminated in experiments in Philadelphia to assess the performanc...
Hanson, M.L.; Tabor, C.D. Jr.
1961-12-01
A mass spectrometer for analyzing the components of a gas is designed which is capable of continuous automatic operation such as analysis of samples of process gas from a continuous production system where the gas content may be changing. (AEC)
NASA Astrophysics Data System (ADS)
Mármol-Queraltó, E.; Sánchez, S. F.; Marino, R. A.; Mast, D.; Viironen, K.; Gil de Paz, A.; Iglesias-Páramo, J.; Rosales-Ortega, F. F.; Vilchez, J. M.
2011-10-01
Aims: Integral field spectroscopy (IFS) is a powerful approach to studying nearby galaxies since it enables a detailed analysis of their resolved physical properties. Here we present our study of a sample of nearby galaxies selected to exploit the two-dimensional information provided by the IFS. Methods: We observed a sample of 48 galaxies from the local universe with the PPaK integral field spectroscopy unit (IFU), of the PMAS spectrograph, mounted at the 3.5 m telescope at Calar Alto Observatory (Almeria, Spain). Two different setups were used during these studies (low, V300, and medium, V600, resolution modes) covering a spectral range of around 3700-7000 Å. We developed a fully automatic pipeline for the data reduction, which includes an analysis of the quality of the final data products. We applied a decoupling method to obtain the ionised gas and stellar content of these galaxies, and derive the main physical properties of the galaxies. To assess the accuracy in the measurements of the different parameters, we performed a set of simulations to derive the expected relative errors obtained with these data. In addition, we extracted spectra for two types of aperture, one central and another integrated over the entire galaxy, from the datacubes. The main properties of the stellar populations and ionised gas of these galaxies and an estimate of their relative errors are derived from those spectra, as well as from the whole datacubes. Results: We compare the central spectrum extracted from our datacubes and the SDSS spectrum for each of the galaxies for which this is possible, and find close agreement between the derived values for both samples. We find differences in the properties of galaxies when comparing central and integrated spectra, showing the effects of the extracted aperture on the interpretation of the data. Finally, we present two-dimensional maps of some of the main properties derived with the decoupling procedure.
Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC).
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.
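The geometric idea behind the method, that for concentric arcs the dominant local gradient direction points along the radius, so lines drawn through each pixel in that direction all meet at the centre, can be illustrated with a least-squares "vote" instead of the paper's structure-tensor-plus-Fast-Hough pipeline. This is a simplified stand-in, on a synthetic image:

```python
import numpy as np

# Synthetic image of concentric arcs centred at (64, 64).
yy, xx = np.mgrid[0:128, 0:128].astype(float)
r = np.hypot(yy - 64.0, xx - 64.0)
img = np.sin(0.5 * r)

# Image gradients: for concentric arcs the gradient at each pixel points
# along the radius, so the line through each pixel in its gradient
# direction passes (approximately) through the centre.
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
mask = (mag > 0.1).ravel()          # vote only with strong gradients

# Each pixel p with gradient g contributes the line constraint
#   g_y * c_x - g_x * c_y = g_y * p_x - g_x * p_y
# and the centre c is recovered as the least-squares intersection.
A = np.column_stack([gy.ravel()[mask], -gx.ravel()[mask]])
b = gy.ravel()[mask] * xx.ravel()[mask] - gx.ravel()[mask] * yy.ravel()[mask]
center, *_ = np.linalg.lstsq(A, b, rcond=None)   # ~ (64, 64)
```

A Hough-style voting scheme, as used in the paper, is more robust than this least-squares version when the image also contains non-concentric structure, since outlier gradients then merely add isolated votes rather than biasing a global fit.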
Toward automatic finite element analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Perucchio, Renato; Voelcker, Herbert
1987-01-01
Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when a particle strikes critical areas, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effects influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation, and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Akhtar, Naveed; Mian, Ajmal
2017-10-03
We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size, the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.
Statistical properties of Fermi GBM GRBs' spectra
NASA Astrophysics Data System (ADS)
Rácz, István I.; Balázs, Lajos G.; Horvath, Istvan; Tóth, L. Viktor; Bagoly, Zsolt
2018-03-01
Statistical studies of gamma-ray burst (GRB) spectra may result in important information on the physics of GRBs. The Fermi GBM catalogue contains GRB parameters (peak energy, spectral indices, and intensity) estimated by fitting the gamma-ray spectral energy distribution of the total emission (fluence, flnc), and during the time of the peak flux (pflx). Using contingency tables, we studied the relationship of the models best fitting the pflx and flnc time intervals. Our analysis revealed an ordering of the spectra into a power law - Comptonized - smoothly broken power law - Band series. This result was further supported by a correspondence analysis of the pflx and flnc spectral categorical variables. We performed a linear discriminant analysis (LDA) to find a relationship between the categorical (spectral) and model-independent physical data. LDA revealed highly significant physical differences among the spectral types, which are more pronounced for the pflx spectra than for the flnc spectra. We interpreted this difference as caused by the temporal variation of the spectrum during the outburst. This spectral variability is confirmed by the differences in the low-energy spectral index and peak energy between the pflx and flnc spectra. We found that synchrotron radiation is significant in GBM spectra. The mean low-energy spectral index is close to the canonical value of α = -2/3 during the peak flux. However, α ≈ -0.9 for the fluence spectra. We interpret this difference as showing that the effect of cooling is important only for the fluence spectra.
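The contingency-table step above amounts to a chi-square test of independence between the best-fitting pflx model and the best-fitting flnc model. The counts below are invented purely to illustrate the computation and are not taken from the GBM catalogue:

```python
import numpy as np

# Hypothetical contingency table of best-fitting models for the pflx (rows)
# and flnc (columns) spectra: PL, COMP, SBPL, BAND.
table = np.array([
    [310,  90,  5,  3],
    [ 80, 400, 40, 20],
    [  4,  35, 60, 25],
    [  2,  18, 30, 95],
], dtype=float)

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
n = table.sum()

expected = row @ col / n                            # independence model
chi2 = ((table - expected) ** 2 / expected).sum()   # test statistic
dof = (table.shape[0] - 1) * (table.shape[1] - 1)   # 9 degrees of freedom
```

A strongly diagonal table, as sketched here, yields a chi-square statistic far above any conventional critical value for 9 degrees of freedom, i.e. the pflx and flnc model assignments are far from independent.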
Kokaly, Raymond F.
2011-01-01
This report describes procedures for installing and using the U.S. Geological Survey Processing Routines in IDL for Spectroscopic Measurements (PRISM) software. PRISM provides a framework to conduct spectroscopic analysis of measurements made using laboratory, field, airborne, and space-based spectrometers. Using PRISM functions, the user can compare the spectra of materials of unknown composition with reference spectra of known materials. This spectroscopic analysis allows the composition of the material to be identified and characterized. Among its other functions, PRISM contains routines for the storage of spectra in database files, import/export of ENVI spectral libraries, importation of field spectra, correction of spectra to absolute reflectance, arithmetic operations on spectra, interactive continuum removal and comparison of spectral features, correction of imaging spectrometer data to ground-calibrated reflectance, and identification and mapping of materials using spectral feature-based analysis of reflectance data. This report provides step-by-step instructions for installing the PRISM software and running its functions.
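Among the PRISM functions listed above, interactive continuum removal rests on a standard spectroscopy operation: divide the reflectance spectrum by its upper convex hull (the "continuum"), so absorption features stand out as values below 1.0. A plain-numpy sketch of that operation (not PRISM code):

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull.

    The hull is built with a monotone-chain scan that drops any point
    lying below the chord of its neighbours, leaving the continuum.
    """
    hull = []
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)
    return refl / continuum

# Toy 5-channel spectrum with an absorption feature in the middle.
wl = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
refl = np.array([0.8, 0.6, 0.5, 0.7, 0.9])
cr = continuum_removed(wl, refl)
```

After division, the hull endpoints map to 1.0 and the absorption feature shows as a dip below 1.0, which is what makes feature depth and shape comparable across spectra.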
SP_Ace: a new code to derive stellar parameters and elemental abundances
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2016-03-01
Context. Ongoing and future massive spectroscopic surveys will collect large numbers (106-107) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters. 
A simple Web front end of SP_Ace can be found at http://dc.g-vo.org/SP_ACE while the source code will be published soon. Full Tables D.1-D.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A2
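The GCOG idea described above, tabulating each line's EW as a polynomial in the stellar parameters and then searching for the parameters whose model EWs minimize chi-square against the observed ones, can be sketched in one dimension (Teff only). All line strengths and coefficients below are invented; this is not SP_Ace code:

```python
import numpy as np

# Toy "GCOG-style" library: the EW of each line is modelled as a polynomial
# in Teff alone (SP_Ace fits Teff, log g, [M/H], and abundances).
teff_grid = np.linspace(4000.0, 7000.0, 31)

def true_ew(line_strength, teff):
    """Invented smooth EW(Teff) relation, in mA."""
    d = 5500.0 - teff
    return line_strength * (1.0 + 8e-4 * d + 2e-8 * d ** 2)

lines = [20.0, 45.0, 80.0]   # arbitrary line strengths (mA)
library = [np.polyfit(teff_grid, true_ew(s, teff_grid), 2) for s in lines]

# "Observed" EWs for an unknown star at Teff = 5200 K, with noise.
rng = np.random.default_rng(2)
sigma = 0.3
obs = np.array([true_ew(s, 5200.0) for s in lines])
obs = obs + rng.normal(scale=sigma, size=len(lines))

# Grid search for the Teff whose library EWs minimize chi-square.
chi2 = [((obs - np.array([np.polyval(c, t) for c in library])) ** 2
         / sigma ** 2).sum() for t in teff_grid]
best_teff = teff_grid[int(np.argmin(chi2))]
```

The precomputed polynomial library is what makes the search cheap: each trial parameter set costs only polynomial evaluations, not a synthetic-spectrum computation.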
Chalkley, Robert J; Baker, Peter R; Hansen, Kirk C; Medzihradszky, Katalin F; Allen, Nadia P; Rexach, Michael; Burlingame, Alma L
2005-08-01
An in-depth analysis of a multidimensional chromatography-mass spectrometry dataset acquired on a quadrupole selecting, quadrupole collision cell, time-of-flight (QqTOF) geometry instrument was carried out. A total of 3269 CID spectra were acquired. Through manual verification of database search results and de novo interpretation of spectra, 2368 spectra could be confidently assigned to predicted tryptic peptides. A detailed analysis of the non-matching spectra was also carried out, highlighting what the non-matching spectra in a database search are typically composed of. The results of this comprehensive dataset study demonstrate that QqTOF instruments produce information-rich data, a high percentage of which is readily interpretable.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
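The junction-enhancement step above relies on an order-statistic filter. A minimal sliding-window sketch is given below; in practice a library routine such as SciPy's `scipy.ndimage.rank_filter` would be used, and this is not the authors' implementation:

```python
import numpy as np

def order_statistic_filter(img, size=3, rank=-1):
    """Sliding-window order-statistic filter.

    rank=-1 selects the window maximum (a dilation-like filter);
    rank=size*size//2 selects the window median. Edges are padded by
    replicating border pixels.
    """
    h, w = img.shape
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.sort(window)[rank]
    return out

# Demo: a single bright pixel spreads under the max filter but is
# rejected entirely by the median filter.
spot = np.zeros((5, 5))
spot[2, 2] = 1.0
dilated = order_statistic_filter(spot, size=3, rank=-1)   # max filter
denoised = order_statistic_filter(spot, size=3, rank=4)   # median filter
```

This pair of behaviours is why order-statistic filters suit fluorescence images: a max-like rank boosts thin bright junctions, while the median suppresses isolated noise without blurring edges.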
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
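The efficiency argument for progressive sampling can be illustrated with a successive-halving-style loop over hypothetical candidate configurations: evaluate all survivors on a growing sample and discard the worse half each round, so most compute goes to promising candidates evaluated on the most data. The paper's method additionally uses Bayesian optimization, which this sketch does not reproduce:

```python
import random

random.seed(0)

# Toy "training" data: the label is (x > 0.5), flipped with 10% noise.
xs = [random.random() for _ in range(4000)]
data = [(x, (x > 0.5) != (random.random() < 0.1)) for x in xs]

# Candidate "configurations": threshold classifiers with different cut-offs.
configs = [0.1, 0.3, 0.5, 0.7, 0.9]

def error(threshold, sample):
    return sum(((x > threshold) != y) for x, y in sample) / len(sample)

# Progressive sampling with successive halving.
survivors, n = list(configs), 250
while len(survivors) > 1:
    sample = data[:n]
    survivors.sort(key=lambda c: error(c, sample))
    survivors = survivors[: max(1, len(survivors) // 2)]
    n *= 2                      # survivors earn a larger evaluation sample
best = survivors[0]
```

Even with only 250 samples in the first round, the clearly bad cut-offs are eliminated cheaply, and the full-data evaluation budget is spent on the finalists, which is the core of the search-time savings the abstract reports.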
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.
1981-01-01
The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.
Paediatric Automatic Phonological Analysis Tools (APAT).
Saraiva, Daniela; Lousada, Marisa; Hall, Andreia; Jesus, Luis M T
2017-12-01
To develop the pediatric Automatic Phonological Analysis Tools (APAT) and to estimate inter and intrajudge reliability, content validity, and concurrent validity. The APAT were constructed using Excel spreadsheets with formulas. The tools were presented to an expert panel for content validation. The corpus used in the Portuguese standardized test Teste Fonético-Fonológico - ALPE produced by 24 children with phonological delay or phonological disorder was recorded, transcribed, and then inserted into the APAT. Reliability and validity of APAT were analyzed. The APAT present strong inter- and intrajudge reliability (>97%). The content validity was also analyzed (ICC = 0.71), and concurrent validity revealed strong correlations between computerized and manual (traditional) methods. The development of these tools contributes to fill existing gaps in clinical practice and research, since previously there were no valid and reliable tools/instruments for automatic phonological analysis, which allowed the analysis of different corpora.
Automatic Conflict Detection on Contracts
NASA Astrophysics Data System (ADS)
Fenech, Stephen; Pace, Gordon J.; Schneider, Gerardo
Many software applications are based on collaborating, yet competing, agents or virtual organisations exchanging services. Contracts, expressing obligations, permissions and prohibitions of the different actors, can be used to protect the interests of the organisations engaged in such service exchange. However, the potentially dynamic composition of services with different contracts, and the combination of service contracts with local contracts can give rise to unexpected conflicts, exposing the need for automatic techniques for contract analysis. In this paper we look at automatic analysis techniques for contracts written in the contract language CL. We present a trace semantics of CL suitable for conflict analysis, and a decision procedure for detecting conflicts (together with its proof of soundness, completeness and termination). We also discuss its implementation and look into the applications of the contract analysis approach we present. These techniques are applied to a small case study of an airline check-in desk.
Optical Automatic Car Identification (OACI) : Volume 1. Advanced System Specification.
DOT National Transportation Integrated Search
1978-12-01
A performance specification is provided in this report for an Optical Automatic Car Identification (OACI) scanner system which features 6% improved readability over existing industry scanner systems. It also includes the analysis and rationale which ...
Disentangling Time-series Spectra with Gaussian Processes: Applications to Radial Velocity Analysis
NASA Astrophysics Data System (ADS)
Czekala, Ian; Mandel, Kaisey S.; Andrews, Sean M.; Dittmann, Jason A.; Ghosh, Sujit K.; Montet, Benjamin T.; Newton, Elisabeth R.
2017-05-01
Measurements of radial velocity variations from the spectroscopic monitoring of stars and their companions are essential for a broad swath of astrophysics; these measurements provide access to the fundamental physical properties that dictate all phases of stellar evolution and facilitate the quantitative study of planetary systems. The conversion of those measurements into both constraints on the orbital architecture and individual component spectra can be a serious challenge, however, especially for extreme flux ratio systems and observations with relatively low sensitivity. Gaussian processes define sampling distributions of flexible, continuous functions that are well-motivated for modeling stellar spectra, enabling proficient searches for companion lines in time-series spectra. We introduce a new technique for spectral disentangling, where the posterior distributions of the orbital parameters and intrinsic, rest-frame stellar spectra are explored simultaneously without needing to invoke cross-correlation templates. To demonstrate its potential, this technique is deployed on red-optical time-series spectra of the mid-M-dwarf binary LP661-13. We report orbital parameters with improved precision compared to traditional radial velocity analysis and successfully reconstruct the primary and secondary spectra. We discuss potential applications for other stellar and exoplanet radial velocity techniques and extensions to time-variable spectra. The code used in this analysis is freely available as an open-source Python package.
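The premise that a Gaussian process with a smooth kernel is a well-motivated prior for a stellar spectrum can be illustrated with ordinary GP regression on a toy "spectrum". This is a sketch of the GP machinery only, not the paper's disentangling model (which also infers orbital parameters jointly):

```python
import numpy as np

rng = np.random.default_rng(3)

def sq_exp(a, b, length=0.5, amp=1.0):
    """Squared-exponential (RBF) kernel matrix between point sets a, b."""
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Noisy samples of a toy smooth "spectrum".
f = lambda x: np.sin(x) + 0.3 * np.sin(3.3 * x)
x_obs = np.linspace(0.0, 10.0, 40)
noise = 0.05
y_obs = f(x_obs) + noise * rng.normal(size=x_obs.size)

# GP posterior mean on a fine grid, conditioning on the noisy samples.
x_new = np.linspace(0.0, 10.0, 200)
K = sq_exp(x_obs, x_obs) + noise ** 2 * np.eye(x_obs.size)
K_s = sq_exp(x_new, x_obs)
mean = K_s @ np.linalg.solve(K, y_obs)

rmse = np.sqrt(np.mean((mean - f(x_new)) ** 2))   # close to the noise level
```

Because the GP defines a distribution over continuous functions, the same construction lets the disentangling method represent each component's rest-frame spectrum without committing to a cross-correlation template.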
NASA Astrophysics Data System (ADS)
Topping, David O.; Allan, James; Rami Alfarra, M.; Aumont, Bernard
2017-06-01
Our ability to model the chemical and thermodynamic processes that lead to secondary organic aerosol (SOA) formation is thought to be hampered by the complexity of the system. While there are fundamental models now available that can simulate the tens of thousands of reactions thought to take place, validation against experiments is highly challenging. Techniques capable of identifying individual molecules such as chromatography are generally only capable of quantifying a subset of the material present, making it unsuitable for a carbon budget analysis. Integrative analytical methods such as the Aerosol Mass Spectrometer (AMS) are capable of quantifying all mass, but because of their inability to isolate individual molecules, comparisons have been limited to simple data products such as total organic mass and the O : C ratio. More detailed comparisons could be made if more of the mass spectral information could be used, but because a discrete inversion of AMS data is not possible, this activity requires a system of predicting mass spectra based on molecular composition. In this proof-of-concept study, the ability to train supervised methods to predict electron impact ionisation (EI) mass spectra for the AMS is evaluated. Supervised Training Regression for the Arbitrary Prediction of Spectra (STRAPS) is not built from first principles. A methodology is constructed whereby the presence of specific mass-to-charge ratio (m/z) channels is fitted as a function of molecular structure before the relative peak height for each channel is similarly fitted using a range of regression methods. The widely used AMS mass spectral database is used as a basis for this, using unit mass resolution spectra of laboratory standards. Key to the fitting process is choice of structural information, or molecular fingerprint. Our approach relies on using supervised methods to automatically optimise the relationship between spectral characteristics and these molecular fingerprints. 
Therefore, any internal mechanisms or instrument features impacting on fragmentation are implicitly accounted for in the fitted model. Whilst one might expect a collection of keys specifically designed according to EI fragmentation principles to offer a robust basis, the suitability of a range of commonly available fingerprints is evaluated. Using available fingerprints in isolation, initial results suggest that the generic public MACCS fingerprints provide the most accurate trained model when combined with both decision trees and random forests, with median cosine angles of 0.94-0.97 between modelled and measured spectra. There is some sensitivity to the choice of fingerprint, but most of the sensitivity lies in the choice of regression technique. Support vector machines perform the worst, with median values of 0.78-0.85 and lower ranges approaching 0.4, depending on the fingerprint used. More detailed analysis of modelled versus measured mass spectra demonstrates important composition-dependent sensitivities on a compound-by-compound basis. This is further demonstrated when we apply the trained methods to a model α-pinene SOA system, using output from the GECKO-A model. This shows that use of a generic fingerprint referred to as FP4, and of one designed for vapour pressure predictions (Nannoolal), gives plausible mass spectra, whilst the use of the MACCS keys in isolation performs poorly in this application, demonstrating the need to evaluate model performance against other SOA systems rather than existing laboratory databases of single compounds. Given the limited number of compounds used within the AMS training dataset, it is difficult to prescribe which combination of approaches would lead to a robust generic model across all expected compositions. Nonetheless, the study demonstrates a methodology that would be improved with more training data, fingerprints designed explicitly for the fragmentation mechanisms occurring within the AMS, and data from additional mixed systems for further validation. To facilitate further development of the method, including application to other instruments, the model code for re-training is provided via a public GitHub and Zenodo software repository.
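The record above describes fitting m/z channel presence and peak height from molecular fingerprints. As a minimal, hypothetical illustration of the fingerprint-to-spectrum idea, a nearest-neighbour baseline (not the STRAPS regression pipeline; the fingerprints, spectra, and names below are invented toy values) can be sketched as:

```python
def tanimoto(a, b):
    """Tanimoto similarity between two equal-length binary fingerprints."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def predict_spectrum(query_fp, training):
    """Return the spectrum of the training compound whose fingerprint is
    most similar (by Tanimoto score) to the query fingerprint."""
    best = max(training, key=lambda rec: tanimoto(query_fp, rec["fp"]))
    return best["spectrum"]

# Toy library: fingerprint bits and unit-mass-resolution spectra (m/z: height).
library = [
    {"fp": [1, 1, 0, 0], "spectrum": {43: 1.0, 58: 0.4}},
    {"fp": [0, 1, 1, 1], "spectrum": {44: 1.0, 91: 0.7}},
]
print(predict_spectrum([1, 1, 0, 1], library))  # closest to the first record
```

A trained regressor replaces this lookup in the actual method, but the input/output shape, fingerprint in, spectrum out, is the same.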
Passive acoustic source localization using sources of opportunity.
Verlinden, Christopher M A; Sarkar, J; Hodgkiss, W S; Kuperman, W A; Sabra, K G
2015-07-01
The feasibility of using data-derived replicas from ships of opportunity for implementing matched field processing is demonstrated. The Automatic Identification System (AIS) is used to provide the coordinates for the replica library, and a correlation-based processing procedure is used to overcome the impediment that the replica library is constructed from sources with different spectra yet will further be used to locate another source with its own unique spectral structure. The method is illustrated with simulation and then verified using acoustic data from a 2009 experiment for which AIS information was retrieved from the United States Coast Guard Navigation Center Nationwide AIS database.
Principal Component Analysis of AIRS and CrIS Data
NASA Technical Reports Server (NTRS)
Aumann, H. H.; Manning, Evan
2015-01-01
Synthetic eigenvectors (EVs) used for the statistical analysis of the principal component (PC) reconstruction residual of large ensembles of data are a novel tool for the analysis of data from hyperspectral infrared sounders like the Atmospheric Infrared Sounder (AIRS) on the EOS Aqua and the Cross-track Infrared Sounder (CrIS) on the Suomi polar-orbiting satellites. Unlike empirical EVs, which are derived from the observed spectra, the synthetic EVs are derived from a large ensemble of spectra which are calculated assuming that, given a state of the atmosphere, the spectra created by the instrument can be accurately calculated. The synthetic EVs are then used to reconstruct the observed spectra. The analysis of the differences between the observed and reconstructed spectra for Simultaneous Nadir Overpasses of tropical oceans reveals unexpected differences at the more-than-200 mK level under relatively clear conditions, particularly in the mid-wave water vapor channels of CrIS. The repeatability of these differences using independently trained synthetic EVs and results from different years appears to rule out inconsistencies in the radiative transfer algorithm or the data simulation. The reasons for these discrepancies are under evaluation.
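The reconstruction-residual analysis described above can be sketched generically: project spectra onto the leading eigenvectors, reconstruct, and examine what remains. This is a minimal NumPy illustration on synthetic data standing in for sounder spectra, not the AIRS/CrIS processing itself (`pca_residual` and the toy data are illustrative):

```python
import numpy as np

def pca_residual(spectra, k):
    """Project spectra onto the leading k eigenvectors of their covariance
    and return the RMS reconstruction residual per spectrum."""
    X = np.asarray(spectra, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Right singular vectors of the centred data = covariance eigenvectors.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    ev = Vt[:k]                        # leading k eigenvectors (rows)
    recon = Xc @ ev.T @ ev + mean      # reconstruct from k PC scores
    return np.sqrt(((X - recon) ** 2).mean(axis=1))

rng = np.random.default_rng(0)
# Synthetic "spectra": two underlying modes plus small channel noise.
coeff = rng.normal(size=(100, 2))
modes = rng.normal(size=(2, 50))
obs = coeff @ modes + 0.01 * rng.normal(size=(100, 50))
print(pca_residual(obs, k=2).mean())  # small, near the 0.01 noise level
```

In the paper's setting, structure left in this residual (rather than pure noise) is what flags instrument or forward-model discrepancies.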
[The effects of interpretation bias for social events and automatic thoughts on social anxiety].
Aizawa, Naoki
2015-08-01
Many studies have demonstrated that individuals with social anxiety interpret ambiguous social situations negatively. It is, however, not clear whether this interpretation bias contributes to social anxiety distinctly from depressive automatic thoughts. The present study investigated the effects of negative interpretation bias and automatic thoughts on social anxiety. The Social Intent Interpretation-Questionnaire, which measures the tendency to interpret ambiguous social events as implying others' rejective intents, the short Japanese version of the Automatic Thoughts Questionnaire-Revised, and the Anthropophobic Tendency Scale were administered to 317 university students. Covariance structure analysis indicated that both rejective intent interpretation bias and negative automatic thoughts contributed to mental distress in social situations, mediated by a sense of powerlessness and excessive concern about self and others in social situations. Positive automatic thoughts reduced mental distress. These results indicate the importance of interpretation bias and negative automatic thoughts in the development and maintenance of social anxiety. Implications for understanding the cognitive features of social anxiety are discussed.
NASA Astrophysics Data System (ADS)
Parshin, A. S.; Igumenov, A. Yu.; Mikhlin, Yu. L.; Pchelyakov, O. P.; Zhigalov, V. S.
2016-05-01
The inelastic electron scattering cross-section spectra of Fe have been calculated from experimental reflection electron energy loss spectra as dependences of the product of the inelastic mean free path and the differential inelastic electron scattering cross section on the electron energy loss. It has been shown that the inelastic electron scattering cross-section spectra have certain advantages over the electron energy loss spectra in the analysis of the interaction of electrons with matter. The peaks of energy loss in the spectra of characteristic electron energy loss and inelastic electron scattering cross sections have been determined from the integral and differential spectra. It has been shown that the energy of the bulk plasmon is practically independent of the energy of primary electrons in the characteristic electron energy loss spectra and monotonically increases with increasing energy of primary electrons in the inelastic electron scattering cross-section spectra. The variation in the maximum energy of the inelastic electron scattering cross-section spectra is caused by the redistribution of intensities over the peaks of losses due to various excitations. The inelastic electron scattering cross-section spectra have been analyzed using decomposition of the spectra into energy loss peaks. This method has been used for the quantitative estimation of the contributions from different energy loss processes to the inelastic electron scattering cross-section spectra of Fe and for the determination of the nature of the energy loss peaks.
Quality Control of True Height Profiles Obtained Automatically from Digital Ionograms.
1982-05-01
Keywords: Ionosphere; Digisonde; Electron Density Profile; Ionogram; Autoscaling; ARTIST.
Abstract (fragment): ... analysis technique currently used with the ionogram traces scaled automatically by the ARTIST software [Reinisch and Huang, 1983; Reinisch et al., 1984], and the generalized polynomial analysis technique POLAN [Titheridge, 1985], using the same ARTIST-identified ionogram traces. 2. To determine how ...
Method and system for calibrating acquired spectra for use in spectral analysis
Reber, Edward L.; Rohde, Kenneth W.; Blackwood, Larry G.
2010-09-14
A method for calibrating acquired spectra for use in spectral analysis includes performing Gaussian peak fitting to spectra acquired by a plurality of NaI detectors to define peak regions. A Na and annihilation doublet may be located among the peak regions. A predetermined energy level may be applied to one of the peaks in the doublet and a location of a hydrogen peak may be predicted based on the location of at least one of the peaks of the doublet. Control systems for calibrating spectra are also disclosed.
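Gaussian peak fitting of the kind described can be illustrated with the classic three-point log-parabola estimator: for a Gaussian, ln(y) is a parabola, so three equally spaced samples around a peak determine its centre and width analytically. A minimal sketch with synthetic counts (the 511 keV annihilation-line numbers are illustrative, not taken from the patent):

```python
import math

def gaussian_vertex(x0, h, y_minus, y0, y_plus):
    """Estimate the centre and width of a Gaussian peak from three equally
    spaced samples at x0-h, x0, x0+h via a parabola fit to ln(y)."""
    lm, l0, lp = math.log(y_minus), math.log(y0), math.log(y_plus)
    a = (lp + lm - 2.0 * l0) / 2.0                     # curvature of ln(y)
    centre = x0 + h * (lm - lp) / (2.0 * (lp + lm - 2.0 * l0))
    sigma = h * math.sqrt(-1.0 / (2.0 * a))            # from a = -1/(2 sigma^2)
    return centre, sigma

# Synthetic counts from a Gaussian centred at 511.3 keV with sigma = 2.0 keV.
g = lambda x: 1000.0 * math.exp(-(x - 511.3) ** 2 / (2 * 2.0 ** 2))
c, s = gaussian_vertex(511.0, 1.0, g(510.0), g(511.0), g(512.0))
print(c, s)  # recovers 511.3 and 2.0 up to float precision
```

Real spectra need background subtraction and more samples per peak, but the log-parabola vertex is the core of many fast channel-to-energy calibration routines.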
NASA Astrophysics Data System (ADS)
Penttilä, Antti; Martikainen, Julia; Gritsevich, Maria; Muinonen, Karri
2018-02-01
Meteorite samples are measured with the University of Helsinki integrating-sphere UV-vis-NIR spectrometer. The resulting spectra of 30 meteorites are compared with selected spectra from the NASA Planetary Data System meteorite spectra database. The spectral measurements are transformed with the principal component analysis, and it is shown that different meteorite types can be distinguished from the transformed data. The motivation is to improve the link between asteroid spectral observations and meteorite spectral measurements.
Measurement of CIB power spectra over large sky areas from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Daisy Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2017-04-01
We present new measurements of the power spectra of the cosmic infrared background (CIB) anisotropies using the Planck 2015 full-mission High Frequency Instrument data at 353, 545 and 857 GHz over 20 000 deg2. We use techniques similar to those applied for the cosmological analysis of Planck, subtracting dust emission at the power spectrum level. Our analysis gives stable solutions for the CIB power spectra with increasing sky coverage up to about 50 per cent of the sky. These spectra agree well with H I-cleaned spectra from Planck measured on much smaller areas of sky with low Galactic dust emission. At 545 and 857 GHz, our CIB spectra agree well with those measured from Herschel data. We find that the CIB spectra at ℓ ≳ 500 are well fitted by a power-law model for the clustered CIB, with a shallow index γcib = 0.53 ± 0.02. This is consistent with the CIB results at 217 GHz from the cosmological parameter analysis of Planck. We show that a linear combination of the 545 and 857 GHz Planck maps is dominated by the CIB fluctuations at multipoles ℓ ≳ 300.
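A power-law fit of the form C_ell ∝ ell^(-gamma), like the clustered-CIB model quoted above, reduces to linear regression in log-log space. A minimal sketch on synthetic data (the spectrum values are illustrative, not Planck measurements):

```python
import math

def powerlaw_index(ells, cls):
    """Least-squares slope of log(C_ell) vs log(ell); for C_ell ∝ ell^(-gamma)
    the fitted slope equals -gamma."""
    xs = [math.log(l) for l in ells]
    ys = [math.log(c) for c in cls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic clustered-CIB-like spectrum with gamma = 0.53 over ell >= 500.
ells = list(range(500, 2000, 50))
cls = [l ** -0.53 for l in ells]
print(powerlaw_index(ells, cls))  # recovers 0.53 up to float precision
```

A real analysis would weight multipole bins by their uncertainties, but the log-log slope is the quantity being reported as gamma.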
NASA Astrophysics Data System (ADS)
Suresh, D. M.; Amalanathan, M.; Hubert Joe, I.; Bena Jothy, V.; Diao, Yun-Peng
2014-09-01
The molecular structure, vibrational analysis and molecular docking analysis of the 3-Methyl-1,4-dioxo-1,4-dihydronaphthalen-2-yl 4-aminobenzoate (MDDNAB) molecule have been carried out using FT-IR and FT-Raman spectroscopic techniques and the DFT method. The equilibrium geometry, harmonic vibrational wavenumbers and various bonding features have been computed using the density functional method. The calculated molecular geometry has been compared with experimental data. A detailed interpretation of the vibrational spectra has been carried out using the VEDA program. The hyper-conjugative interactions and charge delocalization have been analyzed using natural bond orbital (NBO) analysis. The simulated FT-IR and FT-Raman spectra satisfactorily coincide with the experimental spectra. Potential energy surface (PES) and charge analyses have also been carried out. Molecular docking was performed to identify the binding energy and the hydrogen bonding with the cancer protein molecule.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Zhu, Ying; Tan, Tuck Lee
2016-04-15
An effective and simple analytical method using Fourier transform infrared (FTIR) spectroscopy to distinguish wild-grown, high-quality Ganoderma lucidum (G. lucidum) from cultivated samples is of essential importance for its quality assurance and medicinal value estimation. Commonly used chemical and analytical methods using the full spectrum are not very effective for detection and interpretation owing to the complexity of the herbal medicine. In this study, two penalized discriminant analysis models, penalized linear discriminant analysis (PLDA) and elastic net (Elnet), using FTIR spectroscopy have been explored for the purpose of discrimination and interpretation. The classification performances of the two penalized models have been compared with two widely used multivariate methods, principal component discriminant analysis (PCDA) and partial least squares discriminant analysis (PLSDA). The Elnet model, involving a combination of L1 and L2 norm penalties, enabled automatic selection of a small number of informative spectral absorption bands and gave an excellent classification accuracy of 99% for discrimination between spectra of wild-grown and cultivated G. lucidum. Its classification performance was superior to that of the PLDA model in a pure L1 setting and outperformed the PCDA and PLSDA models using the full spectrum. The well-performed selection of informative spectral features leads to a substantial reduction in model complexity and an improvement in classification accuracy, and it is particularly helpful for the quantitative interpretation of the major chemical constituents of G. lucidum regarding its anti-cancer effects. Copyright © 2016 Elsevier B.V. All rights reserved.
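The elastic net's combination of L1 and L2 penalties is what drives the automatic selection of a few informative bands. A minimal sketch of an elastic-net fit by proximal gradient (ISTA) on synthetic data; this is a generic regression implementation for illustration, not the authors' discriminant model:

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=500):
    """Minimise 0.5/n*||y - Xw||^2 + alpha*(l1_ratio*||w||_1
    + 0.5*(1 - l1_ratio)*||w||^2) by proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    a1, a2 = alpha * l1_ratio, alpha * (1 - l1_ratio)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * a1, 0.0)  # L1 prox
        w /= 1.0 + step * a2                                     # L2 shrink
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))                    # 50 "spectral bands"
true_w = np.zeros(50); true_w[:3] = [2.0, -1.5, 1.0]  # 3 informative bands
y = X @ true_w + 0.05 * rng.normal(size=200)
w_hat = elastic_net(X, y, alpha=0.05)
```

The soft-thresholding step zeros out uninformative coefficients exactly, which is the "automatic band selection" the abstract refers to; the L2 term keeps correlated bands grouped rather than arbitrarily dropping all but one.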
Electron microprobe mineral analysis guide
NASA Technical Reports Server (NTRS)
Brown, R. W.
1980-01-01
Electron microprobe mineral analysis guide is a compilation of X-ray tables and spectra recorded from various mineral matrices. Spectra were obtained using an electron microprobe equipped with LiF-geared, curved-crystal X-ray spectrometers, utilizing typical analytical operating conditions: 15 kV accelerating potential and 0.02 microampere sample current as measured on a clinopyroxene standard (CP19). Tables and spectra are presented for the majority of elements, fluorine through uranium, occurring in mineral samples from lunar, meteoritic and terrestrial sources. Tables for each element contain relevant analytical information, i.e., analyzing crystal, X-ray peak, background and relative intensity information, X-ray interferences, and a section containing notes on the measurement. Originally intended to cover silicate and oxide minerals, the tables and spectra have been expanded to cover other mineral phases. The guide is intended as a spectral base to which additional spectra can be added as the analyst encounters new mineral matrices.
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on the chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data are first de-noised and normalized; second, the differences between and clustering of the spectra at different time points are calculated; the purity of the whole chromatographic peak is then analysed and the region is sought in which the spectra at different time points are stable. The feature spectra are extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method is chosen to separate the overlapped peaks and obtain the flow (elution) curve associated with each feature spectrum. The three-dimensional resolved chromatographic-spectral peaks can then be obtained by matrix operations on the feature spectra and the flow curves. The results showed that this method could separate the overlapped peaks.
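The nonnegative least-squares step can be sketched with a simple projected-gradient solver; the feature spectra and mixture below are invented for illustration (a production solver would use an active-set method such as Lawson-Hanson):

```python
import numpy as np

def nnls_pg(A, b, n_iter=2000):
    """Nonnegative least squares, min ||Ax - b|| s.t. x >= 0,
    by projected gradient descent."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = np.maximum(x - step * (A.T @ (A @ x - b)), 0.0)
    return x

# Columns = feature spectra of two co-eluting components (toy values);
# the observation is a nonnegative mixture of the two.
S = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 1.0]])
mix = S @ np.array([2.0, 3.0])   # true nonnegative contributions
x = nnls_pg(S, mix)
print(x)                         # recovers approximately [2.0, 3.0]
```

Solving this at each retention time, with the feature spectra as the fixed basis, yields the per-component elution profiles the abstract describes.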
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken with the objective of checking the effectiveness of spectral similarity measures for developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, classification of pixels is obtained by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Though these algorithms are readily used, little emphasis has been placed on the use of various spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here a successful attempt has been made to develop precise crop spectra based on spectral similarity. As the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data; hence, in contrast to the conventional method, data precision screening has been performed to serve the purpose of the present research work. The effectiveness of the developed precise field spectra was evaluated by spectral discrimination measures, which yielded higher discrimination values than spectra developed conventionally. Overall classification accuracy is 51.89% for the image classified by field spectra selected conventionally and 75.47% for the image classified by field spectra selected precisely based on spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for images classified using conventional and precise field spectra, respectively. The appreciably higher classification accuracy, KHAT, and Z values show the potential of a new approach for field spectra selection based on spectral similarity measures.
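One widely used spectral similarity measure of the kind the study evaluates is the spectral angle. A minimal sketch (the reflectance values are invented):

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two reflectance spectra; smaller
    angles mean more similar spectral shapes, independent of overall
    illumination scale."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# A brighter copy of the same spectrum has angle 0; a different shape does not.
ref = [0.1, 0.3, 0.5, 0.4]
print(spectral_angle(ref, [0.2, 0.6, 1.0, 0.8]))  # 0.0 (same shape, 2x brighter)
print(spectral_angle(ref, [0.5, 0.4, 0.1, 0.3]))  # clearly nonzero
```

Screening field spectra amounts to keeping only those whose angle to the candidate crop spectrum falls below a chosen tolerance before averaging them into the reference.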
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maximums, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for the selection of the threshold were suggested to guide that important choice, which otherwise rely on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions as well as investigating the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
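A POT analysis fits a generalized Pareto distribution (GPD) to the excesses above the threshold, and automatic selection procedures typically scan candidate thresholds looking for stable parameter estimates. A rough sketch using method-of-moments GPD fits on synthetic peak flows; the selection rule itself is only indicated, and the specific procedures examined in the study may differ:

```python
import random
import statistics

def gpd_moments(excesses):
    """Method-of-moments estimates (shape xi, scale sigma) for a
    generalized Pareto distribution fitted to threshold excesses."""
    m = statistics.fmean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

def scan_thresholds(peaks, quantiles):
    """Fit a GPD above each candidate threshold; an automatic rule can then
    pick the lowest threshold at which the shape estimate has stabilised."""
    out = []
    for q in quantiles:
        u = sorted(peaks)[int(q * len(peaks))]
        exc = [p - u for p in peaks if p > u]
        out.append((u, *gpd_moments(exc)))
    return out

random.seed(42)
flows = [random.expovariate(1 / 50.0) for _ in range(3000)]  # synthetic peaks
table = scan_thresholds(flows, [0.80, 0.85, 0.90])
for u, xi, sigma in table:
    print(f"u={u:7.1f}  xi={xi:+.3f}  sigma={sigma:6.1f}")
```

For exponential data the excesses are memoryless, so xi should hover near 0 and sigma near the mean (50) at every threshold; departures from that stability on real data are what flag a threshold as too low.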
Candiota, A P; Majós, C; Julià-Sapé, M; Cabañas, M; Acebes, J J; Moreno-Torres, A; Griffiths, J R; Arús, C
2011-01-01
MRI and MRS are established methodologies for evaluating intracranial lesions. One MR spectral feature suggested for in vivo grading of astrocytic tumours is the apparent myo-lnositol (ml) intensity (ca 3.55 ppm) at short echo times, although glycine (gly) may also contribute in vivo to this resonance. The purpose of this study was to quantitatively evaluate the ml + gly contribution to the recorded spectral pattern in vivo and correlate it with in vitro data obtained from perchloric acid extraction of tumour biopsies. Patient spectra (n = 95) at 1.5T at short (20-31 ms) and long (135-136 ms) echo times were obtained from the INTERPRET MRS database (http://gabrmn.uab.eslinterpretvalidateddbl). Phantom spectra were acquired with a comparable protocol. Spectra were automatically processed and the ratios of the (ml + gly) to Cr peak heights ((ml + gly)/Cr) calculated. Perchloric acid extracts of brain tumour biopsies were analysed by high-resolution NMR at 9.4T. The ratio (ml + gly)/Cr decreased significantly with astrocytic grade in vivo between low-grade astrocytoma (A2) and glioblastoma multiforme (GBM). In vitro results displayed a somewhat different tendency, with anaplastic astrocytomas having significantly higher (ml + gly)/Cr than A2 and GBM. The discrepancy between in vivo and in vitro data suggests that the NMR visibility of glycine in glial brain tumours is restricted in vivo.
PICKY: a novel SVD-based NMR spectra peak picking method.
Alipanahi, Babak; Gao, Xin; Karakoc, Emre; Donaldson, Logan; Li, Ming
2009-06-15
Picking peaks from experimental NMR spectra is a key unsolved problem for automated NMR protein structure determination. Such a process is a prerequisite for resonance assignment, nuclear Overhauser enhancement (NOE) distance restraint assignment, and structure calculation tasks. Manual or semi-automatic peak picking, which is currently the predominant approach in NMR labs, is tedious, time consuming and costly. We introduce new ideas, including noise-level estimation, component forming and sub-division, singular value decomposition (SVD)-based peak picking, and peak pruning and refinement. PICKY is developed as an automated peak picking method. Differing from previous research on peak picking, we provide a systematic study of the proposed method. PICKY is tested on 32 real 2D and 3D spectra of eight target proteins, and achieves an average of 88% recall and 74% precision. PICKY is efficient: it takes PICKY on average 15.7 s to process an NMR spectrum. More important than these numbers, PICKY actually works in practice. We feed peak lists generated by PICKY to IPASS for resonance assignment, feed IPASS assignments to SPARTA for fragment generation, and feed SPARTA fragments to FALCON for structure calculation. This results in high-resolution structures of several proteins, for example, TM1112 at 1.25 Å. PICKY is available upon request. The peak lists produced by PICKY can be easily loaded into SPARKY to enable a better interactive strategy for rapid peak picking.
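PICKY's actual pipeline involves noise-level estimation, component forming and sub-division, and SVD-based picking with pruning; the sketch below only illustrates the general idea behind the SVD step, low-rank denoising followed by thresholded local-maximum picking, on a synthetic 2D plane (all parameters invented):

```python
import numpy as np

def svd_denoise(plane, rank):
    """Low-rank SVD reconstruction: keeps correlated peak structure,
    suppresses uncorrelated point-to-point noise."""
    U, s, Vt = np.linalg.svd(plane, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def pick_peaks(plane, noise_sigma, k=5.0):
    """Return (row, col) of local maxima exceeding k * noise_sigma."""
    peaks = []
    for i in range(1, plane.shape[0] - 1):
        for j in range(1, plane.shape[1] - 1):
            v = plane[i, j]
            if v > k * noise_sigma and v == plane[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks

rng = np.random.default_rng(7)
plane = 0.5 * rng.normal(size=(64, 64))   # noise floor, sigma = 0.5
plane[20, 30] += 25.0                     # one synthetic cross-peak
found = pick_peaks(svd_denoise(plane, rank=3), noise_sigma=0.5)
print(found)                              # contains (20, 30)
```

The low-rank step is what lets a genuine peak, which projects onto a few strong singular components, survive while isolated noise spikes are averaged away.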
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (≈3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
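The key property, preserving each line's integrated opacity while the sample count adapts to line strength, can be illustrated by the fact that a Voigt deviate is the sum of a Gaussian deviate and a Cauchy (Lorentzian) deviate, since the Voigt profile is their convolution. A minimal sketch (grid and line parameters are invented; the paper's actual sampling recipe is more elaborate):

```python
import math
import random

def add_voigt_line(opacity, nu0, dnu, center, sigma, gamma, strength, n):
    """Monte Carlo deposition of one line onto an opacity grid starting at
    wavenumber nu0 with bin width dnu. Each of the n samples carries
    strength/n, so the integrated line opacity is preserved whatever n is;
    strong lines simply get more samples than weak ones."""
    w = strength / n
    for _ in range(n):
        # Gaussian deviate + Cauchy deviate = Voigt-distributed deviate.
        x = center + random.gauss(0.0, sigma) \
                   + gamma * math.tan(math.pi * (random.random() - 0.5))
        k = int(round((x - nu0) / dnu))
        if 0 <= k < len(opacity):
            opacity[k] += w

random.seed(3)
grid = [0.0] * 400   # 400 bins of width 0.01 cm^-1 starting at 100 cm^-1
add_voigt_line(grid, 100.0, 0.01, 102.0, 0.02, 0.01, strength=5.0, n=10000)
print(sum(grid))     # close to 5.0 (small Lorentzian tail leaves the grid)
```

A weak line sampled with n = 1 still deposits its full strength into one bin, which is exactly how the continuum from billions of weak lines is retained at negligible cost.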
An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés
2015-10-29
The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis depends on a prior, accurate identification of the glottal gap, which is the most challenging step for further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed. Thanks to the adaptive ROI, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when closure of the vocal folds is incomplete. The novelties of the proposed algorithm lie in the use of temporal information to identify an adaptive ROI and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure is developed for synthesizing multiline VKG by identification of the glottal main axis.
High-Speed Automatic Microscopy for Real Time Tracks Reconstruction in Nuclear Emulsion
NASA Astrophysics Data System (ADS)
D'Ambrosio, N.
2006-06-01
The Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment will use a massive nuclear emulsion detector to search for ν_μ → ν_τ oscillation by identifying τ leptons through the direct detection of their decay topology. The feasibility of experiments using a large mass emulsion detector is linked to the impressive progress under way in the development of automatic emulsion analysis. A new generation of scanning systems requires the development of fast automatic microscopes for emulsion scanning and image analysis to reconstruct tracks of elementary particles. The paper presents the European Scanning System (ESS) developed in the framework of the OPERA collaboration.
Application of Magnetic Nanoparticles in Pretreatment Device for POPs Analysis in Water
NASA Astrophysics Data System (ADS)
Chu, Dongzhi; Kong, Xiangfeng; Wu, Bingwei; Fan, Pingping; Cao, Xuan; Zhang, Ting
2018-01-01
In order to reduce the processing time and labour of POPs pretreatment, and to solve the problem of extraction columns becoming clogged, this paper proposes a new extraction and enrichment technology based on magnetic nanoparticles. The automatic pretreatment system comprises an automatic sampling unit, an extraction-enrichment unit and an elution-enrichment unit. The paper briefly introduces the preparation of the magnetic nanoparticles and describes in detail the structure and control system of the automatic pretreatment system. Mass-recovery experiments showed that the system is capable of pretreatment for POPs analysis, with magnetic nanoparticle recovery rates above 70%. In conclusion, three optimization recommendations are proposed.
Application of Semantic Tagging to Generate Superimposed Information on a Digital Encyclopedia
NASA Astrophysics Data System (ADS)
Garrido, Piedad; Tramullas, Jesus; Martinez, Francisco J.
We can find in the literature several works regarding the automatic or semi-automatic processing of textual documents with historical information using free software technologies. However, more research is needed to integrate analysis of the context and to cover the peculiarities of the Spanish language from a semantic point of view. This research work proposes a novel knowledge-based strategy combining subject-centric computing, a topic-oriented approach, and superimposed information. Its subsequent combination with artificial intelligence techniques led to an automatic analysis, after implementing a made-to-measure interpreted algorithm which, in turn, produced a good number of associations and events with 90% reliability.
iMARS--mutation analysis reporting software: an analysis of spontaneous cII mutation spectra.
Morgan, Claire; Lewis, Paul D
2006-01-31
The sensitivity of any mutational assay is determined by the level at which spontaneous mutations occur in the corresponding untreated controls. Establishing the type and frequency at which mutations occur naturally within a test system is essential if one is to draw scientifically sound conclusions regarding chemically induced mutations. Currently, mutation-spectra analysis is laborious and time-consuming. Thus, we have developed iMARS, a comprehensive mutation-spectrum analysis package that utilises routinely used methodologies and visualisation tools. To demonstrate the use and capabilities of iMARS, we have analysed the distribution, types and sequence context of spontaneous base substitutions derived from the cII gene mutation assay in transgenic animals. Analysis of spontaneous mutation spectra revealed variation both within and between the transgenic rodent test systems Big Blue Mouse, MutaMouse and Big Blue Rat. The most common spontaneous base substitutions were G:C-->A:T transitions and G:C-->T:A transversions. All Big Blue Mouse spectra were significantly different from each other by distribution and nearly all by mutation type, whereas the converse was true for the other test systems. Twenty-eight mutation hotspots were observed across all spectra generally occurring in CG, GA/TC, GG and GC dinucleotides. A mutation hotspot at nucleotide 212 occurred at a higher frequency in MutaMouse and Big Blue Rat. In addition, CG dinucleotides were the most mutable in all spectra except two Big Blue Mouse spectra. Thus, spontaneous base-substitution spectra showed more variation in distribution, type and sequence context in Big Blue Mouse relative to spectra derived from MutaMouse and Big Blue Rat. The results of our analysis provide a baseline reference for mutation studies utilising the cII gene in transgenic rodent models. The potential differences in spontaneous base-substitution spectra should be considered when making comparisons between these test systems. 
The ease with which iMARS has allowed us to carry out an exhaustive investigation assessing mutation distribution, mutation type, strand bias, target sequences and motifs, as well as to predict mutation hotspots, makes it a valuable tool for distinguishing true chemically induced hotspots from background mutations and for obtaining a true reflection of mutation frequency.
NASA Astrophysics Data System (ADS)
Ahmed, Nasar; Umar, Zeshan A.; Ahmed, Rizwan; Aslam Baig, M.
2017-10-01
We present qualitative and quantitative analysis of the trace elements present in different brands of tobacco available in Pakistan using laser-induced breakdown spectroscopy (LIBS) and laser ablation time-of-flight mass spectrometry (LA-TOF-MS). The compositional analysis using the calibration-free LIBS technique is based on the observed emission spectra of the laser-produced plasma plume, whereas the elemental composition analysis using LA-TOF-MS is based on the mass spectra of the ions produced by laser ablation. The optical emission spectra of these samples contain spectral lines of calcium, magnesium, sodium, potassium, silicon, strontium, barium, lithium and aluminum with varying intensities. The corresponding mass spectra recorded with LA-TOF-MS yielded the concentrations of these elements. The analysis of different brands of cigarettes demonstrates that LIBS coupled with LA-TOF-MS is a powerful technique for the elemental analysis of trace elements in any solid sample.
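Calibration-free LIBS analyses typically extract the plasma temperature from a Boltzmann plot of line intensities. A minimal sketch on synthetic lines (all atomic data below are invented for illustration, not taken from the study):

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
    """Estimate plasma temperature from emission-line intensities.

    For an LTE plasma, ln(I*lambda / (g_u*A_ul)) = -E_u/(k_B*T) + const,
    so a linear fit of that quantity vs E_u gives T from the slope.
    """
    y = np.log(intensity * wavelength_nm / (g_upper * A_ul))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# Synthetic check: generate lines at a known temperature and recover it.
T_true = 11000.0  # K
E_u = np.array([2.0, 3.1, 4.2, 5.0, 5.8])            # upper-level energies (eV)
g_u = np.array([3, 5, 7, 5, 9], dtype=float)         # level degeneracies
A_ul = np.array([1e7, 5e7, 2e7, 8e6, 3e7])           # transition probabilities (s^-1)
lam = np.array([589.0, 422.7, 285.2, 670.8, 396.8])  # wavelengths (nm)
I = g_u * A_ul / lam * np.exp(-E_u / (K_B_EV * T_true))

T_est = boltzmann_plot_temperature(I, lam, g_u, A_ul, E_u)
print(round(T_est))  # recovers ~11000 K on noiseless synthetic lines
```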
Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei
2010-08-01
During Raman spectroscopy analysis, organic molecules and contaminations can obscure or swamp Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. Through analysis of the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of fluorescence background on Raman spectra clustering analysis is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background-correction solution is provided for clustering and other analyses.
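A Python sketch of the same pipeline, using an Eilers-style asymmetric least squares baseline as a stand-in for baselineWavelet and a random-forest classifier on PCA scores; the spectra below are synthetic:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers & Boelens)."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(W + lam * D @ D.T, w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Synthetic Raman-like spectra: a sharp peak riding on a broad fluorescence slope.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
def spectrum(peak_pos):
    background = 5 * np.exp(-((x - 0.3) / 0.6) ** 2)   # fluorescence background
    peak = np.exp(-((x - peak_pos) / 0.01) ** 2)       # Raman band
    return background + peak + 0.01 * rng.standard_normal(x.size)

X = np.array([spectrum(0.4) for _ in range(20)] + [spectrum(0.7) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)
X_corr = np.array([s - als_baseline(s) for s in X])    # background correction

scores = PCA(n_components=5).fit_transform(X_corr)
acc = cross_val_score(RandomForestClassifier(random_state=0), scores, y, cv=5).mean()
print(round(acc, 2))
```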
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
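The symbolic-derivation step can be illustrated in miniature. The sketch below (in Python with sympy, not the authors' MATHEMATICA generator) automatically derives the uniaxial stress for a hypothetical 1D Fung-type exponential strain energy and checks it against a finite difference:

```python
import sympy as sp

# Automatic derivation of a stress response from a strain-energy function,
# in the spirit of UMAT code generation. The 1D Fung-type form below is a
# toy stand-in, not the paper's Fung-orthotropic energy.
lam, c, a = sp.symbols('lam c a', positive=True)
E = (lam**2 - 1) / 2                       # uniaxial Green strain for stretch lam
W = (c / 2) * (sp.exp(a * E**2) - 1)       # Fung-type strain-energy density
P = sp.simplify(sp.diff(W, lam))           # first Piola-Kirchhoff stress, auto-derived

stress = sp.lambdify((lam, c, a), P, 'math')   # "generated code"

# Finite-difference check that the generated stress matches dW/dlam.
w = sp.lambdify((lam, c, a), W, 'math')
h, l0 = 1e-6, 1.2
fd = (w(l0 + h, 1.0, 2.0) - w(l0 - h, 1.0, 2.0)) / (2 * h)
print(abs(stress(l0, 1.0, 2.0) - fd) < 1e-5)
```

The same pattern (symbolic energy, automatic differentiation, code emission) generalizes to the full tangent-stiffness tensors a UMAT must supply.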
Automatic high throughput empty ISO container verification
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-04-01
Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms is described that processes real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.
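A toy version of such a screening pipeline: threshold a normalized radiograph and map the attenuating-pixel fraction onto the three classes. The thresholds and images below are invented for illustration, not the paper's algorithm:

```python
import numpy as np

def classify_container(image, atten_thresh=0.35, frac_empty=0.001, frac_dense=0.02):
    """Toy three-way screening of a normalized radiograph (1.0 = no attenuation).

    Measures the fraction of pixels darker than `atten_thresh` and maps it to
    "empty", "suspect threat" (small, dense object), or "not empty".
    """
    frac = (image < atten_thresh).mean()
    if frac < frac_empty:
        return "empty"
    if frac < frac_dense:
        return "suspect threat"   # compact, highly attenuating region
    return "not empty"

rng = np.random.default_rng(2)
empty = np.clip(rng.normal(0.95, 0.02, (100, 200)), 0, 1)   # air-filled box
cargo = empty.copy(); cargo[20:80, 30:170] = 0.2            # large dense load
suspect = empty.copy(); suspect[40:50, 90:100] = 0.1        # small dense object

print(classify_container(empty), classify_container(cargo), classify_container(suspect))
```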
Disentangling Time-series Spectra with Gaussian Processes: Applications to Radial Velocity Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czekala, Ian; Mandel, Kaisey S.; Andrews, Sean M.
Measurements of radial velocity variations from the spectroscopic monitoring of stars and their companions are essential for a broad swath of astrophysics; these measurements provide access to the fundamental physical properties that dictate all phases of stellar evolution and facilitate the quantitative study of planetary systems. The conversion of those measurements into both constraints on the orbital architecture and individual component spectra can be a serious challenge, however, especially for extreme flux ratio systems and observations with relatively low sensitivity. Gaussian processes define sampling distributions of flexible, continuous functions that are well-motivated for modeling stellar spectra, enabling proficient searches for companion lines in time-series spectra. We introduce a new technique for spectral disentangling, where the posterior distributions of the orbital parameters and intrinsic, rest-frame stellar spectra are explored simultaneously without needing to invoke cross-correlation templates. To demonstrate its potential, this technique is deployed on red-optical time-series spectra of the mid-M-dwarf binary LP661-13. We report orbital parameters with improved precision compared to traditional radial velocity analysis and successfully reconstruct the primary and secondary spectra. We discuss potential applications for other stellar and exoplanet radial velocity techniques and extensions to time-variable spectra. The code used in this analysis is freely available as an open-source Python package.
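The modeling idea can be illustrated with a toy single-epoch example: treat the continuum-normalized spectrum as a draw from a Gaussian process and compute its posterior mean from one noisy observation. The kernel and all numbers below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def se_kernel(x1, x2, amp=1.0, ell=0.5):
    """Squared-exponential covariance: samples are smooth functions, a
    reasonable prior for continuum-normalized line profiles."""
    return amp**2 * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 40)
f_true = 1.0 - 0.8 * np.exp(-0.5 * x**2)          # an absorption line
y = f_true + 0.05 * rng.standard_normal(x.size)   # noisy observed spectrum

# GP posterior mean about a continuum prior mean of 1.
noise = 0.05**2 * np.eye(x.size)
K = se_kernel(x, x)
post_mean = K @ np.linalg.solve(K + noise, y - 1.0) + 1.0

rms = np.sqrt(np.mean((post_mean - f_true) ** 2))
print(round(rms, 3))  # posterior mean lies closer to the truth than the raw noise level
```

The disentangling method extends this by tying many epochs together through Doppler shifts parameterized by the orbit.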
NASA Astrophysics Data System (ADS)
Mikuła, Andrzej; Król, Magdalena; Mozgawa, Włodzimierz; Koleżyński, Andrzej
2018-04-01
Vibrational spectroscopy can be considered one of the most important methods for structural characterization of various porous aluminosilicate materials, including zeolites. On the other hand, vibrational spectra of zeolites are still difficult to interpret, particularly in the pseudolattice region, where bands related to ring oscillations can be observed. Using a combination of theoretical and computational approaches, a detailed analysis of these regions of the spectra is possible; such analysis should, however, be carried out employing models of different levels of complexity at the same level of theory. In this work, an attempt was made to identify ring oscillations in the vibrational spectra of selected zeolite structures. A series of ab initio calculations focused on S4R, S6R and, as a novelty, 5-1 isolated clusters, as well as periodic siliceous frameworks built from those building units (ferrierite (FER), mordenite (MOR) and heulandite (HEU) type), have been carried out. Due to the hierarchical structure of zeolite frameworks, it can be expected that the total envelope of a zeolite spectrum should be, to good accuracy, a sum of the spectra of the structural elements that build the framework. Based on the results of HF calculations, normal vibrations have been visualized and a detailed analysis of the pseudolattice range of the resulting theoretical spectra has been carried out. The obtained results have been applied to the interpretation of experimental spectra of selected zeolites.
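The band-assignment logic rests on standard harmonic normal-mode analysis: diagonalize the mass-weighted Hessian and read frequencies from its eigenvalues. A minimal numpy sketch on a toy 1D triatomic model (not one of the zeolite clusters):

```python
import numpy as np

def normal_modes(hessian, masses):
    """Harmonic frequencies and modes from a mass-weighted Hessian
    (omega = sqrt of each eigenvalue of M^{-1/2} H M^{-1/2})."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    H_mw = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)
    evals, evecs = np.linalg.eigh(H_mw)
    return np.sqrt(np.clip(evals, 0, None)), evecs

# 1D linear triatomic toy model (two equal springs k): the analytic frequencies
# are omega = 0 (translation), sqrt(k/m1), and sqrt(k*(1/m1 + 2/m2)).
k, m1, m2 = 4.0, 1.0, 2.0
H = k * np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
omega, modes = normal_modes(H, np.array([m1, m2, m1]))
print(np.round(np.sort(omega), 6))  # 0, 2, 2*sqrt(2) for these parameters
```

Visualizing the eigenvectors is what lets one tag a band as, e.g., a ring-breathing oscillation.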
Frequency-domain nonlinear regression algorithm for spectral analysis of broadband SFG spectroscopy.
He, Yuhan; Wang, Ying; Wang, Jingjing; Guo, Wei; Wang, Zhaohui
2016-03-01
The resonant spectral bands of broadband sum frequency generation (BB-SFG) spectra are often distorted by the nonresonant contribution and the lineshapes of the laser pulses. A frequency-domain nonlinear regression (FDNLR) algorithm is proposed to retrieve the first-order polarization induced by the infrared pulse and to improve the analysis of SFG spectra through simultaneous fitting of a series of time-resolved BB-SFG spectra. The principle of FDNLR is presented, and its validity and reliability are tested by the analysis of synthetic and measured SFG spectra. The relative phase, dephasing time, and lineshapes of the resonant vibrational SFG bands can be retrieved without any preset assumptions about the SFG bands or the incident laser pulses.
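The fitting step can be sketched as nonlinear least squares on the standard SFG lineshape, one resonant Lorentzian interfering with a nonresonant background (this generic single-spectrum model is a stand-in, not the authors' full time-resolved FDNLR scheme):

```python
import numpy as np
from scipy.optimize import curve_fit

def sfg_intensity(w, a_nr, phi, amp, w0, gamma):
    """|chi_NR * e^{i*phi} + A / (w - w0 + i*Gamma)|^2: one resonant band
    interfering with a nonresonant background of relative phase phi."""
    chi = a_nr * np.exp(1j * phi) + amp / (w - w0 + 1j * gamma)
    return np.abs(chi) ** 2

# Synthetic spectrum with known parameters, then a least-squares retrieval.
rng = np.random.default_rng(4)
w = np.linspace(2800, 3000, 400)                    # wavenumber axis (cm^-1)
true = (0.3, 0.8, 40.0, 2900.0, 8.0)
y = sfg_intensity(w, *true) + 0.005 * rng.standard_normal(w.size)

popt, _ = curve_fit(sfg_intensity, w, y, p0=(0.2, 0.5, 30.0, 2895.0, 10.0))
print(np.round(popt[3], 1))  # recovered resonance position, ~2900 cm^-1
```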
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies in both experiments, namely, motor imagery and emotion recognition.
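The identification step can be sketched without the wavelet stage: unmix simulated channels with ICA and flag the component most correlated with a previously recorded artifact template (all signals below are synthetic, and plain FastICA stands in for the paper's wavelet-ICA):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 2000)
neural = np.sin(2 * np.pi * 10 * t)                                # EEG-like rhythm
blink = np.exp(-(t - 1)**2 / 0.002) + np.exp(-(t - 3)**2 / 0.002)  # ocular artifact
template = blink                                                   # a priori recording

mixing = np.array([[1.0, 0.8], [0.6, 1.2], [0.9, 0.3]])
X = np.column_stack([neural, blink]) @ mixing.T                    # three "channels"

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)
corr = [abs(np.corrcoef(S[:, i], template)[0, 1]) for i in range(2)]
artifact = int(np.argmax(corr))        # component best matching the prior template

S_clean = S.copy()
S_clean[:, artifact] = 0.0
X_clean = ica.inverse_transform(S_clean)   # reconstruction without the artifact

leak = abs(np.corrcoef(X_clean[:, 0], blink)[0, 1])    # residual artifact
kept = abs(np.corrcoef(X_clean[:, 0], neural)[0, 1])   # preserved rhythm
print(round(leak, 3), round(kept, 3))
```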
Automatic contact in DYNA3D for vehicle crashworthiness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whirley, R.G.; Engelmann, B.E.
1993-07-15
This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.
NASA Astrophysics Data System (ADS)
Meksiarun, Phiranuphon; Ishigaki, Mika; Huck-Pezzei, Verena A. C.; Huck, Christian W.; Wongravee, Kanet; Sato, Hidetoshi; Ozaki, Yukihiro
2017-03-01
This study aimed to extract the paraffin component from paraffin-embedded oral cancer tissue spectra using three multivariate analysis (MVA) methods; Independent Component Analysis (ICA), Partial Least Squares (PLS) and Independent Component - Partial Least Square (IC-PLS). The estimated paraffin components were used for removing the contribution of paraffin from the tissue spectra. These three methods were compared in terms of the efficiency of paraffin removal and the ability to retain the tissue information. It was found that ICA, PLS and IC-PLS could remove the paraffin component from the spectra at almost the same level while Principal Component Analysis (PCA) was incapable. In terms of retaining cancer tissue spectral integrity, effects of PLS and IC-PLS on the non-paraffin region were significantly less than that of ICA where cancer tissue spectral areas were deteriorated. The paraffin-removed spectra were used for constructing Raman images of oral cancer tissue and compared with Hematoxylin and Eosin (H&E) stained tissues for verification. This study has demonstrated the capability of Raman spectroscopy together with multivariate analysis methods as a diagnostic tool for the paraffin-embedded tissue section.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R. Scott; Kay, Bruce D.
The desorption kinetics for benzene and cyclohexane from a graphene covered Pt(111) surface were investigated using temperature programmed desorption (TPD). The benzene desorption spectra show well-resolved monolayer and multilayer desorption peaks. The benzene monolayer TPD spectra have the same desorption peak temperature and have line shapes which are consistent with first-order desorption kinetics. For benzene coverages greater than 1 ML, the TPD spectra align on a common leading edge, which is consistent with zero-order desorption. An inversion analysis of the monolayer benzene TPD spectra yielded a desorption activation energy of 54 ± 3 kJ/mol with a prefactor of 10^(17±1) s^-1. The TPD spectra for cyclohexane also have well-resolved monolayer and multilayer desorption features. The desorption leading edges for the monolayer and the multilayer TPD spectra are aligned, indicating zero-order desorption kinetics in both cases. An Arrhenius analysis of the monolayer cyclohexane TPD spectra yielded a desorption activation energy of 53.5 ± 2 kJ/mol with a prefactor of 10^(16±1) ML s^-1.
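The inversion analysis mentioned above can be reproduced on a simulated first-order TPD trace: integrate the desorption rate equation with the benzene-like parameters, then solve the Polanyi-Wigner expression for the activation energy at each coverage. A numpy sketch (simulation details are illustrative, not the experimental procedure):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Simulate a first-order TPD trace with parameters like those reported above
# for benzene (Ea ~ 54 kJ/mol, prefactor ~ 1e17 s^-1), then invert it.
Ea_true, nu, beta = 54e3, 1e17, 1.0      # J/mol, s^-1, heating rate in K/s
T = np.linspace(120, 220, 20000)
dT = T[1] - T[0]
theta = np.empty_like(T)
theta[0] = 1.0                            # initial coverage, 1 ML
for i in range(1, T.size):
    rate = nu * theta[i - 1] * np.exp(-Ea_true / (R * T[i - 1]))
    theta[i] = max(theta[i - 1] - rate * dT / beta, 0.0)

# Inversion analysis: Ea = -R*T*ln(r / (nu*theta)), with r the desorption rate.
r = -beta * np.gradient(theta, T)
mask = (theta > 0.05) & (theta < 0.95)
Ea_inv = -R * T[mask] * np.log(r[mask] / (nu * theta[mask]))
print(round(np.mean(Ea_inv) / 1000, 1))  # recovers ~54 kJ/mol
```

In practice the prefactor is not known in advance, so the inversion is repeated over a range of assumed prefactors and checked against simulated spectra.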
On a program manifold's stability of one contour automatic control systems
NASA Astrophysics Data System (ADS)
Zumatov, S. S.
2017-12-01
A methodology for the stability analysis of one-contour automatic control systems with feedback in the presence of nonlinearities is expounded. The methodology is based on the use of the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of one-contour automatic control systems is investigated, and sufficient conditions for the absolute stability of a program manifold of such systems are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of a program manifold of systems controlling the course of a plane in autopilot mode are obtained by means of Lyapunov's second method.
Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters
NASA Astrophysics Data System (ADS)
Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun
2015-05-01
We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. "Linearly supporting" means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method, the Least Absolute Shrinkage and Selection Operator (LARSbs); third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
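The three steps can be mimicked on synthetic data with off-the-shelf tools; the sketch below uses scikit-learn's Lasso as a stand-in for LARSbs and random features as stand-ins for wavelet-packet coefficients:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Toy analogue of the scheme: sparse selection of informative coefficients,
# then a linear model for one parameter (a stand-in for T_eff).
rng = np.random.default_rng(6)
n, p = 300, 200
X = rng.standard_normal((n, p))                 # stand-ins for WP coefficients
true_idx = [10, 50, 120]                        # the genuinely informative features
param = X[:, true_idx] @ np.array([3.0, -2.0, 1.5]) + 0.05 * rng.standard_normal(n)

# Step 2: sparse feature detection on a training half.
sel = Lasso(alpha=0.1).fit(X[:150], param[:150])
features = np.flatnonzero(np.abs(sel.coef_) > 1e-3)

# Step 3: ordinary linear regression on the selected features only.
model = LinearRegression().fit(X[:150][:, features], param[:150])
mae = np.mean(np.abs(model.predict(X[150:][:, features]) - param[150:]))
print(len(features), round(mae, 3))
```

The linear final model is what makes each feature's contribution to the estimate directly interpretable.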
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.I. Rudyka; Y.E. Zingerman; K.G. Lavrov
Up-to-date mathematical methods, such as correlation analysis and expert systems, are employed in creating a model of the coking process. Automatic coking-control systems developed by Giprokoks rule out human error. At an existing coke battery, after introducing automatic control, the heating-gas consumption was reduced by ≥5%.
ERIC Educational Resources Information Center
Kurtz, Peter; And Others
This report is concerned with the implementation of two interrelated computer systems: an automatic document analysis and classification package, and an on-line interactive information retrieval system which utilizes the information gathered during the automatic classification phase. Well-known techniques developed by Salton and Dennis have been…
Locally linear embedding: dimension reduction of massive protostellar spectra
NASA Astrophysics Data System (ADS)
Ward, J. L.; Lumsden, S. L.
2016-09-01
We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in the classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
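A small scikit-learn sketch of the idea, embedding synthetic spectra that lie on a one-parameter nonlinear family (an emission line whose strength and position vary together); LLE should recover the parameter as an embedding coordinate:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 150)
s = rng.uniform(0.2, 1.0, 200)                      # hidden physical parameter
spectra = np.array([a * np.exp(-((x - 0.5 * a) ** 2) / 0.01) for a in s])
spectra += 0.005 * rng.standard_normal(spectra.shape)

emb = LocallyLinearEmbedding(n_components=2, n_neighbors=12,
                             random_state=0).fit_transform(spectra)

# Spearman-style check: some embedding coordinate orders like the parameter.
rank = lambda v: np.argsort(np.argsort(v))
rho = max(abs(np.corrcoef(rank(emb[:, i]), rank(s))[0, 1]) for i in range(2))
print(round(rho, 2))
```

Unlike PCA, the embedding is built from purely local linear reconstructions, which is what lets it unroll a curved family like this one.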
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.
2015-12-01
The results of numerical simulation of the application of principal component analysis to absorption spectra of the breath air of patients with pulmonary diseases are presented. Various methods of experimental data preprocessing are analyzed.
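As a minimal illustration of the approach (with invented spectra, not the clinical data), PCA via SVD separates two groups of synthetic absorption spectra that differ in a single band:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * sv[:n_components]
    explained = sv**2 / np.sum(sv**2)
    return scores, Vt[:n_components], explained[:n_components]

# Synthetic "breath air" absorption spectra: two groups differing in one
# absorption band (a stand-in for a disease marker compound).
rng = np.random.default_rng(8)
x = np.linspace(0, 1, 300)
base = np.exp(-((x - 0.2) ** 2) / 0.01)           # band common to everyone
marker = np.exp(-((x - 0.7) ** 2) / 0.005)        # disease-linked band
healthy = np.array([base + 0.02 * rng.standard_normal(x.size) for _ in range(25)])
patient = np.array([base + 0.5 * marker + 0.02 * rng.standard_normal(x.size)
                    for _ in range(25)])

scores, comps, ev = pca(np.vstack([healthy, patient]), 2)
gap = abs(scores[:25, 0].mean() - scores[25:, 0].mean())
print(round(gap, 2))  # the groups separate along the first principal component
```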
1995-06-01
Energy efficient, 30 and 40 watt ballasts are Rapid Start, thermally protected, automatic resetting. Class P, high or low power factor as required...BALLASTS Energy efficient, 30 ana 40 watt Rapic Start, thermally protected, automatic resetting. Class P. high power factor, CEM, sound rated A. unless...BALLASTS Energy efficient, 40 Watt Rapid Start, thermally protected, automatic resetting, Class P, high power factor, CBM, sound rated A, unless