Sample records for probe summarization algorithms

  1. Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields

    NASA Technical Reports Server (NTRS)

    Rediniotis, Othon K.

    1999-01-01

    Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow) were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The calibration algorithms were extensively tested with these probes and demonstrated excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized as follows: a "Technical Achievements" section summarizes the major achievements, followed by an assembly of journal articles produced from this project, and it ends with two manuals for the two probe calibration algorithms developed.

  2. Automatic Text Summarization for Indonesian Language Using TextTeaser

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Pasaribu, A.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    Text summarization is one solution to information overload. Reducing a text without losing its meaning not only saves reading time but also maintains the reader's understanding. One of many algorithms for summarizing text is TextTeaser. Originally, this algorithm was intended for text in English; however, because the TextTeaser algorithm does not consider the meaning of the text, we implemented it for text in the Indonesian language. The algorithm calculates four features: title feature, sentence length, sentence position and keyword frequency. We utilize TextRank, an unsupervised and language-independent text summarization algorithm, to evaluate the summarized text yielded by TextTeaser. The results show that the TextTeaser algorithm needs further improvement to obtain better accuracy.
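
    As a sketch of the four-feature scoring the abstract describes, the snippet below scores sentences by title overlap, length, position, and keyword frequency. The feature formulas and the equal weighting are illustrative assumptions, not TextTeaser's exact definitions.

    ```python
    # Illustrative TextTeaser-style sentence scoring (simplified assumptions).
    import re

    def texteaser_like_scores(title, sentences, keywords, ideal_len=20.0):
        title_words = set(re.findall(r"\w+", title.lower()))
        n = len(sentences)
        scores = []
        for i, sent in enumerate(sentences):
            words = re.findall(r"\w+", sent.lower())
            # 1. Title feature: overlap between sentence and title words.
            title_f = len(title_words & set(words)) / max(len(title_words), 1)
            # 2. Sentence length: penalize deviation from an ideal length.
            length_f = max(1.0 - abs(ideal_len - len(words)) / ideal_len, 0.0)
            # 3. Sentence position: earlier sentences score higher.
            position_f = (n - i) / n
            # 4. Keyword frequency: density of document keywords.
            keyword_f = sum(w in keywords for w in words) / max(len(words), 1)
            scores.append((title_f + length_f + position_f + keyword_f) / 4)
        return scores
    ```

    The top-scoring sentences, taken in document order, then form the extractive summary.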

  3. Dimensionality of Data Matrices with Applications to Gene Expression Profiles

    ERIC Educational Resources Information Center

    Feng, Xingdong

    2009-01-01

    Probe-level microarray data are usually stored in matrices. For a given probe set (gene), each row of the matrix corresponds to an array, and each column corresponds to a probe. Often, people summarize each array by a single gene expression level. Is one number sufficient to summarize a whole probe set for a specific gene in an array?…
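
    To make the matrix layout concrete, here is a minimal illustration of reducing a probe-set matrix (rows = arrays, columns = probes) to one number per array; the median is a common robust summary and stands in for whatever summarization a given pipeline actually uses.

    ```python
    # One-number-per-array summarization of a probe-set matrix (illustrative).
    import numpy as np

    probe_set = np.array([[7.1, 6.8, 7.4, 7.0],   # array 1, four probes
                          [8.2, 8.0, 8.5, 8.1],   # array 2
                          [6.5, 6.7, 6.4, 6.6]])  # array 3

    expression_per_array = np.median(probe_set, axis=1)  # one value per array
    print(expression_per_array)
    ```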

  4. Developing Wide-Field Spatio-Spectral Interferometry for Far-Infrared Space Applications

    NASA Technical Reports Server (NTRS)

    Leisawitz, David; Bolcar, Matthew R.; Lyon, Richard G.; Maher, Stephen F.; Memarsadeghi, Nargess; Rinehart, Stephen A.; Sinukoff, Evan J.

    2012-01-01

    Interferometry is an affordable way to bring the benefits of high resolution to space far-IR astrophysics. We summarize an ongoing effort to develop and learn the practical limitations of an interferometric technique that will enable the acquisition of high-resolution far-IR integral field spectroscopic data with a single instrument in a future space-based interferometer. This technique was central to the Space Infrared Interferometric Telescope (SPIRIT) and Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) space mission design concepts, and it will first be used on the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). Our experimental approach combines data from a laboratory optical interferometer (the Wide-field Imaging Interferometry Testbed, WIIT), computational optical system modeling, and spatio-spectral synthesis algorithm development. We summarize recent experimental results and future plans.

  5. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    PubMed Central

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess students' summaries, tasks that are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers conduct this task more effectively. Design/Results This paper proposes an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at both the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  6. A geometric approach to identify cavities in particle systems

    NASA Astrophysics Data System (ADS)

    Voyiatzis, Evangelos; Böhm, Michael C.; Müller-Plathe, Florian

    2015-11-01

    The implementation of a geometric algorithm to identify cavities in particle systems in an open-source Python program is presented. The algorithm makes use of Delaunay space tessellation. The Python software is based on platform-independent tools, leading to a portable program. Its successful execution provides information concerning the accessible volume fraction of the system, the size and shape of the cavities, and the group of atoms forming each of them. The program can be easily incorporated into the LAMMPS software. An advantage of the present algorithm is that no a priori assumption about the cavity shape has to be made. As an example, the cavity size and shape distributions in a polyethylene melt system are presented for three spherical probe particles. This paper also serves as an introductory manual to the script. It summarizes the algorithm, its implementation, the required user-defined parameters, and the format of the input and output files. Additionally, we demonstrate possible applications of our approach and compare its capabilities with those of well-documented cavity size estimators.
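
    A rough sketch of the core geometric test is given below: build the Delaunay tessellation, compute each tetrahedron's circumsphere, and flag it as hosting a cavity when a spherical probe fits. The fit criterion, the uniform atom radius, and the non-periodic box are simplifying assumptions; the published program also handles cavity grouping and shape analysis.

    ```python
    # Delaunay-based cavity detection sketch (non-periodic, uniform atom radius).
    import numpy as np
    from scipy.spatial import Delaunay

    def cavity_tetrahedra(points, atom_radius, probe_radius):
        tri = Delaunay(points)
        hosts = []
        for simplex in tri.simplices:
            p = points[simplex]                  # 4 vertices of a tetrahedron
            A = 2.0 * (p[1:] - p[0])             # circumcenter linear system
            b = np.sum(p[1:]**2 - p[0]**2, axis=1)
            try:
                center = np.linalg.solve(A, b)
            except np.linalg.LinAlgError:
                continue                          # degenerate (flat) simplex
            circumradius = np.linalg.norm(center - p[0])
            # The probe fits if the gap between the nearest atom surface and
            # the circumcenter is at least the probe radius.
            if circumradius >= atom_radius + probe_radius:
                hosts.append((simplex, center, circumradius))
        return hosts

    # rng = np.random.default_rng(0)
    # print(len(cavity_tetrahedra(rng.random((200, 3)) * 20.0, 1.0, 1.4)))
    ```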

  7. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive identification and control algorithms. The second part concentrates on the validation of the adaptive algorithms by applying them to a vibrating-beam testbed. Extensions to flow control problems are discussed.

  8. Pioneer Jupiter orbiter probe mission 1980, probe description

    NASA Technical Reports Server (NTRS)

    Defrees, R. E.

    1974-01-01

    The adaptation of the Saturn-Uranus Atmospheric Entry Probe (SUAEP) to a Jupiter entry probe is summarized. This report is extracted from a comprehensive study of Jovian missions, atmospheric model definitions and probe subsystem alternatives.

  9. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
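
    For orientation, the snippet below computes conventional non-nulling five-hole pressure coefficients from the five port pressures. These are the textbook definitions, assumed here for illustration; the paper's actual algorithm replaces pitch and yaw angles with unit-vector components and exploits symmetry rules, which are not reproduced here.

    ```python
    # Conventional non-nulling five-hole coefficients (illustrative, not the
    # paper's symmetry-based formulation). Assumes the pseudo-dynamic
    # pressure q is nonzero, i.e. the probe is not badly misaligned.
    import numpy as np

    def five_hole_coefficients(p):
        """p = [p_center, p_up, p_down, p_left, p_right] port pressures."""
        p_bar = np.mean(p[1:])           # mean of the four outer ports
        q = p[0] - p_bar                 # pseudo-dynamic pressure
        c_pitch = (p[1] - p[2]) / q      # pitch-plane coefficient
        c_yaw = (p[3] - p[4]) / q        # yaw-plane coefficient
        return c_pitch, c_yaw
    ```

    Calibration then maps these coefficients to flow direction, total pressure and static pressure, via Taylor-series fits in the paper's formulation.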

  10. ParseCNV integrative copy number variation association software with quality tracking

    PubMed Central

    Glessner, Joseph T.; Li, Jin; Hakonarson, Hakon

    2013-01-01

    A number of copy number variation (CNV) calling algorithms exist; however, comprehensive software tools for CNV association studies are lacking. We describe ParseCNV, unique software that takes CNV calls and creates probe-based statistics for CNV occurrence in both case–control designs and family-based studies, addressing both de novo and inheritance events, which are then summarized based on CNV regions (CNVRs). CNVRs are defined in a dynamic manner to allow for complex CNV overlap while maintaining a precise association region. Using this approach, we avoid the failure-to-converge and non-monotonic curve-fitting weaknesses of programs such as CNVtools and CNVassoc; and although PLINK is easy to use, it only provides combined CNV-state probe-based statistics, not state-specific CNVRs. Existing CNV association methods do not provide any quality-tracking information to filter confident associations, a key issue that is fully addressed by ParseCNV. In addition, uncertainty in the CNV calls underlying CNV associations is evaluated to verify significant results, including CNV overlap profiles, genomic context, the number of probes supporting the CNV and single-probe intensities. When the optimal quality control parameters are followed using ParseCNV, 90% of CNVs are validated by polymerase chain reaction, an often problematic stage because of inadequate review of significant associations. ParseCNV is freely available at http://parsecnv.sourceforge.net. PMID:23293001

  11. ParseCNV integrative copy number variation association software with quality tracking.

    PubMed

    Glessner, Joseph T; Li, Jin; Hakonarson, Hakon

    2013-03-01

    A number of copy number variation (CNV) calling algorithms exist; however, comprehensive software tools for CNV association studies are lacking. We describe ParseCNV, unique software that takes CNV calls and creates probe-based statistics for CNV occurrence in both case-control designs and family-based studies, addressing both de novo and inheritance events, which are then summarized based on CNV regions (CNVRs). CNVRs are defined in a dynamic manner to allow for complex CNV overlap while maintaining a precise association region. Using this approach, we avoid the failure-to-converge and non-monotonic curve-fitting weaknesses of programs such as CNVtools and CNVassoc; and although PLINK is easy to use, it only provides combined CNV-state probe-based statistics, not state-specific CNVRs. Existing CNV association methods do not provide any quality-tracking information to filter confident associations, a key issue that is fully addressed by ParseCNV. In addition, uncertainty in the CNV calls underlying CNV associations is evaluated to verify significant results, including CNV overlap profiles, genomic context, the number of probes supporting the CNV and single-probe intensities. When the optimal quality control parameters are followed using ParseCNV, 90% of CNVs are validated by polymerase chain reaction, an often problematic stage because of inadequate review of significant associations. ParseCNV is freely available at http://parsecnv.sourceforge.net.

  12. Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun

    2018-04-01

    The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives, but it also causes some problems: the probe flow introduced by the P-IA-RWA algorithm can lead to competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time that network resources are held by the probe flow. It segments the candidate path appropriately for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so that appropriate action can be taken to avoid unnecessary probe flow. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.

  13. Text Summarization Model based on Maximum Coverage Problem and its Variant

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We discuss text summarization in terms of the maximum coverage problem and its variant. To solve the optimization problem, we applied several decoding algorithms, including some never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. We conducted comparative experiments. On the basis of the experimental results, we also augmented the summarization model so that it takes into account relevance to the document cluster. Through experiments, we showed that the augmented model is at least comparable to the best-performing method of DUC'04.
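
    The greedy decoder with a performance guarantee mentioned above can be sketched as follows: repeatedly add the sentence with the best uncovered-concepts-per-length ratio until the budget is exhausted. Representing concepts as sets of content words is an assumption for illustration; the density-greedy rule carries the classic (1 - 1/e) approximation guarantee for maximum coverage.

    ```python
    # Greedy maximum-coverage summarization sketch.
    def greedy_summary(sentences, budget):
        """sentences: list of (text, set_of_concepts, length) tuples."""
        covered, chosen, used = set(), [], 0
        remaining = list(sentences)
        while remaining:
            # Pick the sentence covering the most new concepts per unit length.
            best = max(remaining,
                       key=lambda s: len(s[1] - covered) / max(s[2], 1))
            remaining.remove(best)
            if used + best[2] <= budget and best[1] - covered:
                chosen.append(best[0])
                covered |= best[1]
                used += best[2]
        return chosen
    ```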

  14. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.

    PubMed

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding its mechanism of action is important for downstream analysis. In the Illumina Infinium HumanMethylation450K array, there are tens of probes associated with each gene. Given the methylation intensities of all these probes, it is necessary to determine which of them are most representative of the gene-centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene-centric DNA methylation from probe-level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF. We evaluated all methods based on the predictive power of the selected probes for mRNA expression levels and found that K-nearest neighbors classification using the sequential forward selection algorithm performed better than the other algorithms on all metrics. We also observed that the transcriptional activities of certain genes were more sensitive to DNA methylation changes than those of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
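
    A minimal sketch of sequential forward selection wrapped around a K-nearest-neighbors model is shown below. For simplicity it scores candidate probe sets by cross-validated regression of expression values, a stand-in for the paper's classification setup; the data layout and parameters are illustrative.

    ```python
    # Sequential forward selection of probes with a KNN model (illustrative).
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import cross_val_score

    def forward_select_probes(X, y, max_probes=5):
        """X: samples x probes methylation matrix; y: mRNA expression."""
        selected, best_score = [], -np.inf
        while len(selected) < max_probes:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            scored = []
            for j in candidates:
                cols = selected + [j]
                s = cross_val_score(KNeighborsRegressor(n_neighbors=3),
                                    X[:, cols], y, cv=5).mean()
                scored.append((s, j))
            s, j = max(scored)
            if s <= best_score:
                break                  # no candidate improves the score
            best_score = s
            selected.append(j)
        return selected, best_score
    ```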

  15. Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra

    NASA Astrophysics Data System (ADS)

    Wang, Hefeng; Fan, Heng; Li, Fuli

    2014-01-01

    When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.

  16. Recent advances in high-performance fluorescent and bioluminescent RNA imaging probes.

    PubMed

    Xia, Yuqiong; Zhang, Ruili; Wang, Zhongliang; Tian, Jie; Chen, Xiaoyuan

    2017-05-22

    RNA plays an important role in life processes. Imaging of messenger RNAs (mRNAs) and micro-RNAs (miRNAs) not only allows us to learn the formation and transcription of mRNAs and the biogenesis of miRNAs involved in various life processes, but also helps in detecting cancer. High-performance RNA imaging probes greatly expand our view of life processes and enhance the cancer detection accuracy. In this review, we summarize the state-of-the-art high-performance RNA imaging probes, including exogenous probes that can image RNA sequences with special modification and endogeneous probes that can directly image endogenous RNAs without special treatment. For each probe, we review its structure and imaging principle in detail. Finally, we summarize the application of mRNA and miRNA imaging probes in studying life processes as well as in detecting cancer. By correlating the structures and principles of various probes with their practical uses, we compare different RNA imaging probes and offer guidance for better utilization of the current imaging probes and the future design of higher-performance RNA imaging probes.

  17. An entangling-probe attack on Shor's algorithm for factorization

    NASA Astrophysics Data System (ADS)

    Azuma, Hiroo

    2018-02-01

    We investigate how to attack Shor's quantum algorithm for factorization with an entangling probe. We show that an attacker can steal an exact solution of Shor's algorithm outside an institute where the quantum computer is installed if he replaces its initialized quantum register with entangled qubits, namely the entangling probe. He can apply arbitrary local operations to his own probe. Moreover, we assume that there is an unauthorized person who helps the attacker to commit a crime inside the institute. He tells garbage data obtained from measurements of the quantum register to the attacker secretly behind a legitimate user's back. If the attacker succeeds in cracking Shor's algorithm, the legitimate user obtains a random answer and does not notice the attacker's illegal acts. We discuss how to detect the attacker. Finally, we estimate a probability that the quantum algorithm inevitably makes an error, of which the attacker can take advantage.

  18. Database architecture and query structures for probe data processing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report summarizes findings and implementations of probe vehicle data collection based on Bluetooth MAC address matching technology. Probe vehicle travel time data are studied in the following field deployment case studies: analysis of traffic ch...

  19. Capturing User Reading Behaviors for Personalized Document Summarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations captured during the user's past reading activities. We compare the performance of our algorithm with that of several peer algorithms and software packages. The results of our comparative study show that our algorithm produces personalized document summaries superior to those of all the other methods, in that the summaries it generates better satisfy a user's personal preferences.

  20. Outer planet entry probe system study. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    General mission considerations and a science prospectus, which are of a general nature applying to several or all planetary applications, are presented. Five probe systems are defined: a nominal Jupiter probe system, a Jupiter probe-dedicated alternative probe system, a Jupiter spacecraft radiation-compatible alternative probe system, a Saturn probe system, and Saturn probe applicability to Uranus. Parametric analyses are summarized, first for mission analysis of a general nature and then for specific missions to Jupiter, Saturn, Uranus, and Neptune. The program is also discussed from the viewpoint of hardware availability and the aspect of commonality.

  1. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm correcting sizing errors in the FSSP, developed by Lock and Hovenac (1989), was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  2. Chinese Text Summarization Algorithm Based on Word2vec

    NASA Astrophysics Data System (ADS)

    Chengzhang, Xu; Dan, Liu

    2018-02-01

    In order to extract sentences that cover the topic of a Chinese article, a Chinese text summarization algorithm based on Word2vec is presented in this paper. Words in an article are represented as vectors trained by Word2vec; the weight of each word, the sentence vectors and the weight of each sentence are calculated by combining word-sentence relationships with a graph-based ranking model. Finally, the summary is generated on the basis of the final sentence vectors and the final sentence weights. Experimental results on real datasets show that the proposed algorithm achieves better summarization quality than TF-IDF and TextRank.
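
    The pipeline can be sketched as follows: average word vectors into sentence vectors, build a cosine-similarity graph, and rank sentences with a PageRank-style iteration. The uniform word weights and this specific ranking update are assumptions; the paper combines word-sentence relations with its own graph model.

    ```python
    # Sentence ranking from word vectors (illustrative pipeline sketch).
    import numpy as np

    def summarize(sentences, word_vecs, top_k=2, d=0.85, iters=50):
        dim = next(iter(word_vecs.values())).shape
        vecs = []
        for sent in sentences:
            ws = [word_vecs[w] for w in sent.split() if w in word_vecs]
            vecs.append(np.mean(ws, axis=0) if ws else np.zeros(dim))
        V = np.array(vecs)
        norms = np.linalg.norm(V, axis=1, keepdims=True) + 1e-12
        S = (V / norms) @ (V / norms).T            # cosine similarity graph
        np.fill_diagonal(S, 0.0)
        S = S / (S.sum(axis=1, keepdims=True) + 1e-12)  # row-stochastic
        w = np.ones(len(sentences)) / len(sentences)
        for _ in range(iters):                     # TextRank-style iteration
            w = (1 - d) / len(sentences) + d * (S.T @ w)
        # Return the top-k sentences in their original order.
        return [sentences[i] for i in sorted(np.argsort(w)[-top_k:])]
    ```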

  3. SU-G-JeP3-05: Geometry Based Transperineal Ultrasound Probe Positioning for Image Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps, S; With, P de; Verhaegen, F

    2016-06-15

    Purpose: The use of ultrasound (US) imaging in radiotherapy is not widespread, primarily due to the need for skilled operators performing the scans. Automation of probe positioning has the potential to remove this need and minimize operator dependence. We introduce an algorithm for obtaining a US probe position that allows good visualization of anatomical structures based on clinical requirements. The first application is to 4D transperineal US images of prostate cancer patients. Methods: The algorithm calculates the probe position and orientation using anatomical information provided by a reference CT scan, which is always available in radiotherapy workflows. As an initial test, we apply the algorithm to a CIRS pelvic US phantom to obtain a set of possible probe positions. Subsequently, five of these positions are randomly chosen and used to acquire actual US volumes of the phantom. Visual inspection of these volumes reveals whether the whole prostate and the adjacent edges of the bladder and rectum are fully visualized, as clinically required. In addition, structure positions in the acquired US volumes are compared to the predictions of the algorithm. Results: All acquired volumes fulfill the clinical requirements specified above. A preliminary quantitative evaluation was performed on thirty consecutive slices of two volumes, on which the structures are easily recognizable. The mean absolute distances (MAD) between actual anatomical structure positions and the positions predicted by the algorithm were calculated. This resulted in a MAD of 2.4±0.4 mm for the prostate, 3.2±0.9 mm for the bladder and 3.3±1.3 mm for the rectum. Conclusion: Visual inspection and quantitative evaluation show that the algorithm is able to propose probe positions that fulfill all clinical requirements. The obtained MAD is on average 2.9 mm. However, during the evaluation we assumed no errors in structure segmentation and probe positioning. In future steps, accurate estimation of these errors will allow a better evaluation of the achieved accuracy.

  4. Spectrally And Temporally Resolved Low-Light Level Video Microscopy

    NASA Astrophysics Data System (ADS)

    Wampler, John E.; Furukawa, Ruth; Fechheimer, Marcus

    1989-12-01

    The IDG low-light video microscope system was designed to aid studies of the localization of subcellular luminescence sources and of stimulus/response coupling in single living cells using luminescent probes. Much of the motivation for the design of this instrument system came from the pioneering efforts of Dr. Reynolds (Reynolds, Q. Rev. Biophys. 5, 295-347; Reynolds and Taylor, Bioscience 30, 586-592), who showed the value of intensified video camera systems for the detection and localization of fluorescence and bioluminescence signals from biological tissues. Our instrument system has essentially two roles: (1) localization and quantitation of very weak bioluminescence signals, and (2) quantitation of intracellular environmental characteristics such as pH and calcium ion concentrations using fluorescent and bioluminescent probes. The instrument system exhibits an over one-million-fold operating range, allowing visualization and enhancement of quantum-limited images with quantum-limited response, spectral analysis of fluorescence signals, and transmitted-light imaging. The computer control of the system implements rapid switching between light regimes, spatially resolved spectral scanning, and digital data processing for spectral shape analysis and for detailed analysis of the statistical distribution of single-cell measurements. The system design and the software algorithms used by the system are summarized. These design criteria are illustrated with examples taken from studies of bioluminescence, applications of bioluminescence to the study of developmental processes and gene expression in single living cells, and applications of fluorescent probes to the study of stimulus/response coupling in living cells.

  5. Application of travel time information for traffic management.

    DOT National Transportation Integrated Search

    2012-03-01

    This report summarizes findings and implementations of probe vehicle data collection based on Bluetooth MAC address matching : technology. Probe vehicle travel time data are studied in the following field deployment case studies: analysis of traffic ...

  6. A Survey on Sentiment Classification in Face Recognition

    NASA Astrophysics Data System (ADS)

    Qian, Jingyu

    2018-01-01

    Face recognition has been an important topic in both industry and academia for a long time. K-means clustering, autoencoders, and convolutional neural networks, each representing a distinct design idea for face recognition methods, are three popular algorithms for face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper focuses on one specific face recognition problem: sentiment classification from images. Three different algorithms for sentiment classification are summarized, including k-means clustering, autoencoders, and convolutional neural networks. An experiment applying these algorithms to a specific dataset of human faces is conducted to illustrate how they are applied and how accurate they are. Finally, the three algorithms are compared based on the accuracy results.

  7. An overview of smart grid routing algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. The analysis covers two classes of routing algorithms, including clustering routing algorithms, and discusses the advantages, disadvantages, and applicability of typical algorithms in each class.

  8. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    NASA Astrophysics Data System (ADS)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem that hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for the existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  9. Constrained independent component analysis approach to nonobtrusive pulse rate measurements.

    PubMed

    Tsouri, Gill R; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem that hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for the existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  10. System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft

    NASA Technical Reports Server (NTRS)

    Pullen, Samuel P.; Parkinson, Bradford W.

    1994-01-01

    This paper discusses the application of evolutionary random-search algorithms (simulated annealing and genetic algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.

  11. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is unbalanced, which quickly overloads the already heavily burdened manager node and in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which wastes the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contains most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789

  12. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    PubMed

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is unbalanced, which quickly overloads the already heavily burdened manager node and in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which wastes the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contains most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
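
    The two ideas in these records, random selection of probe stations and an adaptive probing frequency, can be sketched in a few lines. The halving/backoff adjustment rule below is an illustrative assumption, not the paper's exact rule.

    ```python
    # Simple-random-sampling probe station selection with adaptive probing
    # interval (illustrative rule, names are placeholders).
    import random

    def pick_probe_stations(node_ids, k):
        return random.sample(node_ids, k)     # simple random sample

    def next_probe_interval(interval, faults_found, lo=5.0, hi=120.0):
        # Probe more often when faults appear, back off when the net is healthy.
        interval = interval / 2 if faults_found else interval * 1.5
        return min(max(interval, lo), hi)
    ```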

  13. Iterative methods for plasma sheath calculations: Application to spherical probe

    NASA Technical Reports Server (NTRS)

    Parker, L. W.; Sullivan, E. C.

    1973-01-01

    The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.

  14. Survey of PRT Vehicle Management Algorithms

    DOT National Transportation Integrated Search

    1974-01-01

    The document summarizes the results of a literature survey of state of the art vehicle management algorithms applicable to Personal Rapid Transit Systems(PRT). The surveyed vehicle management algorithms are organized into a set of five major componen...

  15. PhylArray: phylogenetic probe design algorithm for microarray.

    PubMed

    Militon, Cécile; Rimour, Sébastien; Missaoui, Mohieddine; Biderre, Corinne; Barra, Vincent; Hill, David; Moné, Anne; Gagne, Geneviève; Meier, Harald; Peyretaillade, Eric; Peyret, Pierre

    2007-10-01

    Microbial diversity is still largely unknown in most environments, such as soils. In order to gain access to this microbial 'black box', the development of powerful tools such as microarrays is necessary. However, the reliability of this approach relies on probe efficiency, in particular sensitivity, specificity and explorative power, in order to obtain an image of the microbial communities that is close to reality. We propose a new probe design algorithm that is able to select microarray probes targeting SSU rRNA at any phylogenetic level. This original approach, implemented in a program called 'PhylArray', designs a combination of degenerate and non-degenerate probes for each target taxon. Comparative experimental evaluations indicate that probes designed with PhylArray yield higher sensitivity and specificity than those designed by conventional approaches. Applying the combined PhylArray/GoArrays strategy helps to optimize the hybridization performance of short probes. Finally, hybridizations with environmental targets have shown that the PhylArray strategy can draw attention even to previously unknown bacteria.

  16. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of measured spectral reflectance by removing the spectral broadening caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter that uses multicolor LEDs as probing light sources, and it is also applicable to the case where the probing LEDs have different bandwidths in different spectral ranges, to which the powerful deconvolution method cannot be applied.
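
    One plausible reading of "an interpolating method with an iterative calculation" is a Van Cittert-style deconvolution, sketched below: the estimate is repeatedly corrected by the difference between the measurement and the re-broadened estimate. This is a named stand-in under stated assumptions, not necessarily the authors' iteration.

    ```python
    # Van Cittert iterative removal of a known broadening kernel (stand-in).
    import numpy as np

    def van_cittert(measured, kernel, iters=30, relax=0.5):
        """measured: sampled reflectance; kernel: normalized LED lineshape."""
        estimate = measured.copy()
        for _ in range(iters):
            reblurred = np.convolve(estimate, kernel, mode="same")
            estimate = estimate + relax * (measured - reblurred)
        return np.clip(estimate, 0.0, None)   # reflectance is non-negative
    ```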

  17. A localization algorithm of adaptively determining the ROI of the reference circle in image

    NASA Astrophysics Data System (ADS)

    Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    Aiming to position detection probes underwater accurately, this paper proposes a method based on computer vision that can effectively solve this problem. The idea is as follows. First, because the heat tube appears approximately circular in the image, we find a circle whose physical location is well known in the image and set it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture and adjust the steering gear according to this offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively determine the region of interest (ROI) of the reference circle; in this region there is only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using the ROI is only 58.76%, whereas the proposed algorithm achieves 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
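
    The detection step can be sketched with OpenCV's Hough circle transform restricted to a region of interest assumed to contain exactly one circle. The adaptive ROI rule is the paper's contribution and is only stubbed here as a given rectangle; the Hough parameters are illustrative.

    ```python
    # Hough-circle detection inside a given ROI (the adaptive ROI rule itself
    # is stubbed; roi = (x, y, w, h) is a placeholder input).
    import cv2
    import numpy as np

    def find_reference_circle(gray_image, roi):
        """gray_image: 8-bit single-channel image; returns (cx, cy, r) or None."""
        x, y, w, h = roi                       # ROI chosen so one circle remains
        patch = gray_image[y:y + h, x:x + w]
        circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=min(w, h),   # at most one detection
                                   param1=100, param2=30,
                                   minRadius=5, maxRadius=min(w, h) // 2)
        if circles is None:
            return None
        cx, cy, r = circles[0][0]
        return (x + cx, y + cy, r)             # back to full-image coordinates
    ```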

  18. An ultrashort-pulse reconstruction software: GROG, applied to the FLAME laser system

    NASA Astrophysics Data System (ADS)

    Galletti, Mario

    2016-03-01

    The GRENOUILLE traces of FLAME probe-line pulses (60 mJ, 10 mJ after compression, 70 fs, 1 cm FWHM, 10 Hz) were acquired in the FLAME Front End Area (FFEA) at the Laboratori Nazionali di Frascati (LNF), Istituto Nazionale di Fisica Nucleare (INFN). The complete characterization of the laser pulse parameters was made using a new algorithm, GRenouille/FrOG (GROG); a characterization with a commercial algorithm, QuickFrog, was also made. The temporal and spectral parameters obtained by the two algorithms were in excellent agreement. In this experimental campaign the probe line of FLAME was completely characterized, and it was shown that GROG, the developed algorithm, performs as well as the QuickFrog algorithm on this class of pulses.

  19. Measurements With a Split-Fiber Probe in Complex Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2004-01-01

    A split-fiber probe was used to acquire unsteady data in a research compressor. A calibration method was devised for the split-fiber probe, and a new algorithm was developed to decompose split-fiber probe signals into velocity magnitude and direction. The algorithm is based on the minimum value of a merit function built over the entire range of flow velocities for which the probe was calibrated. The split-fiber probe performance and signal decomposition were first verified in a free-jet facility by comparing data from three thermo-anemometric probes, namely a single-wire, a single-fiber, and the split-fiber probe. All three probes performed extremely well as far as velocity magnitude was concerned. However, there are differences in the peak values of measured velocity unsteadiness in the jet shear layer: the single-wire probe indicates the highest unsteadiness level, followed closely by the split-fiber probe, while the single-fiber probe indicates a noticeably lower level. Experiments in the NASA Low Speed Axial Compressor facility revealed similar results: the mean velocities agreed well, and the differences in velocity unsteadiness are similar to the free-jet case. The reason for these discrepancies lies in the different frequency-response characteristics of the probes used; the single-fiber probe has the slowest frequency response. In summary, the split-fiber probe worked reliably during the entire program, and its time-averaged data closely followed data acquired by conventional pneumatic probes.
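
    A minimal sketch of merit-function decomposition over a calibration table follows: for a measured voltage pair from the two films, pick the calibrated (velocity, direction) cell that minimizes the merit function. A sum of squared voltage residuals is assumed here; the paper defines its own merit function.

    ```python
    # Lookup of (velocity, direction) by merit-function minimization over a
    # calibration table (residual definition is an assumption).
    import numpy as np

    def decompose(e1, e2, cal_u, cal_angle, cal_e1, cal_e2):
        """cal_e1, cal_e2: len(cal_u) x len(cal_angle) calibration voltages."""
        merit = (cal_e1 - e1) ** 2 + (cal_e2 - e2) ** 2
        i, j = np.unravel_index(np.argmin(merit), merit.shape)
        return cal_u[i], cal_angle[j]          # velocity magnitude, direction
    ```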

  20. A survey of gas-side fouling measuring devices

    NASA Technical Reports Server (NTRS)

    Marner, W. J.; Henslee, S. P.

    1984-01-01

    A survey of measuring devices, or probes, used to investigate gas-side fouling was carried out. Five different types of measuring devices are identified and discussed: heat flux meters, mass accumulation probes, optical devices, deposition probes, and acid condensation probes. A total of 32 different probes are described in detail and summarized in matrix or tabular form. The important considerations of combustion gas characterization and deposit analysis are also given significant attention. The results show that considerable work has been done in the development of gas-side fouling probes. However, it is clear that the design, construction, and testing of a durable, versatile probe capable of monitoring on-line fouling resistances remains a formidable task.

  1. Hybridization properties of long nucleic acid probes for detection of variable target sequences, and development of a hybridization prediction algorithm

    PubMed Central

    Öhrmalm, Christina; Jobs, Magnus; Eriksson, Ronnie; Golbob, Sultan; Elfaitouri, Amal; Benachenhou, Farid; Strømme, Maria; Blomberg, Jonas

    2010-01-01

    One of the main problems in nucleic acid-based techniques for the detection of infectious agents, such as influenza viruses, is nucleic acid sequence variation. DNA probes, 70 nt long, some including the nucleotide analog deoxyribose-inosine (dInosine), were analyzed for hybridization tolerance to different amounts and distributions of mismatching bases, e.g. synonymous mutations, in target DNA. Microsphere-linked 70-mer probes were hybridized in 3M TMAC buffer to biotinylated single-stranded (ss) DNA for subsequent analysis in a Luminex® system. Mismatches that interrupted contiguous matching stretches of 6 nt or longer had a strong impact on hybridization; contiguous matching stretches are more important than the same number of matching nucleotides separated by mismatches into several regions. dInosine, but not 5-nitroindole, substitutions at mismatching positions stabilized hybridization remarkably well, comparably to N (4-fold) wobbles in the same positions. In contrast to shorter probes, 70-nt probes with judiciously placed dInosine substitutions and/or wobble positions were remarkably mismatch tolerant, with preserved specificity. An algorithm, NucZip, was constructed to model the nucleation and zipping phases of hybridization, integrating both local and distant binding contributions. It predicted hybridization more exactly than previous algorithms, and it has the potential to guide the design of variation-tolerant yet specific probes. PMID:20864443

  2. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    PubMed Central

    Ramanujam, Nedunchelian; Kaliappan, Manivannan

    2016-01-01

    Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they still have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach combined with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered structure, which yields a more coherent-looking summary, and the approach extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate word scores from word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm, and the results are compared with the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method yields better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
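
    The two ingredients, a Naïve Bayes sentence classifier and timestamp ordering, can be sketched as below. The training labels, features, and candidate representation are placeholders for illustration; the paper's word-scoring strategy is not reproduced.

    ```python
    # Naive Bayes sentence selection plus timestamp ordering (illustrative).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def summarize(train_sents, train_labels, candidates):
        """candidates: list of (doc_id, position_in_doc, sentence) tuples."""
        vec = CountVectorizer()
        clf = MultinomialNB().fit(vec.fit_transform(train_sents), train_labels)
        picked = [c for c in candidates
                  if clf.predict(vec.transform([c[2]]))[0] == 1]
        # Timestamp strategy: order the chosen sentences by (document, position)
        # so the summary reads in a coherent, chronological order.
        return [s for _, _, s in sorted(picked)]
    ```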

  3. Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data

    NASA Technical Reports Server (NTRS)

    Zion, P. M.

    1983-01-01

    The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.

  4. Application of Self Nulling Eddy Current Probe Technique to the Detection of Fatigue Crack Initiation and Control of Test Procedures

    NASA Technical Reports Server (NTRS)

    Namkung, M.; Nath, S.; Wincheski, B.; Fulton, J. P.

    1994-01-01

    A major part of fracture mechanics is concerned with studying the initiation and propagation of fatigue cracks. This typically requires constant monitoring of crack growth during fatigue cycles and knowledge of the precise location of the crack tip at any given time. One technique currently available for measuring fatigue crack length is the potential drop method. The method, however, may be inaccurate if the direction of crack growth deviates considerably from what was assumed initially or if the curvature of the crack becomes significant. Another popular approach is to optically view the crack with a high-magnification microscope, but this requires a person to constantly monitor it. The proposed technique uses an automated scheme to eliminate the need for constant human monitoring. Another technique under development elsewhere digitizes an optical image of the test specimen surface and then applies a pattern recognition algorithm to locate the crack tip. A previous publication showed that the self-nulling eddy current probe successfully tracked a simulated crack in an aluminum sample; this was the impetus to develop an online, real-time crack monitoring system. An automated system has been developed that includes a two-axis scanner mounted on the tensile testing machine, the probe and its instrumentation, and a personal computer (PC) to communicate with and control all the parameters. The system software controls the testing parameters as well as monitoring the fatigue crack as it propagates. This paper discusses the experimental setup in detail and demonstrates its capabilities. A three-dimensional finite element model is utilized to model the magnetic field distribution due to the probe and how the probe voltage changes as it scans the crack. Experimental data from the probe for different samples under zero load, static load and high-cycle fatigue load are discussed. The final section summarizes the major accomplishments of the present work, future R&D needs, and the advantages and disadvantages of using this system in the laboratory and in the field.

  5. Preprocessing of gene expression data by optimally robust estimators

    PubMed Central

    2010-01-01

    Background The preprocessing of gene expression data obtained from several platforms routinely includes the aggregation of multiple raw signal intensities to one expression value. Examples are the computation of a single expression measure based on the perfect match (PM) and mismatch (MM) probes for the Affymetrix technology, the summarization of bead level values to bead summary values for the Illumina technology or the aggregation of replicated measurements in the case of other technologies including real-time quantitative polymerase chain reaction (RT-qPCR) platforms. The summarization of technical replicates is also performed in other "-omics" disciplines like proteomics or metabolomics. Preprocessing methods like MAS 5.0, Illumina's default summarization method, RMA, or VSN show that the use of robust estimators is widely accepted in gene expression analysis. However, the selection of robust methods seems to be mainly driven by their high breakdown point and not by efficiency. Results We describe how optimally robust radius-minimax (rmx) estimators, i.e. estimators that minimize an asymptotic maximum risk on shrinking neighborhoods about an ideal model, can be used for the aggregation of multiple raw signal intensities to one expression value for Affymetrix and Illumina data. With regard to the Affymetrix data, we have implemented an algorithm which is a variant of MAS 5.0. Using datasets from the literature and Monte-Carlo simulations we provide some reasoning for assuming approximate log-normal distributions of the raw signal intensities by means of the Kolmogorov distance, at least for the discussed datasets, and compare the results of our preprocessing algorithms with the results of Affymetrix's MAS 5.0 and Illumina's default method. The numerical results indicate that when using rmx estimators an accuracy improvement of about 10-20% is obtained compared to Affymetrix's MAS 5.0 and about 1-5% compared to Illumina's default method. The improvement is also visible in the analysis of technical replicates where the reproducibility of the values (in terms of Pearson and Spearman correlation) is increased for all Affymetrix and almost all Illumina examples considered. Our algorithms are implemented in the R package named RobLoxBioC which is publicly available via CRAN, The Comprehensive R Archive Network (http://cran.r-project.org/web/packages/RobLoxBioC/). Conclusions Optimally robust rmx estimators have a high breakdown point and are computationally feasible. They can lead to a considerable gain in efficiency for well-established bioinformatics procedures and thus, can increase the reproducibility and power of subsequent statistical analysis. PMID:21118506

  6. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological intelligence-related methods. An SCM system is adaptive, dynamic, open, and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes recent methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms.

  7. Modeling of Biological Intelligence for SCM System Optimization

    PubMed Central

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological intelligence-related methods. An SCM system is adaptive, dynamic, open, and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes recent methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  8. Research On Vehicle-Based Driver Status/Performance Monitoring; Development, Validation, And Refinement Of Algorithms For Detection Of Driver Drowsiness, Final Report

    DOT National Transportation Integrated Search

    1994-12-01

    This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...

  9. Outer planet entry probe system study. Volume 4: Common Saturn/Uranus probe studies

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The results of a common scientific probe study to explore the atmospheres of Saturn and Uranus are summarized. This was a three-month follow-on effort to the Outer Planet Entry Probe System study. The report presents: (1) a summary, conclusions and recommendations of this study, (2) parametric analyses conducted to support the two system definitions, (3) a common Saturn/Uranus probe system definition using the Science Advisory Group's exploratory payload, and (4) a common Saturn/Uranus probe system definition using an expanded science complement. Each of the probe system definitions consists of detailed discussions of the mission, science, system and subsystems including telecommunications, data handling, power, pyrotechnics, attitude control, structures, propulsion, thermal control and probe-to-spacecraft integration. References are made to the contents of the first three volumes where it is feasible to do so.

  10. Summarizing Simulation Results using Causally-relevant States

    PubMed Central

    Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth

    2016-01-01

    As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally-relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620
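    The paper's method rests on algorithmic statistics and the gap statistic; as a loose, hypothetical approximation of the underlying idea, the sketch below scores each observed state transition by its mutual information with the final outcome, so that high-scoring transitions can be kept as the causally relevant ones. The trajectory and outcome formats are assumptions of this sketch, not the paper's.

```python
import numpy as np

def mutual_info(x, y):
    # Mutual information between two discrete 1-D arrays, in nats.
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def transition_relevance(trajectories, outcomes):
    # Score each state transition by its mutual information with the
    # end-of-simulation outcome; high scores mark transitions that are
    # (statistically) relevant to where the simulation ends up.
    traj_sets = [set(zip(tr, tr[1:])) for tr in trajectories]
    outcomes = np.asarray(outcomes)
    all_transitions = set().union(*traj_sets)
    return {t: mutual_info(np.array([t in s for s in traj_sets]), outcomes)
            for t in all_transitions}

# Toy usage: two trajectories over states A/B/C with binary outcomes.
scores = transition_relevance([["A", "B", "C"], ["A", "C"]], [1, 0])
```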

  11. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By exploiting the rapid phase solution provided by the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
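    The correction scheme builds on the standard Gerchberg-Saxton loop. A minimal, generic sketch of that loop, assuming two intensity measurements linked by a Fourier transform (array names are illustrative, not from the paper):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100):
    # source_amp, target_amp: square roots of the measured intensities in
    # the two planes, related by a Fourier transform (a standard GS
    # assumption). Returns the estimated source-plane phase, which can be
    # negated and written to a spatial light modulator as a correction mask.
    phase = np.zeros_like(source_amp)
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # impose far-field modulus
        field = np.fft.ifft2(far)
        phase = np.angle(field)                        # impose source modulus
    return phase
```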

  12. Fluorescent Probes and Selective Inhibitors for Biological Studies of Hydrogen Sulfide- and Polysulfide-Mediated Signaling.

    PubMed

    Takano, Yoko; Echizen, Honami; Hanaoka, Kenjiro

    2017-10-01

    Hydrogen sulfide (H2S) plays roles in many physiological processes, including relaxation of vascular smooth muscles, mediation of neurotransmission, inhibition of insulin signaling, and regulation of inflammation. Also, hydropersulfide (R-S-SH) and polysulfide (-S-Sn-S-) have recently been identified as reactive sulfur species (RSS) that regulate the bioactivities of multiple proteins via S-sulfhydration of cysteine residues (protein Cys-SSH) and show cytoprotection. Chemical tools such as fluorescent probes and selective inhibitors are needed to establish in detail the physiological roles of H2S and polysulfide. Recent Advances: Although many fluorescent probes for H2S are available, fluorescent probes for hydropersulfide and polysulfide have only recently been developed and used to detect these sulfur species in living cells. In this review, we summarize recent progress in developing chemical tools for the study of H2S, hydropersulfide, and polysulfide, covering fluorescent probes based on various design strategies and selective inhibitors of H2S- and polysulfide-producing enzymes (cystathionine γ-lyase, cystathionine β-synthase, and 3-mercaptopyruvate sulfurtransferase), and we summarize their applications in biological studies. Despite recent progress, the precise biological functions of H2S, hydropersulfide, and polysulfide remain to be fully established. Fluorescent probes and selective inhibitors are effective chemical tools to study the physiological roles of these sulfur molecules in living cells and tissues. Therefore, further development of a broad range of practical fluorescent probes and selective inhibitors as tools for studies of RSS biology is currently attracting great interest. Antioxid. Redox Signal. 27, 669-683.

  13. Distance Probes of Dark Energy

    DOE PAGES

    Kim, A. G.; Padmanabhan, N.; Aldering, G.; ...

    2015-03-15

    We present the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). This document summarizes the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.

  14. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.

  15. Automatic and user-centric approaches to video summary evaluation

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.; Bentley, Frank

    2007-01-01

    Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on building summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.

  16. Photoaffinity labeling in target- and binding-site identification

    PubMed Central

    Smith, Ewan; Collins, Ian

    2015-01-01

    Photoaffinity labeling (PAL) using a chemical probe to covalently bind its target in response to activation by light has become a frequently used tool in drug discovery for identifying new drug targets and molecular interactions, and for probing the location and structure of binding sites. Methods to identify the specific target proteins of hit molecules from phenotypic screens are highly valuable in early drug discovery. In this review, we summarize the principles of PAL including probe design and experimental techniques for in vitro and live cell investigations. We emphasize the need to optimize and validate probes and highlight examples of the successful application of PAL across multiple disease areas. PMID:25686004

  17. Aeronautics and space report of the President, 1980 activities

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The year's achievements in the areas of communication, Earth resources, environment, space sciences, transportation, and space energy are summarized, as are current and planned activities in these areas at the various departments and agencies of the Federal Government. Tables show U.S. and world spacecraft records, spacecraft launchings for 1980, and scientific payloads and probes launched from 1975 to 1980. Budget data are included.

  18. Floating Potential Probe Langmuir Probe Data Reduction Results

    NASA Technical Reports Server (NTRS)

    Morton, Thomas L.; Minow, Joseph I.

    2002-01-01

    During its first five months of operations, the Langmuir Probe on the Floating Potential Probe (FPP) obtained data on ionospheric electron densities and temperatures in the ISS orbit. In this paper, the algorithms for data reduction are presented, and comparisons are made of FPP data with ground-based ionosonde and Incoherent Scattering Radar (ISR) results. Implications for ISS operations are detailed, and the need for a permanent FPP on ISS is examined.
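    The FPP reduction algorithms themselves are not reproduced in this record; as a hedged illustration of the standard approach to Langmuir probe data reduction, the electron temperature can be estimated from the slope of ln(I) versus bias voltage in the electron-retardation region of the current-voltage sweep:

```python
import numpy as np

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def electron_temperature(voltage, current, v_lo, v_hi):
    # In the electron-retardation region the current grows as
    # I ~ exp(eV / kTe), so the slope of ln(I) vs V equals e/(k Te).
    # v_lo, v_hi bracket the retardation region (analyst's choice).
    mask = (voltage >= v_lo) & (voltage <= v_hi) & (current > 0)
    slope, _ = np.polyfit(voltage[mask], np.log(current[mask]), 1)
    return Q_E / (K_B * slope)  # electron temperature in kelvin
```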

  19. Work on Planetary Atmospheres and Planetary Atmosphere Probes

    NASA Technical Reports Server (NTRS)

    Seiff, Alvin; Lester, Peter

    1999-01-01

    A major objective of the grant was to complete the fabrication, test, and evaluation of the atmosphere structure experiment on the Galileo Probe, and to receive, analyze, and interpret data received from the spacecraft. The grantee was competitively selected as Principal Investigator for the atmosphere structure investigation on the Galileo Probe. His primary motivation was to learn as much as possible about Jupiter's atmosphere by means of a successful atmosphere structure experiment, and to support the needs and schedule of the Galileo Project. After a number of launch delays, the flight instrument was shipped to Kennedy Space Center 2 years after the start of this collaboration, on April 14, 1989, at which time it was determined from system-level tests of the ASI on the Probe that the instrument was in good working order and ready for flight. The spacecraft was launched on October 18, 1989. Analysis of test and calibration data taken over a period of years of instrument testing was continued in preparation for the encounter. The initial instrument checkout in space was performed on October 26, 1989. The data set received by telemetry was thoroughly analyzed, and a report of the findings was transmitted to the Probe Operations Office on Feb. 28, 1990. Key findings reported were that the accelerometer biases had shifted by less than 1 mg through launch and since calibration at Bell Aerospace in 1983; accelerometer scale factors, evaluated by means of calibration currents, fell on lines of variation with temperature established in laboratory calibrations; pressure sensor offsets, correlated as a function of temperature, fell generally within the limits of several years of ground test data; atmospheric and engineering temperature sensor data were internally consistent within a few tenths of a degree; and the instrument electronics performed all expected functions without any observable fault. Altogether, this checkout was highly encouraging regarding the prospects of instrument performance, although it was performed more than 5 years prior to Jupiter encounter. The capability to decode the science data from the Experiment Data Record to be provided at encounter was developed and exercised using the tape recording of the first cruise checkout data. A team effort was organized to program the selection and combination of data words defining pressure, temperature, acceleration, turbulence, and engineering quantities; to apply decalibration algorithms to convert readings from digital numbers to physical quantities; and to organize the data into a suitable printout. A paper on the Galileo Atmosphere Structure Instrument was written and submitted for publication in a special issue of Space Science Reviews. At the journal editor's request, the grantee reviewed other Probe instrument papers submitted for this special issue. Calibration data were carefully taken for all experiment sensors and accumulated over a period of 10 years. The data were analyzed, fitted with algorithms, and summarized in a calibration report for use in analyzing and interpreting data returned from Jupiter's atmosphere. The sensors included were the primary science pressure, temperature, and acceleration sensors, and the supporting engineering temperature sensors. This report was distributed to experiment coinvestigators and the Probe Project Office.

  20. Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.

    2017-03-01

    Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why ultrasound is currently not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied to a CIRS multi-modality pelvic phantom. Probe setups were calculated to allow visualization of the prostate and the adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that the clinical requirements were fulfilled. A preliminary quantitative evaluation was also performed: the mean absolute distance (MAD) was calculated between the actual anatomical structure positions and the positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for the prostate, (2.5±0.6) mm for the bladder and (2.8±0.6) mm for the rectum. These results show that no significant systematic errors due to, e.g., probe misplacement were introduced.

  1. Using clustering and a modified classification algorithm for automatic text summarization

    NASA Astrophysics Data System (ADS)

    Aries, Abdelkrime; Oufaida, Houda; Nouali, Omar

    2013-01-01

    In this paper we describe a modified classification method intended for extractive summarization. The classification in this method does not need a learning corpus; it uses the input text itself. First, we cluster the document sentences to exploit the diversity of topics, then we apply a learning algorithm (here, Naive Bayes) to each cluster, considering it as a class. After obtaining the classification model, we calculate the score of a sentence in each class using a scoring model derived from the classification algorithm. These scores are then used to reorder the sentences and extract the first ones as the output summary. We conducted experiments using a corpus of scientific papers, and we compared our results to another summarization system called UNIS. We also examined the impact of tuning the clustering threshold on the resulting summary, as well as the impact of adding more features to the classifier. We found that this method is interesting and gives good performance, and that the addition of new features (which is simple with this method) can improve the summary's accuracy.
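    A minimal sketch of this corpus-free cluster-then-classify idea, using scikit-learn's k-means and multinomial Naive Bayes as stand-ins (the paper's exact clustering, thresholding, and scoring details are not reproduced; parameter values are illustrative):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

def summarize(sentences, n_topics=3, n_keep=5):
    # Cluster the document's own sentences into topics, treat the
    # clusters as classes for Naive Bayes, then rank each sentence by
    # the score of its own class and keep the best ones.
    X = TfidfVectorizer().fit_transform(sentences)
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(X)
    nb = MultinomialNB().fit(X, labels)
    scores = nb.predict_log_proba(X)[np.arange(len(sentences)), labels]
    keep = sorted(np.argsort(scores)[-n_keep:])  # restore original order
    return [sentences[i] for i in keep]
```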

  2. Fluorescent Probes and Selective Inhibitors for Biological Studies of Hydrogen Sulfide- and Polysulfide-Mediated Signaling

    PubMed Central

    Takano, Yoko; Echizen, Honami

    2017-01-01

    Significance: Hydrogen sulfide (H2S) plays roles in many physiological processes, including relaxation of vascular smooth muscles, mediation of neurotransmission, inhibition of insulin signaling, and regulation of inflammation. Also, hydropersulfide (R−S−SH) and polysulfide (−S−Sn−S−) have recently been identified as reactive sulfur species (RSS) that regulate the bioactivities of multiple proteins via S-sulfhydration of cysteine residues (protein Cys−SSH) and show cytoprotection. Chemical tools such as fluorescent probes and selective inhibitors are needed to establish in detail the physiological roles of H2S and polysulfide. Recent Advances: Although many fluorescent probes for H2S are available, fluorescent probes for hydropersulfide and polysulfide have only recently been developed and used to detect these sulfur species in living cells. Critical Issues: In this review, we summarize recent progress in developing chemical tools for the study of H2S, hydropersulfide, and polysulfide, covering fluorescent probes based on various design strategies and selective inhibitors of H2S- and polysulfide-producing enzymes (cystathionine γ-lyase, cystathionine β-synthase, and 3-mercaptopyruvate sulfurtransferase), and we summarize their applications in biological studies. Future Directions: Despite recent progress, the precise biological functions of H2S, hydropersulfide, and polysulfide remain to be fully established. Fluorescent probes and selective inhibitors are effective chemical tools to study the physiological roles of these sulfur molecules in living cells and tissues. Therefore, further development of a broad range of practical fluorescent probes and selective inhibitors as tools for studies of RSS biology is currently attracting great interest. Antioxid. Redox Signal. 27, 669–683. PMID:28443673

  3. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, based on the theory of AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with the ant colony optimization algorithm, it performs well and has broad application prospects.

  4. Bilevel thresholding of sliced image of sludge floc.

    PubMed

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and was chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing sludge floc structure.
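    Otsu's method, singled out above for its stability, picks the histogram threshold that maximizes the between-class variance (equivalently, minimizes the within-class variance of foreground and background). A standard sketch:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    # Build a gray-level histogram and sweep all candidate thresholds,
    # keeping the one with the largest between-class variance.
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 (background) probability
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    mu_t = np.sum(p * centers)             # global mean
    mu1 = (mu_t - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
    return centers[np.argmax(sigma_b)]
```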

  5. A novel interplanetary optical navigation algorithm based on Earth-Moon group photos by Chang'e-5T1 probe

    NASA Astrophysics Data System (ADS)

    Bu, Yanlong; Zhang, Qiang; Ding, Chibiao; Tang, Geshi; Wang, Hang; Qiu, Rujin; Liang, Libo; Yin, Hejun

    2017-02-01

    This paper presents an interplanetary optical navigation algorithm based on two spherical celestial bodies. The remarkable characteristic of the method is that key navigation parameters can be estimated entirely from the known sizes and ephemerides of the two celestial bodies; in particular, positioning is achieved from a single image and no longer relies on traditional terrestrial radio tracking. Actual Earth-Moon group photos captured by China's Chang'e-5T1 probe were used to verify the effectiveness of the algorithm. From 430,000 km away from the Earth, the camera pointing accuracy reaches 0.01° (one sigma) and the inertial positioning error is less than 200 km; meanwhile, the costs of ground control and human resources are greatly reduced. The algorithm is flexible, easy to implement, and can serve as a reference for interplanetary autonomous navigation in the solar system.
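    The paper's algorithm is not reproduced in this record; the following is a hedged reconstruction of the basic geometry only: each body's apparent angular radius fixes its range and, assuming the camera's inertial attitude is known from star measurements, the line of sight to each body then yields a position fix. All names and inputs are illustrative.

```python
import numpy as np

def position_fix(body_pos_km, unit_los, radii_px, body_radii_km, focal_px):
    # Each body's apparent angular radius theta gives its range
    # d = R / sin(theta); with the camera's inertial attitude known,
    # the unit line of sight u toward the body places the probe at
    # body_pos - d * u. Two bodies give two estimates to average.
    theta = np.arctan(np.asarray(radii_px) / focal_px)  # angular radii
    d = np.asarray(body_radii_km) / np.sin(theta)       # ranges
    fixes = [p - di * u for p, di, u in zip(body_pos_km, d, unit_los)]
    return np.mean(fixes, axis=0)
```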

  6. Oceanic Basement Probed

    ERIC Educational Resources Information Center

    Cann, J. R.; Moore, David G.

    1978-01-01

    Summarizes findings of the Deep Sea Drilling Project at the Scripps Institution of Oceanography. Results of Atlantic and Pacific Ocean drilling in terms of the composition and properties of the sea floor are discussed. (CP)

  7. Measurement of Turbulent Pressure and Temperature Fluctuations in a Gas Turbine Combustor

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis (Technical Monitor); LaGraff, John E.; Bramanti, Cristina; Oldfield, Martin; Passaro, Andrea; Biagioni, Leonardo

    2004-01-01

    The report summarizes the results of the redesign efforts directed towards the gas-turbine combustor rapid-injector flow diagnostic probe developed under sponsorship of NASA-GRC and earlier reported in NASA-CR-2003-212540. Lessons learned during the theoretical development, developmental testing and field testing in the previous phase of this research were applied to the redesign of both the probe sensing elements and the rapid injection device. This redesigned probe (referred to herein as Turboprobe) has been fabricated and is ready, along with the new rapid injector, for field testing. The probe is now designed to capture both time-resolved and mean total temperatures, total pressures and, indirectly, one component of turbulent fluctuations.

  8. Advances in development of fluorescent probes for detecting amyloid-β aggregates.

    PubMed

    Xu, Ming-Ming; Ren, Wen-Ming; Tang, Xi-Can; Hu, You-Hong; Zhang, Hai-Yan

    2016-06-01

    With accumulating evidence suggesting that amyloid-β (Aβ) deposition is a good diagnostic biomarker for Alzheimer's disease (AD), the discovery of active Aβ probes has become an active area of research. Among the existing imaging methods, optical imaging targeting Aβ aggregates (fibrils or oligomers), especially using near-infrared (NIR) fluorescent probes, is increasingly recognized as a promising approach for the early diagnosis of AD due to its real-time detection, low cost, lack of radioactive exposure and high resolution. In the past decade, a variety of fluorescent probes have been developed and tested for efficiency in vitro, and several probes have shown efficacy in AD transgenic mice. This review classifies these representative probes based on their chemical structures and functional modes (the dominant solvent-dependent mode and a novel solvent-independent mode). Moreover, the pharmaceutical characteristics of these representative probes are summarized and discussed. This review provides important perspectives for the future development of novel NIR Aβ diagnostic probes.

  9. Advances in development of fluorescent probes for detecting amyloid-β aggregates

    PubMed Central

    Xu, Ming-ming; Ren, Wen-ming; Tang, Xi-can; Hu, You-hong; Zhang, Hai-yan

    2016-01-01

    With accumulating evidence suggesting that amyloid-β (Aβ) deposition is a good diagnostic biomarker for Alzheimer's disease (AD), the discovery of active Aβ probes has become an active area of research. Among the existing imaging methods, optical imaging targeting Aβ aggregates (fibrils or oligomers), especially using near-infrared (NIR) fluorescent probes, is increasingly recognized as a promising approach for the early diagnosis of AD due to its real-time detection, low cost, lack of radioactive exposure and high resolution. In the past decade, a variety of fluorescent probes have been developed and tested for efficiency in vitro, and several probes have shown efficacy in AD transgenic mice. This review classifies these representative probes based on their chemical structures and functional modes (the dominant solvent-dependent mode and a novel solvent-independent mode). Moreover, the pharmaceutical characteristics of these representative probes are summarized and discussed. This review provides important perspectives for the future development of novel NIR Aβ diagnostic probes. PMID:26997567

  10. Prediction of Particle Number Density and Particle Properties in the Flow Field Observed by the Nephelometer Experiment on the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Naughton, Jonathan W.

    1998-01-01

    This report summarizes the work performed to assist in the analysis of data returned from the Galileo Probe's Nephelometer instrument. The flow field around the Galileo Probe during its descent through the Jovian atmosphere was simulated. The behavior of cloud particles that passed around the Galileo probe was then computed, and the number density in the vicinity of the Nephelometer instrument was predicted. The results of our analysis support the finding that the number density of cloud particles was not the same in each of the four sampling volumes of the Nephelometer instrument. The number densities calculated in this study are currently being used to assist in the reanalysis of the data returned from the Galileo Probe.

  11. Measurement of Air Flow Characteristics Using Seven-Hole Cone Probes

    NASA Technical Reports Server (NTRS)

    Takahashi, Timothy T.

    1997-01-01

    The motivation for this work has been the development of a wake survey system. A seven-hole probe can measure the distribution of static pressure, total pressure, and flow angularity in a wind tunnel environment. The author describes the development of a simple, very efficient algorithm to compute flow properties from probe tip pressures. Its accuracy and applicability to unsteady, turbulent flow are discussed.

  12. m-BIRCH: an online clustering approach for computer vision applications

    NASA Astrophysics Data System (ADS)

    Madan, Siddharth K.; Dana, Kristin J.

    2015-03-01

    We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering, and updates the clustering decisions when new data come in. Modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent well the different density regions in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches. We use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
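    scikit-learn ships a BIRCH implementation whose incremental interface illustrates the online, memory-bounded clustering described above (m-BIRCH's data-driven parameter selection and density handling are not reproduced; the threshold and batch sizes here are illustrative):

```python
import numpy as np
from sklearn.cluster import Birch

# Stream a large descriptor set through BIRCH in batches so that only a
# fraction of the data is ever held in memory; the CF-tree is updated
# incrementally as new data arrive.
birch = Birch(threshold=0.5, branching_factor=50, n_clusters=None)
rng = np.random.default_rng(0)
for _ in range(10):                                   # batches arriving online
    batch = rng.normal(size=(10_000, 128)).astype(np.float32)  # SIFT-like dims
    birch.partial_fit(batch)
labels = birch.predict(batch)  # assign the latest batch to subclusters
```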

  13. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
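    As a minimal instance of this prototyping approach, the sketch below writes out the CP iteration for the ROF total-variation denoising model, a toy stand-in for a full CT reconstruction problem (step sizes follow the usual condition sigma*tau*||K||^2 <= 1, with ||grad||^2 <= 8 in 2D):

```python
import numpy as np

def grad(u):
    # Forward-difference gradient with Neumann boundary (last row/col zero).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (valid because the dual fields keep their
    # last column/row at zero under the updates below).
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]; d[:, 1:] = px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]; d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def tv_denoise_cp(b, lam=8.0, n_iter=200):
    # Chambolle-Pock iteration for min_u ||grad u||_1 + (lam/2)||u - b||^2.
    tau = sigma = 1.0 / np.sqrt(8.0)
    u = b.copy(); u_bar = b.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(n_iter):
        # Dual ascent, then pointwise projection onto the unit ball.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm
        # Primal descent, then prox of the quadratic data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * b) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old
    return u
```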

  14. Chemical probes targeting epigenetic proteins: Applications beyond oncology

    PubMed Central

    Ackloo, Suzanne; Brown, Peter J.; Müller, Susanne

    2017-01-01

    Epigenetic chemical probes are potent, cell-active, small-molecule inhibitors or antagonists of specific domains in a protein; they have been indispensable for studying bromodomains and protein methyltransferases. The Structural Genomics Consortium (SGC), comprising scientists from academic and pharmaceutical laboratories, has generated most of the current epigenetic chemical probes. Moreover, the SGC has shared about 4,000 aliquots of these probes, which have been used primarily for phenotypic profiling or to validate targets in cell lines or primary patient samples cultured in vitro. Epigenetic chemical probes have been critical tools in oncology research and have uncovered mechanistic insights into well-established targets, as well as identified new therapeutic starting points. Indeed, the literature primarily links epigenetic proteins to oncology, but applications in inflammation, viral, metabolic and neurodegenerative diseases are now being reported. We summarize the literature of these emerging applications and provide examples where existing probes might be used. PMID:28080202

  15. A computer program for borehole compensation of dual-detector density well logs

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probe and substituted in the computer program listed in this report. * Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.

  16. The changing landscape of dermatology practice: melanoma and pump-probe laser microscopy.

    PubMed

    Puza, Charles J; Mosca, Paul J

    2017-11-01

    To present current melanoma diagnosis, staging, prognosis, and treatment algorithms, and to show how recent advances in pump-probe laser microscopy will fill the gaps in our clinical understanding. Expert opinion and highly cited articles identified in SCOPUS were used in conjunction with a PubMed database search on melanoma practice guidelines from the last 10 years. Significant advances in melanoma treatment have been made over the last decade. However, the proper treatment algorithm and stage-specific prognostic information remain controversial. The next step for providers will involve the identification of patient population(s) that can benefit from recent advances. One method of identifying potential patients is through new laser imaging techniques. Pump-probe laser microscopy has been shown to correctly distinguish nevi from melanoma and, furthermore, to stratify melanoma by aggressiveness. The recent development of effective adjuvant therapies for melanoma is promising, and these should be utilized in appropriate patient populations that can potentially be identified using pump-probe laser microscopy.

  17. Automated clustering of probe molecules from solvent mapping of protein surfaces: new algorithms applied to hot-spot mapping and structure-based drug design

    NASA Astrophysics Data System (ADS)

    Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.

    2008-10-01

    Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.
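    A compact sketch of Jarvis-Patrick clustering as described above: two probe minima join the same cluster when each lies in the other's k-nearest-neighbor list and they share enough common neighbors. Parameter values are illustrative, not those used in the paper:

```python
import numpy as np

def jarvis_patrick(points, k=6, k_min=3):
    # Two points are linked when each appears in the other's k-nearest-
    # neighbor list and they share at least k_min common neighbors;
    # connected components of this relation are the clusters.
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :k]
    neighbor_sets = [set(row.tolist()) for row in knn]
    parent = list(range(n))
    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in neighbor_sets[i]:
            if i in neighbor_sets[j] and \
               len(neighbor_sets[i] & neighbor_sets[j]) >= k_min:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])
```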

  18. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. Finally, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
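    A hedged sketch of a two-level coarse-to-fine key-frame summary in the spirit of this approach, clustering frame features (e.g., color histograms) and picking the frame nearest each centroid; the paper's temporal-consecutiveness constraint is omitted for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

def hierarchical_keyframes(features, n_coarse=4, n_fine=3):
    # features: (n_frames, d) array, e.g. per-frame color histograms.
    # Coarse clusters give the compact top-level summary; each coarse
    # cluster is subdivided to give the finer level.
    coarse = KMeans(n_clusters=n_coarse, n_init=10).fit(features)
    summary = {}
    for c in range(n_coarse):
        idx = np.flatnonzero(coarse.labels_ == c)
        fine = KMeans(n_clusters=min(n_fine, len(idx)), n_init=10).fit(features[idx])
        keys = [int(idx[np.argmin(np.linalg.norm(features[idx] - ctr, axis=1))])
                for ctr in fine.cluster_centers_]
        summary[c] = sorted(keys)  # coarse level -> fine key-frame indices
    return summary
```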

  19. Retinal vessel segmentation on SLO image

    PubMed Central

    Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.

    2010-01-01

    A scanning laser ophthalmoscopy (SLO) image, taken from optical coherence tomography (OCT), usually has lower global/local contrast and more noise compared to the traditional retinal photograph, which makes vessel segmentation a challenging task. A hybrid algorithm is proposed to efficiently solve these problems by fusing several designed methods, taking advantage of each method and reducing the error measurements. The algorithm consists of several steps: image preprocessing, a thresholding probe, and weighted fusion. Four different methods are first designed to transform the SLO image into feature response images by taking different combinations of matched filtering, contrast enhancement and mathematical morphology operators. A thresholding probe algorithm is then applied to those response images to obtain four vessel maps. A weighted majority opinion is used to fuse these vessel maps and generate a final vessel map. The experimental results showed that the proposed hybrid algorithm could successfully segment the blood vessels on SLO images, detecting the major and small vessels while suppressing noise. The algorithm showed substantial potential for various clinical applications. Its use can also be extended to medical image registration based on blood vessel location. PMID:19163149

  20. Challenges in Analyzing and Representing Cloud Microphysical Data Measured with Airborne Cloud Probes

    NASA Astrophysics Data System (ADS)

    Baumgardner, D.; Freer, M.; McFarquhar, G. M.; Heymsfield, A.; Cziczo, D. J.

    2014-12-01

    There are a variety of in-situ instruments that are deployed on aircraft for measuring cloud properties, some of which provide data used to produce number and mass concentrations of water droplets and ice crystals and their size and shape distributions. Each of these instruments has its strengths and limitations that must be recognized and taken into account during analysis of the data. Various processing techniques have been developed and implemented by different groups to partially correct for the known uncertainties and limitations. The cloud measurement community has in general acknowledged the various issues associated with these instruments and numerous studies have published processing algorithms that seek to improve data quality; however, there has not been a forum in which these various algorithms and processing techniques have been discussed and consensus reached both on optimum analysis strategy and on quantification of uncertainties on the derived data products. Prior to the 2014 AMS Cloud Physics Conference, a study was conducted in which many data sets taken from various aircraft (NCAR-130, North Dakota Citation, Wyoming King Air and FAAM BAE-146) and many instruments (FSSP, CDP, SID, 2D-C/P, CIP/PIP, 2D-S, CPI, Nevzorov Probe and King Hot-wire LWC sensor) were processed by more than 20 individuals or groups to produce a large number of derived products (size distributions, ice fraction, number and mass concentrations, CCN/IN concentrations and median volume diameter). Each person or group that processed a selected data set used their own software and algorithm to produce a secondary data file with derived parameters; file names were encoded to conceal their sources so that the comparison was blind. The workshop, convened July 5-6, 2014, presented the results of the evaluation of the derived products with respect to individual instruments as well as the types of conditions under which the measurements were made. This comparison will ultimately allow quantification of the error bars of derived parameters as a function of the conditions in which the observations are made. The results of this evaluation and the recommendations that evolved from the workshop will be summarized in this presentation.

  1. Pump-probe optical microscopy for imaging nonfluorescent chromophores.

    PubMed

    Wei, Lu; Min, Wei

    2012-06-01

    Many chromophores absorb light intensely but have undetectable fluorescence. Hence microscopy techniques other than fluorescence are highly desirable for imaging these chromophores inside live cells, tissues, and organisms. The recently developed pump-probe optical microscopy techniques provide fluorescence-free contrast mechanisms by employing several fundamental light-molecule interactions including excited state absorption, stimulated emission, ground state depletion, and the photothermal effect. By using the pump pulse to excite molecules and the subsequent probe pulse to interrogate the created transient states on a laser scanning microscope, pump-probe microscopy offers imaging capability with high sensitivity and specificity toward nonfluorescent chromophores. Single-molecule sensitivity has even been demonstrated. Here we review and summarize the underlying principles of this emerging class of molecular imaging techniques.

  2. Group normalization for genomic data.

    PubMed

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.

  3. Group Normalization for Genomic Data

    PubMed Central

    Ghandi, Mahmoud; Beer, Michael A.

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets. PMID:22912661
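    A rough sketch of the Group Normalization idea as described in the abstract: normalize each probe against a reference set of probes with the most similar response profiles. The published method's details of reference selection and scaling may differ; this is an interpretation, with an O(n^2) neighbor search kept for clarity:

```python
import numpy as np

def group_normalize(signal, control, n_ref=50):
    # signal:  (n_probes,) treatment values to normalize
    # control: (n_probes, n_ctrl) response profiles used to judge
    #          which probes behave similarly
    normed = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        d = np.linalg.norm(control - control[i], axis=1)  # profile distance
        ref = np.argsort(d)[1:n_ref + 1]                  # skip probe itself
        normed[i] = signal[i] / np.mean(signal[ref])      # local reference set
    return normed
```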

  4. Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.

    1999-07-28

    As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses reflected light rays from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one-dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two-dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed, and are shown to provide the necessary corrective action.

  5. Colliding Winds and Tomography of O-Type Binaries

    NASA Technical Reports Server (NTRS)

    Gies, Dougles R.

    1995-01-01

    This grant was awarded in support of an observational study with the NASA IUE Observatory during the 15th episode (1992), and it subsequently also supported our continuing work in the 16th (1994) and 18th (1995) episodes. The project involved the study of FUV spectra of massive spectroscopic binary systems containing hot stars of spectral type O. We applied a Doppler tomography algorithm to reconstruct the individual component UV spectra of the stars in order to obtain improved estimates of the temperature, gravity, UV intensity ratio, and projected rotational velocity for the stars in each system, and to make a preliminary survey for abundance anomalies through comparison with standard spectra. We also investigated the orbital phase-related variations in the UV stellar wind lines to probe the geometries of wind-wind collisions in these systems. The project directly supported two Ph.D. dissertations at Georgia State University (by Penny and Thaller), and we are grateful for this support. No inventions were made in the performance of this work. Detailed results are summarized in the abstracts listed in the following section.

  6. Validation of Afterbody Aeroheating Predictions for Planetary Probes: Status and Future Work

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; Brown, James L.; Sinha, Krishnendu; Candler, Graham V.; Milos, Frank S.; Prabhu, DInesh K.

    2005-01-01

    A review of the relevant flight conditions and physical models for planetary probe afterbody aeroheating calculations is given. Readily available sources of afterbody flight data and published attempts to computationally simulate those flights are summarized. A current status of the application of turbulence models to afterbody flows is presented. Finally, recommendations for additional analysis and testing that would reduce our uncertainties in our ability to accurately predict base heating levels are given.

  7. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  8. New approaches to investigating social gestures in autism spectrum disorder

    PubMed Central

    2012-01-01

    The combination of economic games and human neuroimaging presents the possibility of using economic probes to identify biomarkers for quantitative features of healthy and diseased cognition. These probes span a range of important cognitive functions, but one new use is in the domain of reciprocating social exchange with other humans, a capacity perturbed in a number of psychopathologies. We summarize the use of a reciprocating exchange game to elicit neural and behavioral signatures for subjects diagnosed with autism spectrum disorder (ASD). Furthermore, we outline early efforts to capture features of social exchange in computational models and use these to identify quantitative behavioral differences between subjects with ASD and matched controls. Lastly, we summarize a number of subsequent studies inspired by the modeling results, which suggest new neural and behavioral signatures that could be used to characterize subtle deficits in information processing during interactions with other humans. PMID:22958572

  9. Discerning the Chemistry in Individual Organelles with Small-Molecule Fluorescent Probes.

    PubMed

    Xu, Wang; Zeng, Zebing; Jiang, Jian-Hui; Chang, Young-Tae; Yuan, Lin

    2016-10-24

    Even the most advanced super-resolution microscope would be futile in providing biological insight into subcellular matrices without well-designed fluorescent tags/probes. Developments in biology have increasingly been boosted by advances in chemistry, with one prominent example being small-molecule fluorescent probes that allow not only cellular-level imaging but also subcellular imaging. A majority, if not all, of the chemical/biological events take place inside cellular organelles, and researchers have been shifting their attention towards these substructures with the help of fluorescence techniques. This Review summarizes the existing fluorescent probes that target chemical/biological events within a single organelle. More importantly, organelle-anchoring strategies are described and emphasized to inspire the design of new generations of fluorescent probes, before concluding with future prospects on the possible further development of chemical biology.

  10. Design and spacecraft-integration of RTGs for solar probe

    NASA Technical Reports Server (NTRS)

    Schock, A.; Noravian, H.; Or, T.; Sankarankandath, V.

    1990-01-01

    The design, analysis, and spacecraft integration of radioisotope thermoelectric generators (RTGs) to power the Solar Probe under study at NASA JPL are described. The mission of the Solar Probe is to explore the solar corona by performing in situ measurements as close as four solar radii to the Sun. Design constraints for the RTG are discussed. The chief challenge in the design and system integration of the Solar Probe's RTG is a heat rejection problem. Two RTG orientations, horizontal and oblique, are analyzed for effectiveness and the results are summarized in chart form. A number of cooling strategies are also investigated, including heat-pipe and reflector-cooled options. A methodology and general computer code are presented for analyzing the performance of arbitrarily obstructed RTGs with both axial and circumferential temperature, voltage, and current variation. This methodology is applied to the specific example of the Solar Probe RTG obstructed by a semicylindrical reflector of 15-inch radius.

  11. Informationally Efficient Multi-User Communication

    DTIC Science & Technology

    2010-01-01

    Two DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the ... problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a ... the duality gap min_{λ1,λ2} D(λ1, λ2) − max_{P1,P2} f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm ...

  12. An introduction to the theory of ptychographic phase retrieval methods

    NASA Astrophysics Data System (ADS)

    Konijnenberg, Sander

    2017-12-01

    An overview of several ptychographic phase retrieval methods and the theory behind them is presented. By looking into the theory behind more basic single-intensity pattern phase retrieval methods, a theoretical framework is provided for analyzing ptychographic algorithms. Extensions of ptychographic algorithms that deal with issues such as partial coherence, thick samples, or uncertainties of the probe or probe positions are also discussed. This introduction is intended for scientists and students without prior experience in the field of phase retrieval or ptychography to quickly get introduced to the theory, so that they can put the more specialized literature in context more easily.
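    For readers new to the field, a textbook-style sketch of one sweep of the basic ptychographic iterative engine (PIE), assuming a known probe and an FFT far-field propagator, may help anchor the theory; this is a generic illustration, not tied to any specific paper or package:

```python
import numpy as np

def pie_sweep(obj, probe, positions, diffraction_amps, alpha=1.0):
    # One pass over all scan positions: enforce the measured far-field
    # modulus, then apply the standard PIE object update.
    weight = alpha * np.conj(probe) / (np.abs(probe).max() ** 2)
    for (r, c), amp in zip(positions, diffraction_amps):
        view = np.s_[r:r + probe.shape[0], c:c + probe.shape[1]]
        exit_wave = obj[view] * probe
        far = np.fft.fft2(exit_wave)
        far = amp * np.exp(1j * np.angle(far))        # modulus constraint
        new_exit = np.fft.ifft2(far)
        obj[view] += weight * (new_exit - exit_wave)  # object update
    return obj
```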

  13. High density DNA microarrays: algorithms and biomedical applications.

    PubMed

    Liu, Wei-Min

    2004-08-01

    DNA microarrays are devices capable of detecting the identity and abundance of numerous DNA or RNA segments in samples. They are used for analyzing gene expression, identifying genetic markers and detecting mutations on a genomic scale. The fundamental chemical mechanism of DNA microarrays is the hybridization between probes and targets due to the hydrogen bonds of nucleotide base pairing. Since cross-hybridization is inevitable, and probes or targets may form undesirable secondary or tertiary structures, microarray data contain noise and depend on experimental conditions. It is crucial to apply proper statistical algorithms to obtain useful signals from noisy data. After the signals of a large number of probes have been obtained, we need to derive biomedical information such as the existence of a transcript in a cell, the difference in expression levels of a gene across multiple samples, and the type of a genetic marker. Furthermore, after the expression levels of thousands of genes or the genotypes of thousands of single nucleotide polymorphisms are determined, it is usually important to find a small number of genes or markers that are related to a disease, individual reactions to drugs, or other phenotypes. All these applications need careful data analyses and reliable algorithms.

  14. US-Japan collaborative research on probe data : assessment report.

    DOT National Transportation Integrated Search

    1998-11-01

    This flyer summarizes the identified human factors research needs for integrated in-vehicle systems for Commercial Vehicle Operations (CVO), one of five configurations of in-vehicle safety and driver information systems. A complete review of the rese...

  15. Automatic summarization of changes in biological image sequences using algorithmic information theory.

    PubMed

    Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath

    2009-08-01

    An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
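    The gap statistic used here to define "meaningful" summaries compares the log within-cluster dispersion of the data with its expectation under a structureless reference distribution (Tibshirani et al.). A sketch of the statistic itself, with k-means as the clustering step and uniform reference data over the bounding box:

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k, n_ref=10, seed=0):
    # Larger gap = more genuine cluster structure at this k (or for this
    # feature subset), relative to featureless reference data.
    rng = np.random.default_rng(seed)
    def log_wk(data):
        return np.log(KMeans(n_clusters=k, n_init=10).fit(data).inertia_)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [log_wk(rng.uniform(lo, hi, size=X.shape)) for _ in range(n_ref)]
    return float(np.mean(ref) - log_wk(X))
```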

  16. Statistical analysis of an RNA titration series evaluates microarray precision and sensitivity on a whole-array basis

    PubMed Central

    Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K

    2006-01-01

    Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are as yet no methods that can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
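
    The abstract does not give the regression model, so the sketch below is an illustration under stated assumptions only: if each titration sample mixes a fraction p of RNA source A with (1 - p) of source B, then a probe's log intensity can be fitted as a nonlinear function of p to recover the fold change between the pure samples.

        import numpy as np
        from scipy.optimize import curve_fit

        def titration(p, a, b):
            # a, b: linear-scale abundances in the two pure samples
            return np.log2(p * a + (1.0 - p) * b)

        p = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # known mixing fractions
        y = np.array([6.1, 6.9, 7.5, 7.9, 8.3])    # measured log2 intensities (toy)
        (a, b), _ = curve_fit(titration, p, y, p0=[2.0 ** y[-1], 2.0 ** y[0]])
        print("estimated fold change A/B:", a / b)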

  17. Lunar surface engineering properties experiment definition

    NASA Technical Reports Server (NTRS)

    Mitchell, J. K.; Goodman, R. E.; Hurlbut, F. C.; Houston, W. N.; Willis, D. R.; Witherspoon, P. A.; Hovland, H. J.

    1971-01-01

    Research on the mechanics of lunar soils and on developing probes to determine the properties of lunar surface materials is summarized. The areas of investigation include the following: soil simulation, soil property determination using an impact penetrometer, soil stabilization using urethane foam or phenolic resin, effects of rolling boulders down lunar slopes, design of a borehole jack and its use in determining failure mechanisms and properties of rocks, and development of a permeability probe for measuring fluid flow through porous lunar surface materials.

  18. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal position (x and y), range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  19. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technique for product imitation and new product development. Its core technology, surface reconstruction, is an active topic of current research. Among the various surface reconstruction algorithms, reconstruction based on the central (medial) axis is an important method. This paper summarizes the medial axis algorithms used for surface reconstruction, points out the problems that exist in the various methods and the places where they need improvement, and discusses subsequent developments in axis-based surface reconstruction.

  20. Research status of multi-robot systems task allocation and uncertainty treatment

    NASA Astrophysics Data System (ADS)

    Li, Dahui; Fan, Qi; Dai, Xuefeng

    2017-08-01

    The multi-robot coordination algorithm has become a hot research topic in the field of robotics in recent years, with a wide range of applications and good prospects. This paper analyzes and summarizes the current research status of multi-robot coordination algorithms in China and abroad. From the perspectives of task allocation and uncertainty handling, it discusses multi-robot coordination algorithms and presents the advantages and disadvantages of each commonly used method.

  1. Dynamic variable selection in SNP genotype autocalling from APEX microarray data.

    PubMed

    Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J

    2006-11-30

    Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. Our model-based genotype calling algorithm captures the redundancy in the system considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes which work well and reduce the effect of other so-called bad performing probes in a sample-specific manner, for any number of SNPs.
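
    The hypothetical sketch below illustrates only the LDA-plus-posterior-threshold idea described above, using scikit-learn on toy data; it omits the paper's dynamic variable selection and probe down-weighting, and every name and number is a placeholder.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # toy training set: rows are samples, columns are probe features for one SNP
        rng = np.random.default_rng(0)
        X_train = rng.random((32, 8))
        y_train = np.array(["AA"] * 12 + ["AB"] * 10 + ["BB"] * 10)

        lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
        post = lda.predict_proba(rng.random((4, 8)))  # posterior per genotype
        calls = lda.classes_[post.argmax(axis=1)]
        calls[post.max(axis=1) < 0.95] = "NC"         # no-call below threshold
        print(calls)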

  2. Design of 240,000 orthogonal 25mer DNA barcode probes.

    PubMed

    Xu, Qikai; Schlabach, Michael R; Hannon, Gregory J; Elledge, Stephen J

    2009-02-17

    DNA barcodes linked to genetic features greatly facilitate screening these features in pooled formats using microarray hybridization, and new tools are needed to design large sets of barcodes to allow construction of large barcoded mammalian libraries such as shRNA libraries. Here we report a framework for designing large sets of orthogonal barcode probes. We demonstrate the utility of this framework by designing 240,000 barcode probes and testing their performance by hybridization. From the test hybridizations, we also discovered new probe design rules that significantly reduce cross-hybridization after their introduction into the framework of the algorithm. These rules should improve the performance of DNA microarray probe designs for many applications.
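
    The paper's actual design rules are not reproduced here; the sketch below shows one generic orthogonality screen under assumed parameters, rejecting a candidate 25mer that shares a contiguous 12-mer with any accepted probe or its reverse complement (the threshold k=12 is hypothetical).

        import random

        def shares_long_match(a, b, k=12):
            """True if a and b (or the reverse complement of b) share a k-mer."""
            comp = str.maketrans("ACGT", "TGCA")
            rc = b.translate(comp)[::-1]
            kmers = {a[i:i + k] for i in range(len(a) - k + 1)}
            return any(s[i:i + k] in kmers
                       for s in (b, rc)
                       for i in range(len(s) - k + 1))

        random.seed(0)
        accepted = []
        while len(accepted) < 100:
            cand = "".join(random.choice("ACGT") for _ in range(25))
            if not any(shares_long_match(cand, p) for p in accepted):
                accepted.append(cand)
        print(len(accepted), "mutually orthogonal candidates")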

  3. Design of 240,000 orthogonal 25mer DNA barcode probes

    PubMed Central

    Xu, Qikai; Schlabach, Michael R.; Hannon, Gregory J.; Elledge, Stephen J.

    2009-01-01

    DNA barcodes linked to genetic features greatly facilitate screening these features in pooled formats using microarray hybridization, and new tools are needed to design large sets of barcodes to allow construction of large barcoded mammalian libraries such as shRNA libraries. Here we report a framework for designing large sets of orthogonal barcode probes. We demonstrate the utility of this framework by designing 240,000 barcode probes and testing their performance by hybridization. From the test hybridizations, we also discovered new probe design rules that significantly reduce cross-hybridization after their introduction into the framework of the algorithm. These rules should improve the performance of DNA microarray probe designs for many applications. PMID:19171886

  4. Fluorescent probes for lipid rafts: from model membranes to living cells.

    PubMed

    Klymchenko, Andrey S; Kreder, Rémy

    2014-01-16

    Membrane microdomains (rafts) remain one of the controversial issues in biophysics. Fluorescent molecular probes, which make these lipid nanostructures visible through optical techniques, are one of the tools currently used to study lipid rafts. The most common are lipophilic fluorescent probes that partition specifically into the liquid-ordered or liquid-disordered phase. Their partition depends on the lipid composition of a given phase, which complicates their use in cellular membranes. A second class of probes is based on environment-sensitive dyes, which partition into both phases, but stain them with a different fluorescence color, intensity, or lifetime. These probes can directly address the properties of each separate phase, but their cellular applications are still limited. The present review focuses on summarizing the current state in the field of developing and applying fluorescent molecular probes to study lipid rafts. We highlight an urgent need to develop new probes, specifically adapted for cell plasma membranes and compatible with modern fluorescence microscopy techniques, to push the understanding of membrane microdomains forward. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.

    PubMed

    Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho

    2017-10-01

    This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers. In comparison, on the same PC, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time for the same EM model of a numerical breast phantom.

  6. Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts.

    PubMed

    Baker, Wesley B; Parthasarathy, Ashwin B; Ko, Tiffany S; Busch, David R; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C; Durduran, Turgut; Greenberg, Joel H; Kung, David K; Yodh, Arjun G

    2015-07-01

    We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm's ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy.
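
    As a toy numerical rendering of that linear relation (not the authors' exact formulation), the short-separation signal is attributed entirely to extracerebral flow and then substituted into the long-separation equation; all coefficients below are placeholders that would in practice come from the two-layer model and the pressure modulation.

        # differential "optical density" analogs measured at two separations (toy)
        dOD_long, dOD_short = 0.12, 0.05

        # sensitivities of each signal to extracerebral (ec) and cerebral (c) flow,
        # obtained from the two-layer model fit (placeholder values)
        d_ec_long, d_c_long = 0.3, 0.7
        d_ec_short = 1.0

        dF_ec = dOD_short / d_ec_short                    # scalp-only at short separation
        dF_c = (dOD_long - d_ec_long * dF_ec) / d_c_long  # solve long-separation equation
        print("extracerebral:", dF_ec, "cerebral:", dF_c)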

  7. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
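
    A minimal negative selection sketch in Python, assuming strings over a small alphabet and an r-contiguous matching rule (one of the rule families discussed in the review): random detectors that match any self sample are censored, and the survivors flag anomalies.

        import random

        random.seed(1)
        L, ALPHA = 8, 4
        SELF = {tuple(random.randrange(ALPHA) for _ in range(L)) for _ in range(100)}

        def matches(det, s, r=4):
            """r contiguous equal symbols constitute a match."""
            run = 0
            for d, x in zip(det, s):
                run = run + 1 if d == x else 0
                if run >= r:
                    return True
            return False

        detectors = []
        while len(detectors) < 50:
            d = tuple(random.randrange(ALPHA) for _ in range(L))
            if not any(matches(d, s) for s in SELF):  # censor detectors covering self
                detectors.append(d)

        def is_anomalous(sample):
            return any(matches(d, sample) for d in detectors)

        print(is_anomalous(next(iter(SELF))))  # a self sample: False by construction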

  8. Substructure System Identification for Finite Element Model Updating

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Blades, Eric L.

    1997-01-01

    This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.

  9. A spline-based parameter and state estimation technique for static models of elastic surfaces

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Daniel, P. L.; Armstrong, E. S.

    1983-01-01

    Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.

  10. Chemists, Engineers Probe Mutual Problems.

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1980

    1980-01-01

    Summarizes recommendations made in a workshop sponsored by the American Chemical Society concerning issues involving the diverging viewpoints of chemistry and chemical engineering. Includes recommendations regarding curricula, salary differences, and the need to change attitudes of chemistry faculty toward industry and industrial chemistry. (CS)

  11. Self-Field-Dominated Plasma

    DTIC Science & Technology

    1998-03-31

    plasma focus discharges. Part of the tests summarized here addresses methods and means for achieving controlled variations of the current sheath (CS) structure via electrode geometry modifications. CS parameters are monitored with multiple magnetic probes in the case of cylindrical- and open-funnel electrode

  12. Recent Advances in the Design of Electro-Optic Sensors for Minimally Destructive Microwave Field Probing

    PubMed Central

    Lee, Dong-Joon; Kang, No-Weon; Choi, Jun-Ho; Kim, Junyeon; Whitaker, John F.

    2011-01-01

    In this paper we review recent design methodologies for fully dielectric electro-optic sensors that have applications in non-destructive evaluation (NDE) of devices and materials that radiate, guide, or otherwise may be impacted by microwave fields. In many practical NDE situations, fiber-coupled-sensor configurations are preferred due to their advantages over free-space bulk sensors in terms of optical alignment, spatial resolution, and especially, a low degree of field invasiveness. We propose and review five distinct types of fiber-coupled electro-optic sensor probes. The design guidelines for each probe type and their performances in absolute electric-field measurements are compared and summarized. PMID:22346604

  13. Parachute Dynamics Investigations Using a Sensor Package Airdropped from a Small-Scale Airplane

    NASA Technical Reports Server (NTRS)

    Dooley, Jessica; Lorenz, Ralph D.

    2005-01-01

    We explore the utility of various sensors by recovering parachute-probe dynamics information from a package released from a small-scale, remote-controlled airplane. The airdrops aid in the development of datasets for the exploration of planetary probe trajectory recovery algorithms, supplementing data collected from instrumented, full-scale tests and computer models.

  14. Challenges and Opportunities for Small-Molecule Fluorescent Probes in Redox Biology Applications.

    PubMed

    Jiang, Xiqian; Wang, Lingfei; Carroll, Shaina L; Chen, Jianwei; Wang, Meng C; Wang, Jin

    2018-02-16

    The concentrations of reactive oxygen/nitrogen species (ROS/RNS) are critical to various biochemical processes. Small-molecule fluorescent probes have been widely used to detect and/or quantify ROS/RNS in many redox biology studies and serve as an important complement to protein-based sensors, with unique applications. Recent Advances: New sensing reactions have emerged in probe development, allowing more selective and quantitative detection of ROS/RNS, especially in live cells. Improvements have been made in sensing reactions, fluorophores, and bioavailability of probe molecules. In this review, we not only summarize redox-related small-molecule fluorescent probes but also lay out the challenges of designing probes to help redox biologists independently evaluate the quality of reported small-molecule fluorescent probes, especially in the chemistry literature. We specifically highlight the advantages of reversibility in sensing reactions and its applications in ratiometric probe design for quantitative measurements in living cells. In addition, we compare the advantages and disadvantages of small-molecule probes and protein-based probes. The low physiologically relevant concentrations of most ROS/RNS call for new sensing reactions with better selectivity, kinetics, and reversibility; fluorophores with high quantum yield, wide wavelength coverage, and large Stokes shifts; and structural designs with good aqueous solubility, membrane permeability, low protein interference, and organelle specificity.

  15. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856

  16. Algorithms for Disconnected Diagrams in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos

    2016-11-01

    Computing disconnected diagrams in Lattice QCD (operator insertion in a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
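
    The trace of the all-to-all propagator is typically attacked stochastically; the sketch below shows a plain Hutchinson estimator on a stand-in symmetric matrix, the baseline whose variance hierarchical probing and singular value deflation are designed to reduce. Matrix size and probe count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n))
        A = A @ A.T                      # stand-in for the (Hermitian) operator

        n_probe = 200
        Z = rng.choice([-1.0, 1.0], size=(n, n_probe))  # random Z2 noise vectors
        est = np.mean(np.einsum("ij,ij->j", Z, A @ Z))  # mean of z^T A z
        print(est, "vs exact", np.trace(A))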

  17. Solar Power System Design for the Solar Probe+ Mission

    NASA Technical Reports Server (NTRS)

    Landis, Geoffrey A.; Schmitz, Paul C.; Kinnison, James; Fraeman, Martin; Roufberg, Lew; Vernon, Steve; Wirzburger, Melissa

    2008-01-01

    Solar Probe+ is an ambitious proposed mission to the solar corona, designed to make a perihelion approach of 9 solar radii from the surface of the sun. The high temperature, high solar flux environment makes this mission a significant challenge for power system design. This paper summarizes the power system conceptual design for the solar probe mission. Power supplies considered included nuclear, solar thermoelectric generation, solar dynamic generation using Stirling engines, and solar photovoltaic generation. The solar probe mission ranges from a starting distance from the sun of 1 AU, to a minimum distance of about 9.5 solar radii, or 0.044 AU, from the center of the sun. During the mission, the solar intensity ranges from one to about 510 times AM0. This requires power systems that can operate over nearly three orders of magnitude of incident intensity.

  18. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  19. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.

  20. Sounding rocket research Aries/Firewheel, series 22, issue 15

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1981-01-01

    Rocket experiments in ionospheric particle and field research flown in seven programs during the last decade are summarized. Experimental techniques were developed and are discussed, including the double-probe field technique. The auroral zone, polar cap, and equatorial spread F were studied.

  1. Scientists Probe Pesticide Dynamics

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1974

    1974-01-01

    Summarizes discussions of a symposium on pesticide environmental dynamics with emphases upon pesticide transport processes, environmental reactions, and partitioning in air, soil, water and living organisms. Indicates that the goal is to attain knowledge enough to predict pesticide behavior and describe pesticide distribution with models and…

  2. An Ultrasonographic Periodontal Probe

    NASA Astrophysics Data System (ADS)

    Bertoncini, C. A.; Hinders, M. K.

    2010-02-01

    Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.

  3. 2nd International Planetary Probe Workshop

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Martinez, Ed; Arcadi, Marla

    2005-01-01

    Included are presentations from the 2nd International Planetary Probe Workshop. The purpose of the second workshop was to continue to unite the community of planetary scientists, spacecraft engineers and mission designers and planners, whose expertise, experience and interests are in the areas of entry probe trajectory and attitude determination, and the aerodynamics/aerothermodynamics of planetary entry vehicles. Mars lander missions and the first probe mission to Titan made 2004 an exciting year for planetary exploration. The Workshop addressed entry probe science, engineering challenges, mission design and instruments, along with the challenges of reconstruction of the entry, descent and landing or the aerocapture phases. Topics addressed included methods, technologies, and algorithms currently employed; techniques and results from the rich history of entry probe science such as PAET, Venera/Vega, Pioneer Venus, Viking, Galileo, Mars Pathfinder and Mars MER; upcoming missions such as the imminent entry of Huygens and future Mars entry probes; and new and novel instrumentation and methodologies.

  4. Determining the distribution of probes between different subcellular locations through automated unmixing of subcellular patterns.

    PubMed

    Peng, Tao; Bonamy, Ghislain M C; Glory-Afshar, Estelle; Rines, Daniel R; Chanda, Sumit K; Murphy, Robert F

    2010-02-16

    Many proteins or other biological macromolecules are localized to more than one subcellular structure. The fraction of a protein in different cellular compartments is often measured by colocalization with organelle-specific fluorescent markers, requiring availability of fluorescent probes for each compartment and acquisition of images for each in conjunction with the macromolecule of interest. Alternatively, tailored algorithms allow finding particular regions in images and quantifying the amount of fluorescence they contain. Unfortunately, this approach requires extensive hand-tuning of algorithms and is often cell type-dependent. Here we describe a machine-learning approach for estimating the amount of fluorescent signal in different subcellular compartments without hand tuning, requiring only the acquisition of separate training images of markers for each compartment. In testing on images of cells stained with mixtures of probes for different organelles, we achieved a 93% correlation between estimated and expected amounts of probes in each compartment. We also demonstrated that the method can be used to quantify drug-dependent protein translocations. The method enables automated and unbiased determination of the distributions of protein across cellular compartments, and will significantly improve imaging-based high-throughput assays and facilitate proteome-scale localization efforts.
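
    As a schematic of the unmixing step alone (the published method learns object-level features and is considerably richer), the sketch below recovers mixture fractions by non-negative least squares against per-compartment basis patterns; all data are synthetic.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        B = np.abs(rng.standard_normal((30, 4)))  # columns: per-compartment patterns
        true_f = np.array([0.6, 0.3, 0.1, 0.0])
        y = B @ true_f + 0.01 * rng.standard_normal(30)  # mixed-probe image features

        f, _ = nnls(B, y)
        f /= f.sum()                                     # normalize to fractions
        print(np.round(f, 2))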

  5. Lung vasculature imaging using speckle variance optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen

    2012-02-01

    Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber optic endoscopic OCT probe, we acquire OCT images from in vivo human subjects. The side-looking, circumferentially-scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
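
    A speckle variance image is essentially the inter-frame intensity variance of repeated, aligned B-scans: moving scatterers (blood) vary strongly between frames while static tissue does not. The sketch below assumes NURD-corrected frames and an arbitrary percentile threshold.

        import numpy as np

        def speckle_variance(frames):
            """frames: (N, H, W) stack of aligned B-scans of the same location."""
            return np.var(frames, axis=0)

        frames = np.random.rand(8, 256, 512)  # stand-in for NURD-corrected OCT frames
        sv = speckle_variance(frames)
        vessel_mask = sv > np.percentile(sv, 95)  # crude vessel segmentation
        print(vessel_mask.mean())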

  6. Nanoparticles for Biomedical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nune, Satish K.; Gunda, Padmaja; Thallapally, Praveen K.

    2009-11-01

    Background: Synthetic nanoparticles are emerging as versatile tools in biomedical applications, particularly in the area of biomedical imaging. Nanoparticles 1 to 100 nm in diameter possess dimensions comparable to biological functional units. Diverse surface chemistries, unique magnetic properties, tunable absorption and emission properties, and recent advances in the synthesis and engineering of various nanoparticles suggest their potential as probes for early detection of diseases such as cancer. Surface functionalization has further expanded the potential of nanoparticles as probes for molecular imaging. Objective: To summarize emerging research on nanoparticles for biomedical imaging with increased selectivity, reduced non-specific uptake, and increased spatial resolution, using stabilizers conjugated with targeting ligands. Methods: This review summarizes recent technological advances in the synthesis of various nanoparticle probes, and surveys methods to improve the targeting of nanoparticles for their applications in biomedical imaging. Conclusion: Structural design of nanomaterials for biomedical imaging continues to expand and diversify. Synthetic methods have aimed to control the size and surface characteristics of nanoparticles to control distribution, half-life and elimination. Although molecular imaging applications using nanoparticles are advancing into clinical applications, challenges such as storage stability and long-term toxicology should continue to be addressed. Keywords: nanoparticle synthesis, surface modification, targeting, molecular imaging, and biomedical imaging.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. Subspace-based methods have been used to improve previous results from block processing techniques. Bootstrap techniques have been developed to estimate confidence intervals for the electromechanical modes from field measured data. Results were obtained using injected signal data provided by BPA. A new probing signal was designed that puts more strength into the signal for a given maximum peak-to-peak swing. Further simulations were conducted on a model based on measured data and with the modifications of the 19-machine simulation model. Montana Tech researchers participated in two primary activities: (1) continued development of the 19-machine simulation test system to include a DC line; and (2) extensive simulation analysis of the various system identification algorithms and bootstrap techniques using the 19-machine model. Researchers at the University of Alaska-Fairbanks focused on the development and testing of adaptive filter algorithms for mode estimation using data generated from simulation models and on data provided in collaboration with BPA and PNNL. Their efforts consisted of pre-processing field data and testing and refining adaptive filter techniques (specifically the Least Mean Squares (LMS), the Adaptive Step-size LMS (ASLMS), and Error Tracking (ET) algorithms). They also improved convergence of the adaptive algorithms by using an initial estimate from a block-processing AR method to initialize the weight vector for LMS. Extensive testing was performed on simulated data from the 19-machine model.
This project was also extensively involved in the WECC (Western Electricity Coordinating Council) system-wide tests carried out in 2005 and 2006. These tests involved injecting known probing signals into the western power grid. One of the primary goals of these tests was the reliable estimation of electromechanical mode properties from measured PMU data. Applied to the system were three types of probing inputs: (1) activation of the Chief Joseph Dynamic Brake, (2) mid-level probing at the Pacific DC Intertie (PDCI), and (3) low-level probing on the PDCI. The Chief Joseph Dynamic Brake is a 1400 MW disturbance to the system and is injected for half a second. For the mid- and low-level probing, the Celilo terminal of the PDCI is modulated with a known probing signal. Similar but less extensive tests were conducted in June of 2000. The low-level probing signals were designed at the University of Wyoming, where a number of important design factors were considered. The designed low-level probing signal used in the tests is a multi-sine signal. Its frequency content is focused in the range of the inter-area electromechanical modes. The most frequently used of these low-level multi-sine signals had a period of over two minutes, a root-mean-square (rms) value of 14 MW, and a peak magnitude of 20 MW. Up to 15 cycles of this probing signal were injected into the system, resulting in a processing gain of 15. The resulting measured response at points throughout the system was not much larger than the ambient noise present in the measurements.
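
    For illustration, the sketch below generates a periodic low-level multi-sine probing signal with the reported properties: frequency content confined to the inter-area band, a period of just over two minutes, and a 14 MW rms amplitude. Schroeder phasing is used here as a stand-in for the actual crest-factor optimization, so the 20 MW peak figure is not guaranteed.

        import numpy as np

        fs, T = 20.0, 128.0                     # sample rate (Hz), signal period (s)
        t = np.arange(0.0, T, 1.0 / fs)
        k = np.arange(13, 129)                  # harmonics of 1/T covering ~0.1-1.0 Hz
        freqs = k / T
        n = np.arange(k.size)
        phases = -np.pi * n * (n + 1) / k.size  # Schroeder phases to limit the peak

        x = np.sum(np.sin(2 * np.pi * freqs[:, None] * t[None, :] + phases[:, None]),
                   axis=0)
        x *= 14.0 / np.sqrt(np.mean(x ** 2))    # scale to 14 MW rms
        print("rms %.1f MW, peak %.1f MW" % (np.sqrt(np.mean(x ** 2)), np.abs(x).max()))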

  8. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    PubMed

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE

  9. The Rosetta Stones of Mars — Should Meteorites be Considered as Samples of Opportunity for Mars Sample Return?

    NASA Astrophysics Data System (ADS)

    Tait, A. W.; Schröder, C.; Ashley, J. W.; Velbel, M. A.; Boston, P. J.; Carrier, B. L.; Cohen, B. A.; Bland, P. A.

    2018-04-01

    We summarize insights about Mars gained from investigating meteorites found on Mars. Certain types of meteorites can be considered standard probes inserted into the martian environment. Should they be considered for Mars Sample Return?

  10. Data-directed RNA secondary structure prediction using probabilistic modeling

    PubMed Central

    Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon

    2016-01-01

    Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549
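
    For context, a classic way of injecting probing data into thermodynamic folding is the Deigan-style pseudo-energy added whenever a nucleotide pairs; the framework above replaces such fixed forms with direct likelihood-based terms. The parameter values below are the commonly quoted SHAPE defaults and should be treated as assumptions.

        import numpy as np

        M, B = 2.6, -0.8   # slope and intercept, kcal/mol (commonly used values)

        def pseudo_energy(reactivity):
            """dG(i) = M * ln(r_i + 1) + B, applied when base i is paired."""
            r = np.clip(np.asarray(reactivity, dtype=float), 0.0, None)
            return M * np.log(r + 1.0) + B

        print(pseudo_energy([0.0, 0.2, 1.5]))  # high reactivity penalizes pairing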

  11. Comparative study of classification algorithms for immunosignaturing data

    PubMed Central

    2012-01-01

    Background High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear as being completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear what is the optimal classification algorithm for analyzing this new type of data. Results We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy. Conclusions ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties. PMID:22720696
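
    A minimal reproduction of the favored approach on synthetic data, assuming Gaussian Naive Bayes on log-transformed peptide intensities; the data set, class shift, and fold count are placeholders.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        X = rng.lognormal(size=(60, 300))   # samples x peptide-probe intensities
        y = np.repeat([0, 1], 30)
        X[y == 1, :25] *= 1.8               # class-dependent shift on 25 peptides

        score = cross_val_score(GaussianNB(), np.log(X), y, cv=5).mean()
        print("cross-validated accuracy:", score)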

  12. New Algorithm to Enable Construction and Display of 3D Structures from Scanning Probe Microscopy Images Acquired Layer-by-Layer.

    PubMed

    Deng, William Nanqiao; Wang, Shuo; Ventrici de Souza, Joao; Kuhl, Tonya L; Liu, Gang-Yu

    2018-06-25

    Scanning probe microscopy (SPM), such as atomic force microscopy (AFM), is widely known for high-resolution imaging of surface structures and nanolithography in two dimensions (2D), providing important physical insights into surface science and material science. This work reports a new algorithm to enable construction and display of layer-by-layer 3D structures from SPM images. The algorithm enables alignment of SPM images acquired during layer-by-layer deposition and removal of redundant features and faithfully constructs the deposited 3D structures. The display uses a "see-through" strategy to enable the structure of each layer to be visible. The results demonstrate high spatial accuracy as well as algorithm versatility; users can set parameters for reconstruction and display as per image quality and research needs. To the best of our knowledge, this method represents the first report to enable SPM technology for 3D imaging construction and display. The detailed algorithm is provided to facilitate usage of the same approach in any SPM software. These new capabilities support wide applications of SPM that require 3D image reconstruction and display, such as 3D nanoprinting and 3D additive and subtractive manufacturing and imaging.

  13. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    NASA Astrophysics Data System (ADS)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse grained Inconel 625 which is representative of future power plant components has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A, B, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying this to three FMC imaging algorithms: Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data are captured in pulse-echo mode using 64 element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.
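
    Of the three, the Total Focusing Method is the most compact to sketch: every image pixel is a delay-and-sum over all transmit-receive pairs of the FMC data. The version below is a hypothetical minimal implementation (no envelope detection, apodization, or sample interpolation).

        import numpy as np

        def tfm(fmc, xe, c, fs, xs, zs):
            """fmc[t, i, j]: A-scan for transmitter i, receiver j; xe: element
            x-positions on z = 0; c: sound speed; fs: sample rate; xs, zs: pixel grids."""
            n = xe.size
            I, J = np.arange(n)[:, None], np.arange(n)[None, :]
            img = np.zeros((zs.size, xs.size))
            for iz, z in enumerate(zs):
                for ix, x in enumerate(xs):
                    d = np.hypot(xe - x, z)                  # element-pixel distances
                    idx = ((d[:, None] + d[None, :]) / c * fs).astype(int)
                    idx = np.clip(idx, 0, fmc.shape[0] - 1)  # time-of-flight samples
                    img[iz, ix] = abs(fmc[idx, I, J].sum())
            return img

        fmc = np.random.randn(2000, 16, 16)   # toy FMC data, 16-element array
        img = tfm(fmc, xe=np.arange(16) * 0.6e-3, c=5900.0, fs=50e6,
                  xs=np.linspace(-5e-3, 5e-3, 21), zs=np.linspace(1e-3, 20e-3, 40))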

  14. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  15. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  16. Carbon phenolic heat shields for Jupiter/Saturn/Uranus entry probes

    NASA Technical Reports Server (NTRS)

    Mezines, S.

    1974-01-01

    Carbon phenolic heat shield technology is reviewed. Heat shield results from the outer planetary probe mission studies are summarized along with results of plasma jet testing of carbon phenolic conducted in a ten megawatt facility. Missile flight data are applied to planetary entry conditions. A carbon phenolic heat shield material is utilized and tailored to accommodate each of the probe missions. An integral heat shield approach is selected in order to eliminate a high-temperature interface problem and permit direct bonding of the carbon phenolic to the structural honeycomb sandwich. The sandwich is filled with a very fine powder to minimize degradation of its insulation properties by the highly conductive hydrogen/helium gases during the long atmospheric descent phase.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Aaron A.; Chamberlin, Clyde E.; Edwards, Matthew K.

    This section of the Joint summary technical letter report (TLR) describes work conducted at the Pacific Northwest National Laboratory (PNNL) during FY 2016 (FY16) on the under-sodium viewing (USV) PNNL project 58745, work package AT-16PN230102. This section of the TLR satisfies PNNL's M3AT-16PN2301025 milestone and is focused on summarizing the design, development, and evaluation of two different phased-array ultrasonic testing (PA-UT) probe designs: a two-dimensional (2D) matrix phased-array probe and two one-dimensional (1D) linear array probes, referred to as serial number 4 (SN4) engineering test units (ETUs). The 2D probe is a pulse-echo (PE), 32×2, 64-element matrix phased-array ETU. The 1D probes are 32×1-element linear array ETUs. This TLR also provides the results from a performance demonstration (PD) of in-sodium target detection trials at 260°C using both probe designs. This effort continues the iterative evolution supporting the longer-term goal of producing and demonstrating a pre-manufacturing prototype ultrasonic probe that possesses the fundamental performance characteristics necessary to enable the development of a high-temperature sodium-cooled fast reactor (SFR) inspection system for in-sodium detection and imaging.

  18. Detection of biological threats. A challenge for directed molecular evolution.

    PubMed

    Petrenko, Valery A; Sorokulova, Iryna B

    2004-08-01

    The probe technique originated from early attempts of Anton van Leeuwenhoek to contrast microorganisms under the microscope using plant juices, successful staining of tubercle bacilli with synthetic dyes by Paul Ehrlich and discovery of a stain for differentiation of gram-positive and gram-negative bacteria by Hans Christian Gram. The technique relies on the principle that pathogens have unique structural features, which can be recognized by specifically labeled organic molecules. A hundred years of extensive screening efforts led to discovery of a limited assortment of organic probes that are used for identification and differentiation of bacteria. A new challenge--continuous monitoring of biological threats--requires long lasting molecular probes capable of tight specific binding of pathogens in unfavorable conditions. To respond to the challenge, probe technology is being revolutionized by utilizing methods of combinatorial chemistry, phage display and directed molecular evolution. This review describes how molecular evolution methods are applied for development of peptide, antibody and phage probes, and summarizes the author's own data on development of landscape phage probes against Salmonella typhimurium. The performance of the probes in detection of Salmonella is illustrated by a precipitation test, enzyme-linked immunosorbent assay (ELISA), fluorescence-activated cell sorting (FACS) and fluorescent, optical and electron microscopy.

  19. Design of thermocouple probes for measurement of rocket exhaust plume temperatures

    NASA Astrophysics Data System (ADS)

    Warren, R. C.

    1994-06-01

    This paper summarizes a literature survey on high temperature measurement and describes the design of probes used in plume measurements. There were no cases reported of measurements in extreme environments such as exist in solid rocket exhausts, but there were a number of thermocouple designs which had been used under less extreme conditions and which could be further developed. Tungsten-rhenium (W-Re) thermocouples had the combined properties of strength at high temperatures, high thermoelectric emf, and resistance to chemical attack. A shielded probe was required, both to protect the thermocouple junction and to minimize radiative heat losses. After some experimentation, a twin shielded design made from molybdenum gave acceptable results. Corrections for thermal conduction losses were made based on a method obtained from the literature. Radiation losses were minimized with this probe design, and corrections for these losses were too complex and unreliable to be included.

  20. A review of data fusion techniques.

    PubMed

    Castanedo, Federico

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.

  1. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  2. Cat swarm optimization based evolutionary framework for multi document summarization

    NASA Astrophysics Data System (ADS)

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

    Today, the World Wide Web has brought us an enormous quantity of on-line information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one of the solutions for extracting useful information from vast numbers of documents. Based on the number of documents considered, it is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization, as an accurate summary must be found across multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO based model is also compared with two other nature-inspired summarizers, a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. With respect to the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics such as ROUGE score, F score, sensitivity, positive predicate value, summary accuracy, inter-sentence similarity and readability metric, to validate non-redundancy, cohesiveness and readability of the summary respectively. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.

  3. EPA COMPARES THREE SOIL-GAS SAMPLING SYSTEMS FOR VAPOR INTRUSION INVESTIGATIONS

    EPA Science Inventory

    This newsletter article summarizes the findings of U.S. Environmental Protection Agency, "Comparison of Geoprobe PRT, AMS GVP Soil-Gas Sampling Systems with Dedicated Vapor Probes in Sandy Soils at the Raymark Superfund Site," EPA/600/R-06/11, November 2006.

  4. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction, multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element, with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternative data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  5. Molecular diagnosis of α-thalassemia in a multiethnic population.

    PubMed

    Gilad, Oded; Shemer, Orna Steinberg; Dgany, Orly; Krasnov, Tanya; Nevo, Michal; Noy-Lotan, Sharon; Rabinowicz, Ron; Amitai, Nofar; Ben-Dor, Shifra; Yaniv, Isaac; Yacobovich, Joanne; Tamary, Hannah

    2017-06-01

    α-Thalassemia, one of the most common genetic diseases, is caused by deletions or point mutations affecting one to four α-globin genes. Molecular diagnosis is important to prevent the most severe forms of the disease. However, the diagnosis of α-thalassemia is complex due to a high variability of the genetic defects involved, with over 250 described mutations. We summarize herein the findings of genetic analyses of DNA samples referred to our laboratory for the molecular diagnosis of α-thalassemia, along with a detailed clinical description. We utilized a diagnostic algorithm including Gap-PCR, to detect known deletions, followed by sequencing of the α-globin gene, to identify known and novel point mutations, and multiplex ligation-dependent probe amplification (MLPA) for the diagnosis of rare or novel deletions. α-Thalassemia was diagnosed in 662 of 975 samples referred to our laboratory. Most commonly found were deletions (75.3%, including two novel deletions previously described by us); point mutations comprised 25.4% of the cases, including five novel mutations. Our population included mostly Jews (of Ashkenazi and Sephardic origin) and Muslim Arabs, who presented with a higher rate of point mutations and hemoglobin H disease. Overall, we detected 53 different genotype combinations causing a spectrum of clinical phenotypes, from asymptomatic to severe anemia. Our work constitutes the largest group of patients with α-thalassemia originating in the Mediterranean whose clinical characteristics and molecular basis have been determined. We suggest a diagnostic algorithm that leads to an accurate molecular diagnosis in multiethnic populations. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
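
    The tiered workflow can be summarized in control-flow form. The sketch below is schematic: the three functions are hypothetical stubs standing in for the wet-lab assays (Gap-PCR, α-globin gene sequencing, MLPA), wired in the order the authors describe.

```python
# Hypothetical stubs standing in for laboratory assays, so the flow runs.
def gap_pcr(sample):            return sample.get("known_deletion")
def sequence_hba_genes(sample): return sample.get("point_mutation")
def mlpa(sample):               return sample.get("rare_deletion")

def diagnose(sample):
    """Tiered molecular diagnosis: known deletions first, then point
    mutations by sequencing, then rare/novel deletions by MLPA."""
    deletion = gap_pcr(sample)
    if deletion:
        return f"known deletion: {deletion}"
    mutation = sequence_hba_genes(sample)
    if mutation:
        return f"point mutation: {mutation}"
    rare = mlpa(sample)
    return f"rare/novel deletion: {rare}" if rare else "not confirmed"

print(diagnose({"point_mutation": "Hb Constant Spring"}))
```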

  6. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near-real-time, accurate image-based registration between TEE and XRF.
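
    A minimal sketch of the feature-based costing idea, under our own simplifying assumptions (pinhole projection, nearest-pixel lookup, a single edge image rather than the paper's combined intensity/edge features): 3D probe feature points are projected into the X-ray image and the summed image response serves as the registration score for a 6-DOF pose search.

```python
import numpy as np

def project(points3d, R, t, focal):
    """Rigid transform into the camera frame, then perspective projection.
    Assumes all points end up at positive depth (z > 0)."""
    cam = points3d @ R.T + t
    return focal * cam[:, :2] / cam[:, 2:3]

def registration_cost(points3d, R, t, focal, edge_image):
    """Negative sum of edge-image responses at the projected feature
    points; an optimizer would minimize this over the pose (R, t)."""
    h, w = edge_image.shape
    cost = 0.0
    for u, v in project(points3d, R, t, focal):
        row, col = int(round(v)) + h // 2, int(round(u)) + w // 2
        if 0 <= row < h and 0 <= col < w:   # ignore off-image projections
            cost -= edge_image[row, col]
    return cost

# Toy usage: a random point cloud, identity pose, synthetic edge image.
rng = np.random.default_rng(0)
pts = rng.normal(0.0, 5.0, (50, 3)) + np.array([0.0, 0.0, 100.0])
edges = rng.random((128, 128))
print(registration_cost(pts, np.eye(3), np.zeros(3), 500.0, edges))
```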

  7. A methodology for evaluating detection performance of ultrasonic array imaging algorithms for coarse-grained materials.

    PubMed

    Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S

    2014-12-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms: the total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel superalloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, all three algorithms are shown to perform very similarly when comparing their flaw detection capabilities.
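
    The detection statistics themselves are easy to state in code. The sketch below uses synthetic amplitude distributions as stand-ins for image values at flaw and background locations (the real study derives these from TFM/PCI/DORT MSF images) and sweeps a threshold to trace out POD against PFA.

```python
import numpy as np

# Synthetic stand-ins: background (grain-noise) samples and responses at
# known artificial defects. Real values would come from the FMC images.
rng = np.random.default_rng(1)
noise  = rng.normal(0.0, 1.0, 10_000)
signal = rng.normal(3.0, 1.0, 200)

thresholds = np.linspace(-2, 8, 200)
pod = [(signal > t).mean() for t in thresholds]   # probability of detection
pfa = [(noise  > t).mean() for t in thresholds]   # probability of false alarm

# Example report: POD at the threshold where PFA first drops to ~1e-2.
idx = np.searchsorted(-np.asarray(pfa), -1e-2)
t = thresholds[idx]
print(f"threshold {t:.2f}: POD={np.mean(signal > t):.2f}")
```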

  8. NASA/USRA university advanced design program at the University of Illinois for the 1989-1990 academic year

    NASA Technical Reports Server (NTRS)

    Koepke, Andrew; Sivier, Kenneth

    1990-01-01

    The University's design project, the Unmanned Probe to Pluto, is reviewed. Forty-two students, divided into seven groups, participated in the program. A presentation, prepared by three students and a graduate teaching assistant for the program's summer conference, summarized the project results.

  9. Wood Condition Assessment Manual: Second Edition

    Treesearch

    Robert J. Ross; Robert H. White

    2014-01-01

    This report summarizes information on condition assessment of in-service wood, including visual inspection of wood and timbers, use of ultrasound and probing/boring techniques for inspection, and assessment of wood and timbers that have been exposed to fire. The report also includes information on assigning allowable design values for in-service wood.

  10. Figure summarizer browser extensions for PubMed Central

    PubMed Central

    Agarwal, Shashank; Yu, Hong

    2011-01-01

    Summary: Figures in biomedical articles present visual evidence for research facts and help readers understand the article better. However, when figures are taken out of context, it is difficult to understand their content. We developed a summarization algorithm to summarize the content of figures and used it in our figure search engine (http://figuresearch.askhermes.org/). In this article, we report on the development of web browser extensions for Mozilla Firefox, Google Chrome and Apple Safari to display summaries for figures in PubMed Central and NCBI Images. Availability: The extensions can be downloaded from http://figuresearch.askhermes.org/articlesearch/extensions.php. Contact: agarwal@uwm.edu PMID:21493658

  11. Methodology Development for the Reconstruction of the ESA Huygens Probe Entry and Descent Trajectory

    NASA Astrophysics Data System (ADS)

    Kazeminejad, B.

    2005-01-01

    The European Space Agency's (ESA) Huygens probe performed a successful entry and descent into Titan's atmosphere on January 14, 2005, and landed safely on the satellite's surface. A methodology was developed, implemented, and tested to reconstruct the Huygens probe trajectory from its various science and engineering measurements, which were performed during the probe's entry and descent to the surface of Titan, Saturn's largest moon. The probe trajectory reconstruction is an essential effort that has to be done as early as possible in the post-flight data analysis phase, as it guarantees a correct and consistent interpretation of all the experiment data and furthermore provides a reference set of data for "ground-truthing" orbiter remote sensing measurements. The entry trajectory is reconstructed from the measured probe aerodynamic drag force, which also provides a means to derive upper atmospheric properties like density, pressure, and temperature. The descent phase reconstruction is based upon a combination of various atmospheric measurements such as pressure, temperature, composition, speed of sound, and wind speed. A significant amount of effort was spent to outline and implement a least-squares trajectory estimation algorithm that provides a means to match the entry and descent trajectory portions in case of discontinuity. An extensive test campaign of the algorithm is presented, which used the Huygens Synthetic Dataset (HSDS) developed by the Huygens Project Scientist Team at ESA/ESTEC as a test bed. This dataset comprises the simulated sensor output (and the corresponding measurement noise and uncertainty) of all the relevant probe instruments. The test campaign clearly showed that the proposed methodology is capable of utilizing all the relevant probe data, and will provide the best estimate of the probe trajectory once real instrument measurements from the actual probe mission are available. As a further test case using actual flight data, the NASA Mars Pathfinder entry and descent trajectory and the spacecraft attitude were reconstructed from the 3-axis accelerometer measurements which are archived on the Planetary Data System. The results are consistent with previously published reconstruction efforts.
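
    The entry-phase principle, stripped to its core, is that the measured aerodynamic deceleration gives atmospheric density through the drag equation rho = 2*m*a / (Cd*A*v^2); pressure then follows by hydrostatic integration and temperature from the equation of state. The sketch below shows only this first step, with illustrative placeholder values rather than Huygens flight data.

```python
# Density from measured drag deceleration: rho = 2*m*a / (Cd*A*v^2).
# All numbers below are illustrative placeholders, not flight data.

def density_from_drag(accel, speed, mass, drag_coeff, ref_area):
    """Atmospheric density implied by the axial deceleration of a blunt
    entry body; pressure would follow by hydrostatic integration."""
    return 2.0 * mass * accel / (drag_coeff * ref_area * speed**2)

rho = density_from_drag(accel=9.0, speed=5200.0, mass=320.0,
                        drag_coeff=1.5, ref_area=5.7)
print(f"{rho:.3e} kg/m^3")
```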

  12. Work on Planetary Atmospheres and Planetary Atmosphere Probes

    NASA Technical Reports Server (NTRS)

    Lester, Peter

    1999-01-01

    A summary final report of work accomplished is presented. Work was performed in the following areas: (1) Galileo Probe science analysis, (2) Galileo Probe Atmosphere Structure Instrument, (3) Mars Pathfinder Atmosphere Structure/Meteorology instrument, (4) Mars Pathfinder data analysis, (5) science definition for future Mars missions, (6) Viking Lander data analysis, (7) winds in the Mars atmosphere and Venus atmospheric dynamics, (8) Pioneer Venus Probe data analysis, (9) Pioneer Venus anomaly analysis, (10) Discovery Venus Probe and Titan probe instrument design, and (11) laboratory studies of Titan probe impact phenomena. The work has resulted in more than 10 articles published in archive journals, 2 encyclopedia articles, and many working papers. This final report is organized around the four bodies on which there was activity, Jupiter, Mars, Venus, and Titan, with a closing section on Miscellaneous Activities. A major objective was to complete the fabrication, test, and evaluation of the atmosphere structure experiment on the Galileo probe, and to receive, analyze and interpret data received from the spacecraft. The instrument was launched on April 14, 1989. Calibration data were taken for all experiment sensors. The data were analyzed, fitted with algorithms, and summarized in a calibration report for use in analyzing and interpreting data returned from Jupiter's atmosphere. The sensors included were the primary science pressure, temperature and acceleration sensors, and the supporting engineering temperature sensors. Computer programs were written to decode the Experiment Data Record and convert the digital numbers to physical quantities, i.e., temperatures, pressures, and accelerations. The project office agreed to obtain telemetry of checkout data from the probe. Work to extend programs written for use on the Pioneer Venus project included: (1) massive heat shield ablation leading to important mass loss during entry; and (2) rapid planet rotation, which introduced terms of motion not needed on Venus. When the Galileo Probe encountered Jupiter, analysis and interpretation of data commenced. The early contributions of the experiment were to define (1) the basic structure of the deep atmosphere, (2) the stability of the atmosphere, and (3) the upper atmospheric profiles of density, pressure, and temperature. The next major task in the Galileo Probe project was to refine, verify and extend the analysis of the data. It was the verified and corrected data that indicated a dry adiabatic atmosphere within measurement accuracy. Temperature in the thermosphere was measured at 900 K. Participation in the Mars atmospheric research included: (1) work as a team member of the Mars Atmosphere Working Group, (2) contribution to the Mars Exobiology Instrument workshop, (3) assistance in planning the Mars global network, and (4) assistance in planning the Soviet-French Mars mission in 1994. This included a return to the Viking Lander parachute data to refine and improve the definition of winds between 1.5 and 4 kilometers altitude at the two entry sites. The variability of the structure of the Mars atmosphere, which is known to vary with season, latitude, hemisphere and dust loading, was addressed. This led to work on the Pathfinder project. The probe had a deployable meteorology mast with three temperature sensors and a wind sensor at the tip of the mast. Work on the Titan atmospheric probe was also accomplished.
This included developing an experiment proposal to the European Space Agency (ESA), which was not selected. However, as an advisor in the design and preparation of the selected experiment, the researcher interacted with scientists on the Huygens Probe Atmosphere Structure Experiment. The researcher also participated in the planning for the Venus Chemical Probe. The science objectives of the probe were to resolve unanswered questions concerning the minor-species chemistry of Venus' atmosphere that controls cloud formation, greenhouse effectiveness, and the thermal structure. The researcher also reviewed problems with the Pioneer Venus Probes that caused anomalies at and below the 12.5 km level of Venus' atmosphere. He convened and participated in a workshop that concluded the most likely hardware cause was insulation failure in the electrical harness outside the Probes' pressure vessels. It was discovered that the shrink tubing material failed at 600 K; this failure could explain the anomalies experienced by the probes. The descent data of the Pioneer probes and the Soviet Vega Lander were analyzed to evaluate the presence of small-scale gravity waves in and below the Venus cloud layer.

  13. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    NASA Astrophysics Data System (ADS)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l⁻¹ and 2.0 mmol l⁻¹ was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiscattering on the optical path through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l⁻¹.

  14. Toward real-time quantification of fluorescence molecular probes using target/background ratio for guiding biopsy and endoscopic therapy of esophageal neoplasia.

    PubMed

    Jiang, Yang; Gong, Yuanzheng; Rubenstein, Joel H; Wang, Thomas D; Seibel, Eric J

    2017-04-01

    Multimodal endoscopy using fluorescence molecular probes is a promising method of surveying the entire esophagus to detect cancer progression. Using the fluorescence ratio of a target compared to a surrounding background, a quantitative value is diagnostic for progression from Barrett's esophagus to high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC). However, current quantification of fluorescent images is done only after the endoscopic procedure. We developed a Chan-Vese-based algorithm to segment fluorescence targets, with subsequent morphological operations to generate the background, thus calculating target/background (T/B) ratios, potentially to provide real-time guidance for biopsy and endoscopic therapy. With an initial processing speed of 2 fps and by calculating the T/B ratio for each frame, our method provides quasi-real-time quantification of the molecular probe labeling to the endoscopist. Furthermore, an automatic computer-aided diagnosis algorithm can be applied to the recorded endoscopic video, and the overall T/B ratio is calculated for each patient. The receiver operating characteristic curve was employed to determine the threshold for classification of HGD/EAC using leave-one-out cross-validation. With 92% sensitivity and 75% specificity in classifying HGD/EAC, our automatic algorithm shows promising results for a surveillance procedure to help manage esophageal cancer and other cancers inspected by endoscopy.
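
    The quantification step reduces to a ratio of region means. The sketch below assumes the target mask is already available (e.g. from the Chan-Vese segmentation, which is not reimplemented here) and builds the background region by morphological dilation, mirroring the description above.

```python
import numpy as np
from scipy import ndimage

def target_background_ratio(image, target_mask, annulus_width=10):
    """Mean target intensity over the mean of a surrounding annulus built
    by dilating the target mask; the annulus width is an assumption."""
    dilated = ndimage.binary_dilation(target_mask, iterations=annulus_width)
    background = dilated & ~target_mask      # ring of pixels around target
    return image[target_mask].mean() / image[background].mean()

# Synthetic frame: a bright 20x20 target on a dim background.
frame = np.full((100, 100), 10.0)
frame[40:60, 40:60] = 50.0
mask = frame > 30
print(target_background_ratio(frame, mask))  # ~5.0
```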

  15. A Review of Data Fusion Techniques

    PubMed Central

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502

  16. Gamma Ray Observatory (GRO) OBC attitude error analysis

    NASA Technical Reports Server (NTRS)

    Harman, R. R.

    1990-01-01

    This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of TRW error analysis and necessary ground simulations to understand the onboard attitude determination process are performed. In addition, a plan is generated for the in-flight calibration and validation of OBC computed attitudes. Pre-mission expected accuracies are summarized and sensitivity of onboard algorithms to sensor anomalies and filter tuning parameters are addressed.

  17. Expanding probe repertoire and improving reproducibility in human genomic hybridization

    PubMed Central

    Dorman, Stephanie N.; Shirley, Ben C.; Knoll, Joan H. M.; Rogan, Peter K.

    2013-01-01

    Diagnostic DNA hybridization relies on probes composed of single copy (sc) genomic sequences. Sc sequences in probe design ensure high specificity and avoid cross-hybridization to other regions of the genome, which could lead to ambiguous results that are difficult to interpret. We examine how the distribution and composition of repetitive sequences in the genome affect sc probe performance. A divide and conquer algorithm was implemented to design sc probes. With this approach, sc probes can include divergent repetitive elements, which hybridize to unique genomic targets under higher stringency experimental conditions. Genome-wide custom probe sets were created for fluorescence in situ hybridization (FISH) and microarray genomic hybridization. The scFISH probes were developed for detection of copy number changes within small tumour suppressor genes and oncogenes. The microarrays demonstrated increased reproducibility by eliminating cross-hybridization to repetitive sequences adjacent to probe targets. The genome-wide microarrays exhibited lower median coefficients of variation (17.8%) for two HapMap family trios. The coefficients of variation of commercial probes within 300 nt of a repetitive element were 48.3% higher than those of the nearest custom probe. Furthermore, the custom microarray called a chromosome 15q11.2q13 deletion more consistently. This method for sc probe design increases probe coverage for FISH and lowers variability in genomic microarrays. PMID:23376933

  18. Pivot methods for global optimization

    NASA Astrophysics Data System (ADS)

    Stanton, Aaron Fletcher

    A new algorithm is presented for the location of the global minimum of a multiple-minima problem. It begins with a series of randomly placed probes in phase space, and then uses an iterative redistribution of the worst probes into better regions of phase space until a chosen convergence criterion is fulfilled. The method converges quickly, does not require derivatives, and is resistant to becoming trapped in local minima. Comparison of this algorithm with others using a standard test suite demonstrates that the number of function calls has been decreased, conservatively, by a factor of about three at the same degree of accuracy. Two major variations of the method are presented, differing primarily in the method of choosing the probes that act as the basis for the new probes. The first variation, termed the lowest energy pivot method, ranks all probes by their energy and keeps the best probes. The probes being discarded select from those being kept as the basis for the new cycle. In the second variation, the nearest neighbor pivot method, all probes are paired with their nearest neighbor. The member of each pair with the higher energy is relocated in the vicinity of its neighbor. Both methods are tested against a standard test suite of functions to determine their relative efficiency, and the nearest neighbor pivot method is found to be the more efficient. A series of Lennard-Jones clusters is optimized with the nearest neighbor method, and a scaling law is found for CPU time versus the number of particles in the system. The two methods are then compared more explicitly, and finally a study in the use of the pivot method for solving the Schrödinger equation is presented. The nearest neighbor method is found to be able to solve for the ground state of the quantum harmonic oscillator from a pure random initialization of the wavefunction.
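
    A compact sketch of the nearest-neighbor pivot idea follows; it is simplified relative to the dissertation (the pairing scheme and the step-size schedule here are our own choices). Each probe is paired with its nearest neighbor, and the worse member of each pair is resampled near the better one while the search scale shrinks.

```python
import numpy as np

def pivot_step(probes, f, scale, rng):
    """One nearest-neighbor pivot iteration: relocate the worse member of
    each nearest-neighbor pair into the vicinity of the better member."""
    values = np.array([f(p) for p in probes])
    d = np.linalg.norm(probes[:, None, :] - probes[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)
    for i, j in enumerate(nearest):
        if values[i] > values[j]:  # probe i is worse: move it near probe j
            probes[i] = probes[j] + scale * rng.normal(size=probes.shape[1])
    return probes

rng = np.random.default_rng(2)
f = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))  # Rastrigin
probes = rng.uniform(-5, 5, size=(40, 2))
for k in range(200):
    probes = pivot_step(probes, f, scale=5.0 * 0.97**k, rng=rng)
print(min(f(p) for p in probes))   # approaches the global minimum at 0
```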

  19. Selection of optimal oligonucleotide probes for microarrays using multiple criteria, global alignment and parameter estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xingyuan; He, Zhili; Zhou, Jizhong

    2005-10-30

    The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose 'false' specific oligonucleotides or miss 'true' optimal probes in a considerable proportion. We have developed a software tool, called CommOligo, using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3'-untranslated region (3'-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request.

  20. Rapid biosensing tools for cancer biomarkers.

    PubMed

    Ranjan, Rajeev; Esimbekova, Elena N; Kratasyuk, Valentina A

    2017-01-15

    The present review critically discusses the latest developments in the field of smart diagnostic systems for cancer biomarkers. A wide coverage of recent biosensing approaches is presented, involving aptamers, enzymes, DNA probes, fluorescent probes, interacting proteins and antibodies in the vicinity of transducers such as electrochemical, optical and piezoelectric devices. Recent advances in biosensing approaches for cancer biomarkers owe much credit to functionalized nanomaterials, with their unique opto-electronic properties and enhanced surface-to-volume ratio. Biosensing methods for a wide range of cancer biomarkers are summarized, emphasizing the key principles involved. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Underwater probing with laser radar

    NASA Technical Reports Server (NTRS)

    Carswell, A. I.; Sizgoric, S.

    1975-01-01

    Recent advances in laser and electro-optics technology have greatly enhanced the feasibility of active optical probing techniques aimed at the remote sensing of water parameters. This paper describes a LIDAR (laser radar) that has been designed and constructed for underwater probing. The influence of the optical properties of water on the general design parameters of a LIDAR system is considered, and the specific details in the choice of the constructed LIDAR are discussed. This system utilizes a cavity-dumped argon ion laser transmitter capable of 50 watt peak powers, 10 nanosecond pulses and megahertz pulse repetition rates at 10 different wavelengths in the blue-green region of the spectrum. The performance of the system, in probing various types of water, is demonstrated by summarizing the results of initial laboratory and field experiments.

  2. Probing numerical Laplace inversion methods for two and three-site molecular exchange between interconnected pore structures.

    PubMed

    Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H

    2018-01-01

    Two-dimensional (2D) Nuclear Magnetic Resonance relaxometry experiments are a powerful tool extensively used to probe the interaction among different pore structures, mostly in inorganic systems. The analysis of the collected experimental data generally consists of a 2D numerical inversion of time-domain data in which T₂-T₂ maps are generated. Through the years, different algorithms for the numerical inversion have been proposed. In this paper, two algorithms for numerical inversion are tested and compared under different conditions of exchange dynamics: the method based on the Butler-Reeds-Dawson (BRD) algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA) method. By constructing a theoretical model, the algorithms were tested for two- and three-site porous media, varying the exchange-rate parameters, the pore sizes and the signal-to-noise ratio. In order to test the methods under realistic experimental conditions, a challenging organic system was chosen. The molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method were precise for all the studied parameters, except when SNR conditions were extreme. Copyright © 2017 Elsevier Inc. All rights reserved.
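
    As an illustration of the FISTA side of the comparison, the sketch below solves a one-dimensional nonnegative Laplace inversion, minimizing ||Kf - g||² subject to f >= 0 with the standard FISTA iteration (the 2D T₂-T₂ problem uses Kronecker-structured kernels of the same exponential form). This is a generic sketch of the algorithm class, not the papers' implementation.

```python
import numpy as np

# Exponential kernel K_ij = exp(-t_i / T2_j) linking a decay curve g to a
# relaxation-time distribution f; grids and noise level are assumptions.
t  = np.linspace(1e-3, 1.0, 200)
T2 = np.logspace(-3, 0, 100)
K  = np.exp(-t[:, None] / T2[None, :])

rng = np.random.default_rng(3)
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.1) ** 2)  # one peak
g = K @ f_true + 0.01 * rng.normal(size=t.size)            # noisy data

L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
f = np.zeros(T2.size); y = f.copy(); s = 1.0
for _ in range(500):
    grad = K.T @ (K @ y - g)
    f_new = np.maximum(y - grad / L, 0.0)        # step + nonnegativity
    s_new = (1 + np.sqrt(1 + 4 * s * s)) / 2
    y = f_new + (s - 1) / s_new * (f_new - f)    # Nesterov momentum
    f, s = f_new, s_new
print(np.linalg.norm(K @ f - g))     # residual after 500 iterations
```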

  3. Design of primers and probes for quantitative real-time PCR methods.

    PubMed

    Rodríguez, Alicia; Rodríguez, Mar; Córdoba, Juan J; Andrade, María J

    2015-01-01

    Design of primers and probes is one of the most crucial factors affecting the success and quality of quantitative real-time PCR (qPCR) analyses, since an accurate and reliable quantification depends on using efficient primers and probes. The design of primers and probes should meet several criteria to find potential primers and probes for specific qPCR assays. The formation of primer-dimers and other non-specific products should be avoided or reduced. This factor is especially important when designing primers for SYBR(®) Green protocols, but also in designing probes to ensure the specificity of the developed qPCR protocol. To design primers and probes for qPCR, multiple software programs and websites are available, many of them free. These tools often consider the default requirements for primers and probes, although new research advances in primer and probe design should be progressively added to the different algorithm programs. After a proper design, a precise validation of the primers and probes is necessary. Specific considerations should be taken into account when designing primers and probes for multiplex qPCR and reverse transcription qPCR (RT-qPCR). This chapter provides guidelines for the design of suitable primers and probes and their subsequent validation through the development of singleplex qPCR, multiplex qPCR, and RT-qPCR protocols.
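
    A few of the screening criteria listed above are easily made executable. The thresholds below are common rules of thumb rather than the chapter's prescriptions, and the melting temperature uses the simple Wallace rule, which is only adequate for short oligonucleotides.

```python
# Basic primer screening filters. GC and Tm windows are rule-of-thumb
# assumptions; the Wallace rule Tm = 2(A+T) + 4(G+C) suits short oligos.

def gc_content(seq):
    return sum(seq.count(b) for b in "GC") / len(seq)

def wallace_tm(seq):
    return (2 * (seq.count("A") + seq.count("T"))
            + 4 * (seq.count("G") + seq.count("C")))

def passes_basic_filters(primer, gc_range=(0.40, 0.60), tm_range=(55, 65)):
    return (gc_range[0] <= gc_content(primer) <= gc_range[1]
            and tm_range[0] <= wallace_tm(primer) <= tm_range[1])

print(passes_basic_filters("ATGCGTACGTTAGCCTAGGA"))  # GC=0.50, Tm=60 -> True
```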

  4. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  5. Filetype Identification Using Long, Summarized N-Grams

    DTIC Science & Technology

    2011-03-01

    compressed or encrypted data. If the algorithm used to compress or encrypt the data can be determined, then it is frequently possible to uncompress... fragments. His implementation utilized the bzip2 library to compress the file fragments. The bzip2 library is based off the Lempel-Ziv-Markov chain... algorithm that uses a dictionary compression scheme to remove repeating data patterns within a set of data. The removed patterns are listed within the

  6. Characterizing X-ray Attenuation of Containerized Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birrer, N.; Divin, C.; Glenn, S.

    X-ray inspection systems can be used to detect radiological and nuclear threats in imported cargo. In order to better understand performance of these systems, the attenuation characteristics of imported cargo need to be determined. This project focused on developing image processing algorithms for segmenting cargo and using x-ray attenuation to quantify equivalent steel thickness to determine cargo density. These algorithms were applied to over 450 cargo radiographs. The results are summarized in this report.

  7. The deep space network. [tracking and communication support for space probes

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The objectives, functions, and organization of the deep space network are summarized. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. Interface support for the Mariner Venus Mercury 1973 flight and Pioneer 10 and 11 missions is included.

  8. Research and technology 1989

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Marshall Space Flight Center annual report summarizes their advanced studies, research programs, and technological developments. Areas covered include: transportation systems; space systems such as Gravity Probe-B and Gamma Ray Imaging Telescope; data systems; microgravity science; astronomy and astrophysics; solar, magnetospheric, and atomic physics; aeronomy; propulsion; materials and processes; structures and dynamics; automated systems; space systems; and avionics.

  9. Climatological Characterization of Three-Dimensional Storm Structure from Operational Radar and Rain Gauge Data.

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Houze, Robert A., Jr.; Yuter, Sandra E.

    1995-09-01

    Three algorithms extract information on precipitation type, structure, and amount from operational radar and rain gauge data. Tests on one month of data from one site show that the algorithms perform accurately and provide products that characterize the essential features of the precipitation climatology. Input to the algorithms are the operationally executed volume scans of a radar and the data from a surrounding rain gauge network. The algorithms separate the radar echoes into convective and stratiform regions, statistically summarize the vertical structure of the radar echoes, and determine precipitation rates and amounts at high spatial resolution. The convective and stratiform regions are separated on the basis of the intensity and sharpness of the peaks of echo intensity. The peaks indicate the centers of the convective region. Precipitation not identified as convective is stratiform. This method avoids the problem of underestimating the stratiform precipitation. The separation criteria are applied in exactly the same way throughout the observational domain, and the product generated by the algorithm can be compared directly to model output. An independent test of the algorithm on data for which high-resolution dual-Doppler observations are available shows that the convective-stratiform separation algorithm is consistent with the physical definitions of convective and stratiform precipitation. The vertical structure algorithm presents the frequency distribution of radar reflectivity as a function of height and thus summarizes in a single plot the vertical structure of all the radar echoes observed during a month (or any other time period). Separate plots reveal the essential differences in structure between the convective and stratiform echoes. Tests yield similar results (within less than 10%) for monthly rain statistics regardless of the technique used for estimating the precipitation, as long as the radar reflectivity values are adjusted to agree with monthly rain gauge data. It makes little difference whether the adjustment is by monthly mean rates or percentiles. Further tests show that 1-h sampling is sufficient to obtain an accurate estimate of monthly rain statistics.
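
    The separation criterion lends itself to a short sketch. Under our own simplified reading (a fixed reflectivity threshold plus a fixed peakedness margin over the local background mean; the published algorithm uses a range-dependent margin), a grid point is labeled convective when it is intense or stands out above its surroundings.

```python
import numpy as np

def convective_mask(dbz, hard_threshold=40.0, margin=10.0, radius=5):
    """Label a grid point convective if its reflectivity exceeds a fixed
    threshold or exceeds the local background mean by a fixed margin.
    Thresholds and window radius are illustrative assumptions."""
    ny, nx = dbz.shape
    conv = np.zeros_like(dbz, dtype=bool)
    for i in range(ny):
        for j in range(nx):
            win = dbz[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            conv[i, j] = (dbz[i, j] >= hard_threshold
                          or dbz[i, j] - win.mean() >= margin)
    return conv

field = np.full((50, 50), 20.0)      # widespread stratiform echo
field[20:23, 20:23] = 45.0           # embedded convective core
print(convective_mask(field).sum())  # only the 9 core pixels are flagged
```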

  10. Recent Advances in Inorganic Nanoparticle-Based NIR Luminescence Imaging: Semiconductor Nanoparticles and Lanthanide Nanoparticles.

    PubMed

    Kim, Dokyoon; Lee, Nohyun; Park, Yong Il; Hyeon, Taeghwan

    2017-01-18

    Several types of nanoparticle-based imaging probes have been developed to replace conventional luminescent probes. For luminescence imaging, near-infrared (NIR) probes are useful in that they allow deep tissue penetration and high spatial resolution as a result of reduced light absorption/scattering and negligible autofluorescence in biological media. They rely on either an anti-Stokes or a Stokes shift process to generate luminescence. For example, transition metal-doped semiconductor nanoparticles and lanthanide-doped inorganic nanoparticles have been demonstrated as anti-Stokes-shift-based agents that absorb NIR light through a two- or three-photon absorption process and an upconversion process, respectively. On the other hand, quantum dots (QDs) and lanthanide-doped nanoparticles that emit in the NIR-II range (∼1000 to ∼1350 nm) have been suggested as promising Stokes-shift-based imaging agents. In this topical review, we summarize and discuss the recent progress in the development of inorganic nanoparticle-based luminescence imaging probes working in the NIR range.

  11. Frontiers of Big Bang cosmology and primordial nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Cheoun, Myung-Ki; Kajino, Toshitaka; Kusakabe, Motohiko; Yamazaki, Dai G.

    2012-11-01

    We summarize some current research on the formation and evolution of the universe and overview some of the key questions surrounding the big bang. There are really only two observational cosmological probes of the physics of the early universe. Of those two, the only probe during the relevant radiation-dominated epoch is the yield of light elements during the epoch of big bang nucleosynthesis. The synthesis of light elements occurs in the temperature regime from 10⁸ to 10¹⁰ K and times of about 1 to 10⁴ seconds into the big bang. The other probe is the spectrum of temperature fluctuations in the CMB, which (among other things) contains information on the first quantum fluctuations in the universe, along with details of the distribution and evolution of dark matter, baryonic matter and photons up to the surface of photon last scattering. Here, we emphasize the role of these probes in answering some key questions of the big bang and early universe cosmology.

  12. Nanohertz frequency determination for the gravity probe B high frequency superconducting quantum interference device signal.

    PubMed

    Salomon, M; Conklin, J W; Kozaczuk, J; Berberian, J E; Keiser, G M; Silbergleit, A S; Worden, P; Santiago, D I

    2011-12-01

    In this paper, we present a method to measure the frequency and the frequency change rate of a digital signal. This method consists of three consecutive algorithms: frequency interpolation, phase differencing, and a third algorithm specifically designed and tested by the authors. The succession of these three algorithms allowed a 5 parts in 10¹⁰ resolution in frequency determination. The algorithm developed by the authors can be applied to a sampled scalar signal for which a model linking the harmonics of its main frequency to the underlying physical phenomenon is available. This method was developed in the framework of the Gravity Probe B (GP-B) mission. It was applied to the high-frequency (HF) component of GP-B's superconducting quantum interference device signal, whose main frequency f_z is close to the spin frequency of the gyroscopes used in the experiment. A 30 nHz resolution in signal frequency and a 0.1 pHz/s resolution in its decay rate were achieved out of a succession of 1.86 s-long stretches of signal sampled at 2200 Hz. This paper describes the underlying theory of the frequency measurement method as well as its application to GP-B's HF science signal.
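
    The first two stages can be demonstrated on a synthetic tone: a coarse FFT-peak estimate refined by phase differencing between the two halves of the demodulated record. The sampling rate matches the 2200 Hz mentioned above, but the test signal is our own, and the third, harmonic-model stage of the GP-B pipeline is not reproduced here.

```python
import numpy as np

fs, f0, n = 2200.0, 61.337, 4096        # sample rate, true tone, samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# Stage 1: coarse estimate from the FFT peak (resolution fs/n).
spec = np.abs(np.fft.rfft(x))
f_coarse = np.fft.rfftfreq(n, 1 / fs)[spec.argmax()]

# Stage 2: demodulate at the coarse frequency; the phase difference of the
# two record halves over their center-to-center time gives the correction.
z = x * np.exp(-2j * np.pi * f_coarse * t)
ph1 = np.angle(z[: n // 2].sum())
ph2 = np.angle(z[n // 2 :].sum())
f_refined = f_coarse + (ph2 - ph1) / (2 * np.pi * (n / 2) / fs)
print(f_coarse, f_refined)   # the refinement recovers the sub-bin offset
```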

  13. Vision-based system for the control and measurement of wastewater flow rate in sewer systems.

    PubMed

    Nguyen, L S; Schaeli, B; Sage, D; Kayal, S; Jeanbourquin, D; Barry, D A; Rossi, L

    2009-01-01

    Combined sewer overflows and stormwater discharges represent an important source of contamination to the environment. However, the harsh environment inside sewers and the particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. In the following, we present and evaluate an in situ system for the monitoring of water flow in sewers based on video images. This paper focuses on the measurement of the water level based on image-processing techniques. The developed image-based water level algorithms identify the wall/water interface in sewer images and measure its position with respect to real-world coordinates. A web-based user interface and a 3-tier system architecture enable the remote configuration of the cameras and the image-processing algorithms. Images acquired and processed by our system were found to reliably measure water levels and thereby to provide crucial information leading to a better understanding of particular hydraulic behaviors. In terms of robustness and accuracy, the water level algorithm provided equal or better results compared to traditional water level probes in three different in situ configurations.

  14. Swarm Intelligence in Text Document Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Potok, Thomas E

    2008-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior. The research field that attempts to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies is called Swarm Intelligence. Compared to traditional algorithms, swarm algorithms are usually flexible, robust, decentralized and self-organized. These characteristics make swarm algorithms suitable for solving complex problems, such as document collection clustering. The major challenge of today's information society is being overwhelmed with information on any topic being searched for. Fast and high-quality document clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize this overwhelming amount of information. In this chapter, we introduce three nature-inspired swarm intelligence clustering approaches for document clustering analysis. These clustering algorithms use stochastic and heuristic principles discovered from observing bird flocks, fish schools and ant food foraging.

  15. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three-year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  16. Surface topography acquisition method for double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry.

    PubMed

    Zhang, Tao; Gao, Feng; Jiang, Xiangqian

    2017-10-02

    This paper proposes an approach to measuring double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry (DPWSI). The principle and mathematical model are discussed, and the measurement system is calibrated with a combination of standard step-height samples, for the vertical calibration of both probes, and a specially designed calibration artefact for building up the spatial coordinate relationship of the dual-probe measurement system. The topography of the specially designed artefact is acquired by combining measurement results from a white light scanning interferometer (WLSI) and a scanning electron microscope (SEM) for reference. The relative location of the two probes is then determined with a 3D registration algorithm. Experimental validation of the approach is provided, and the results show that the method is able to measure double-sided near-right-angle structured surfaces with nanometer vertical resolution and micrometer lateral resolution.

  17. Rutgers University Subcontract B611610 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soundarajan, Sucheta; Eliassi-Rad, Tina; Gallagher, Brian

    Given an incomplete (i.e., partially-observed) network, which nodes should we actively probe in order to achieve the highest accuracy for a given network feature? For example, consider a cyber-network administrator who observes only a portion of the network at time t and wants to accurately identify the most important (e.g., highest PageRank) nodes in the complete network. She has a limited budget for probing the network. Of all the nodes she has observed, which should she probe in order to most accurately identify the important nodes? We propose a novel and scalable algorithm, MaxOutProbe, and evaluate it with respect to four network features (largest connected component, PageRank, core-periphery, and community detection), five network sampling strategies, and seven network datasets from different domains. Across a range of conditions, MaxOutProbe demonstrates consistently high performance relative to several baseline strategies.

  18. Spectroscopic and Statistical Techniques for Information Recovery in Metabonomics and Metabolomics

    NASA Astrophysics Data System (ADS)

    Lindon, John C.; Nicholson, Jeremy K.

    2008-07-01

    Methods for generating and interpreting metabolic profiles based on nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry (MS), and chemometric analysis methods are summarized and the relative strengths and weaknesses of NMR and chromatography-coupled MS approaches are discussed. Given that all data sets measured to date only probe subsets of complex metabolic profiles, we describe recent developments for enhanced information recovery from the resulting complex data sets, including integration of NMR- and MS-based metabonomic results and combination of metabonomic data with data from proteomics, transcriptomics, and genomics. We summarize the breadth of applications, highlight some current activities, discuss the issues relating to metabonomics, and identify future trends.

  19. Spectroscopic and statistical techniques for information recovery in metabonomics and metabolomics.

    PubMed

    Lindon, John C; Nicholson, Jeremy K

    2008-01-01

    Methods for generating and interpreting metabolic profiles based on nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry (MS), and chemometric analysis methods are summarized and the relative strengths and weaknesses of NMR and chromatography-coupled MS approaches are discussed. Given that all data sets measured to date only probe subsets of complex metabolic profiles, we describe recent developments for enhanced information recovery from the resulting complex data sets, including integration of NMR- and MS-based metabonomic results and combination of metabonomic data with data from proteomics, transcriptomics, and genomics. We summarize the breadth of applications, highlight some current activities, discuss the issues relating to metabonomics, and identify future trends.

  20. ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs

    PubMed Central

    2011-01-01

    Background Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications. Results ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms. Conclusions ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938

  1. "Hook"-calibration of GeneChip-microarrays: theory and algorithm.

    PubMed

    Binder, Hans; Preibisch, Stephan

    2008-08-29

    The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and finally the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e., it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts, and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units, which are given by the binding constants of the particular hybridization.
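
    The coordinate construction at the heart of the method is simple to write down. The sketch below generates placeholder PM/MM intensities (not GeneChip data), forms delta = log(PM) - log(MM) and sigma = log(PM) + log(MM), and bin-averages delta over sigma as a crude stand-in for the smoothing that precedes the Langmuir-model fit.

```python
import numpy as np

# Placeholder intensities standing in for PM/MM probe pairs on one chip.
rng = np.random.default_rng(4)
pm = rng.lognormal(mean=6.0, sigma=1.0, size=5000)   # perfect-match
mm = pm * rng.uniform(0.2, 1.0, size=5000)           # mismatch runs lower

delta = np.log10(pm) - np.log10(mm)    # log-difference coordinate
sigma = np.log10(pm) + np.log10(mm)    # log-sum coordinate

# Bin-average delta over sigma: a crude substitute for the moving-window
# smoothing applied before fitting the hook curve.
bins = np.linspace(sigma.min(), sigma.max(), 40)
idx = np.digitize(sigma, bins)
hook = [(bins[k], delta[idx == k].mean())
        for k in range(1, 40) if (idx == k).any()]
print(hook[:3])
```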

  2. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.

  3. Automated bow shock and radiation belt edge identification methods and their application for Cluster, THEMIS/ARTEMIS and Van Allen Probes data

    NASA Astrophysics Data System (ADS)

    Facsko, Gabor; Sibeck, David; Balogh, Tamas; Kis, Arpad; Wesztergom, Viktor

    2017-04-01

    The bow shock and the outer rim of the outer radiation belt are detected automatically by our algorithm, developed as a part of the Boundary Layer Identification Code Cluster Active Archive project. The radiation belt positions are determined from the energized electron measurements that work properly onboard all Cluster spacecraft. For bow shock identification we use magnetometer data and, when available, ion plasma instrument data. In addition, electrostatic wave instrument electron density, spacecraft potential measurements and wake indicator auxiliary data are also used, so the events can be identified by all Cluster probes in a highly redundant way, as the magnetometer and these instruments are still operational on all spacecraft. The capability and performance of the bow shock identification algorithm were tested using known bow shock crossings determined manually from January 29 to February 3, 2002. The verification enabled 70% of the bow shock crossings to be identified automatically. The method shows high flexibility and can be applied to observations from various spacecraft. These tools have now been applied to Time History of Events and Macroscale Interactions during Substorms (THEMIS)/Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) magnetic field, plasma and spacecraft potential observations to identify bow shock crossings, and to Van Allen Probes supra-thermal electron observations to identify the edges of the radiation belt. The outcomes of the algorithms are checked manually, and the parameters used for bow shock identification are refined.

  4. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    PubMed Central

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators; therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency-related fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191
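    The classic two-end synchronized method referenced above can be illustrated on a lumped short-line model: both ends express the fault-point voltage, and equating them yields the per-unit fault distance in closed form. The impedance value and phasors below are invented toy numbers, and shunt capacitance is ignored.

```python
import numpy as np

def fault_distance(Vs, Is, Vr, Ir, z_total):
    """Per-unit fault distance m from the sending end on a short-line
    model: Vs - m*z*Is = Vr - (1-m)*z*Ir, solved for m."""
    m = (Vs - Vr + z_total * Ir) / (z_total * (Is + Ir))
    return m.real

# Toy synchronized phasors consistent with a fault at m = 0.4
z = 5.0 + 50.0j                          # total line impedance, ohms
m_true, Vf = 0.4, 66.0e3 + 0.0j          # fault-point voltage
Is, Ir = 400.0 - 150.0j, 250.0 - 90.0j   # end currents feeding the fault
Vs = Vf + m_true * z * Is
Vr = Vf + (1 - m_true) * z * Ir
print(fault_distance(Vs, Is, Vr, Ir, z))  # ~0.4
```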

  5. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to assemble a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.
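    A hedged sketch of the generic detection idea: flag samples whose aerosol extinction exceeds a multiple of a climatological background above a minimum altitude. The threshold factor, altitude cut, and background profile are placeholder assumptions, not the project's tuned values.

```python
import numpy as np

def flag_psc(extinction, altitude, background, factor=2.0, zmin=12.0):
    """Flag PSC layers where 1000 nm aerosol extinction exceeds a
    multiple of the climatological background above a minimum altitude."""
    return (extinction > factor * background) & (altitude > zmin)

alt = np.linspace(10, 30, 81)                     # km
bg = 1e-4 * np.exp(-(alt - 10) / 8)               # background extinction, 1/km
ext = bg.copy()
ext[(alt > 20) & (alt < 24)] *= 5                 # embedded cloud layer
print(alt[flag_psc(ext, alt, bg)])                # altitudes flagged as PSC
```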

  6. Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1

    DTIC Science & Technology

    2013-05-14

    include tensor contractions, the direct N-body algorithm, and database join. This indicates that this is the first of 5 times that matrix multiplication...and database join. Section 8 summarizes our results, and outlines the contents of Part 2 of this paper. Part 2 will discuss how to compute lower...contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. 2 Geometric Model We begin by reviewing the geometric

  7. Clinical tests of an ultrasonic periodontal probe

    NASA Astrophysics Data System (ADS)

    Hinders, Mark K.; Lynch, John E.; McCombs, Gayle B.

    2002-05-01

    A new ultrasonic periodontal probe has been developed that offers the potential for earlier detection of periodontal disease activity, non-invasive diagnosis, and greater reliability of measurement. A comparison study of the ultrasonic probe to both a manual probe and a controlled-force probe was conducted to evaluate its clinical effectiveness. Twelve patients were enrolled in this study. Two full-mouth examinations were conducted on each patient, scheduled one hour apart. A one-way analysis of variance was performed to compare the results for the three sets of probing depth measurements, followed by a repeated measures analysis to assess the reproducibility of the different probing techniques. These preliminary findings indicate that manual and ultrasonic probing measure different features of the pocket; therefore, it is not obvious how the two depth measurements correspond to each other. However, both methods exhibited a similar tendency toward increasing pocket depths as Gingival Index scores increased. Given the small sample size, further studies need to be conducted using a larger population of patients exhibiting a wider range of disease activity. In addition, studies that allow histological examination of the pocket after probing will help further evaluate the clinical effectiveness of the ultrasonic probe. Future studies will also aid in the development of more effective automated feature recognition algorithms that convert the ultrasonic echoes into pocket depth readings.

  8. Characterizing nanoscale scanning probes using electron microscopy: A novel fixture and a practical guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Tevis D. B., E-mail: tjacobs@pitt.edu; Wabiszewski, Graham E.; Goodman, Alexander J.

    2016-01-15

    The nanoscale geometry of probe tips used for atomic force microscopy (AFM) measurements determines the lateral resolution, contributes to the strength of the tip-surface interaction, and can be a significant source of uncertainty in the quantitative analysis of results. While inverse imaging of the probe tip has been used successfully to determine probe tip geometry, direct observation of the tip profile using electron microscopy (EM) confers several advantages: it provides direct (rather than indirect) imaging, requires fewer algorithmic parameters, and does not require bringing the tip into contact with a sample. In the past, EM-based observation of the probe tip has been achieved using ad hoc mounting methods that are constrained by low throughput, the risk of contamination, and repeatability issues. We report on a probe fixture designed for use in a commercial transmission electron microscope that enables repeatable mounting of multiple AFM probes as well as a reference grid for beam alignment. This communication describes the design, fabrication, and advantages of this probe fixture, including full technical drawings for machining. Further, best practices are discussed for repeatable, non-destructive probe imaging. Finally, examples of the fixture’s use are described, including characterization of common commercial AFM probes in their out-of-the-box condition.

  9. Characterizing nanoscale scanning probes using electron microscopy: A novel fixture and a practical guide

    NASA Astrophysics Data System (ADS)

    Jacobs, Tevis D. B.; Wabiszewski, Graham E.; Goodman, Alexander J.; Carpick, Robert W.

    2016-01-01

    The nanoscale geometry of probe tips used for atomic force microscopy (AFM) measurements determines the lateral resolution, contributes to the strength of the tip-surface interaction, and can be a significant source of uncertainty in the quantitative analysis of results. While inverse imaging of the probe tip has been used successfully to determine probe tip geometry, direct observation of the tip profile using electron microscopy (EM) confers several advantages: it provides direct (rather than indirect) imaging, requires fewer algorithmic parameters, and does not require bringing the tip into contact with a sample. In the past, EM-based observation of the probe tip has been achieved using ad hoc mounting methods that are constrained by low throughput, the risk of contamination, and repeatability issues. We report on a probe fixture designed for use in a commercial transmission electron microscope that enables repeatable mounting of multiple AFM probes as well as a reference grid for beam alignment. This communication describes the design, fabrication, and advantages of this probe fixture, including full technical drawings for machining. Further, best practices are discussed for repeatable, non-destructive probe imaging. Finally, examples of the fixture's use are described, including characterization of common commercial AFM probes in their out-of-the-box condition.

  10. Transverse vorticity measurements using an array of four hot-wire probes

    NASA Technical Reports Server (NTRS)

    Foss, J. F.; Klewicki, C. L.; Disimile, P. J.

    1986-01-01

    A comprehensive description of the technique used to obtain a time series of the quasi-instantaneous transverse vorticity from a four wire array of probes is presented. The algorithmic structure which supports the technique is described in detail and demonstration data, from a large plane shear layer, are presented to provide a specific utilization of the technique. Sensitivity calculations are provided which allow one contribution to the inherent uncertainty of the technique to be evaluated.
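    A simplified version of the underlying estimate: the transverse vorticity combines a finite-difference du/dy across the probe spacing with dv/dx obtained from the v time series via Taylor's frozen-field hypothesis. The probe spacing, convection velocity, and signals below are toy assumptions, not the authors' processing chain.

```python
import numpy as np

def transverse_vorticity(u_top, u_bot, v_mid, dy, dt, Uc):
    """Quasi-instantaneous omega_z = dv/dx - du/dy from a 4-wire array:
    du/dy from the two u signals separated by dy, and dv/dx from the
    v time series via Taylor's hypothesis (d/dx ~ -(1/Uc) d/dt)."""
    du_dy = (u_top - u_bot) / dy
    dv_dx = -np.gradient(v_mid, dt) / Uc
    return dv_dx - du_dy

# Toy signals sampled at 10 kHz
dt, dy, Uc = 1e-4, 2e-3, 10.0
t = np.arange(0, 0.1, dt)
u_top = 0.5 * np.sin(2 * np.pi * 50 * t)
u_bot = 0.4 * np.sin(2 * np.pi * 50 * t + 0.3)
v_mid = 0.3 * np.cos(2 * np.pi * 50 * t)
omega = transverse_vorticity(u_top, u_bot, v_mid, dy, dt, Uc)
print(omega[:3])
```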

  11. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2012-05-01

    UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile, 3) In the DA top 50%-tile, selected probe sets...GeneMaths XT following row mean centering of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using...hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left

  12. An efficient probe of the cosmological CPT violation

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Wang, Yuting; Xia, Jun-Qing; Li, Mingzhe; Zhang, Xinmin

    2015-07-01

    We develop an efficient method based on the linear regression algorithm to probe the cosmological CPT violation using the CMB polarisation data. We validate this method using simulated CMB data and apply it to recent CMB observations. We find that a combined data sample of BICEP1 and BOOMERanG 2003 favours a nonzero isotropic rotation angle at the 2.3σ confidence level, i.e., ᾱ = −3.3° ± 1.4° (68% CL) with systematics included.
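    For small angles the rotated spectra obey C_l^TB ≈ C_l^EE sin(2α), so a least-squares slope of TB against EE gives an estimate of α. The sketch below uses synthetic spectra and omits the full likelihood treatment of the paper.

```python
import numpy as np

def rotation_angle(cl_ee, cl_tb_obs):
    """Estimate the isotropic rotation angle alpha (degrees) from the
    linear relation C_l^TB ~ C_l^EE * sin(2*alpha) via the
    least-squares slope through the origin."""
    slope = np.dot(cl_ee, cl_tb_obs) / np.dot(cl_ee, cl_ee)
    return 0.5 * np.degrees(np.arcsin(slope))

# Toy spectra generated with alpha = -3.3 degrees plus noise
rng = np.random.default_rng(3)
cl_ee = np.abs(rng.normal(1.0, 0.3, 200))
alpha_true = np.radians(-3.3)
cl_tb = cl_ee * np.sin(2 * alpha_true) + rng.normal(0, 0.02, 200)
print(rotation_angle(cl_ee, cl_tb))   # ~ -3.3
```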

  13. Adaptive optics compensation of orbital angular momentum beams with a modified Gerchberg-Saxton-based phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun

    2017-12-01

    Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
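    The classic (unmodified) Gerchberg-Saxton loop underlying such phase retrieval is sketched below: alternate between the probe-beam plane and the far field, imposing the known amplitude in each while keeping the evolving phase. The FFT propagation model and toy phase screen are assumptions; the SPP variant in the paper adds perturbation steps not shown here.

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200):
    """Classic GS loop: recover the phase screen linking a known input
    amplitude to a measured far-field amplitude (FFT propagation model)."""
    phase = np.zeros_like(amp_in)
    for _ in range(n_iter):
        far = np.fft.fft2(amp_in * np.exp(1j * phase))
        far = amp_out * np.exp(1j * np.angle(far))  # impose measured amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                      # keep phase, impose input amplitude
    return phase

# Toy example: Gaussian probe beam distorted by a smooth phase screen
n = 64
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
amp_in = np.exp(-(X**2 + Y**2))
true_phase = 0.8 * np.sin(X) * np.cos(Y)
amp_out = np.abs(np.fft.fft2(amp_in * np.exp(1j * true_phase)))
est = gerchberg_saxton(amp_in, amp_out)
print(float(np.std(np.angle(np.exp(1j * (est - true_phase))))))  # residual
```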

  14. Turning Search into Knowledge Management.

    ERIC Educational Resources Information Center

    Kaufman, David

    2002-01-01

    Discussion of knowledge management for electronic data focuses on creating a high quality similarity ranking algorithm. Topics include similarity ranking and unstructured data management; searching, categorization, and summarization of documents; query evaluation; considering sentences in addition to keywords; and vector models. (LRW)

  15. Improved data reduction algorithm for the needle probe method applied to in-situ thermal conductivity measurements of lunar and planetary regoliths

    NASA Astrophysics Data System (ADS)

    Nagihara, Seiichi; Hedlund, Magnus; Zacny, Kris; Taylor, Patrick T.

    2014-03-01

    The needle probe method (also known as the ‘hot wire’ or ‘line heat source’ method) is widely used for in-situ thermal conductivity measurements on terrestrial soils and marine sediments. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be ~2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 min. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on a lunar regolith simulant, JSC-1A. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the volumetric heat capacity of the probe) can be measured and the other (the volumetric heat capacity of the regolith/simulant) may be estimated from the surface geologic observation and temperature measurements. Therefore, overall, the new data reduction scheme would make in-situ thermal conductivity measurement more practical on planetary missions.
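    The conventional large-time reduction that the new scheme replaces is easy to state: the late-time temperature rise of a line heat source follows T(t) = (q / 4πk) ln t + C, so the slope of T against ln t yields the conductivity k. The sketch below fits synthetic data this way; it implements the approximate scheme, not the exact-solution algorithm of the paper.

```python
import numpy as np

def conductivity_large_time(t, T, q):
    """Conventional needle-probe reduction: for large t the temperature
    rise follows T = (q / (4*pi*k)) * ln(t) + C, so the slope of T
    versus ln(t) gives the thermal conductivity k."""
    slope, _ = np.polyfit(np.log(t), T, 1)
    return q / (4.0 * np.pi * slope)

# Toy heating curve for k = 0.01 W/(m K) and a q = 2 W/m line source
k_true, q = 0.01, 2.0
t = np.linspace(200.0, 3600.0, 500)            # late-time window, seconds
T = q / (4 * np.pi * k_true) * np.log(t) + 1.5
print(conductivity_large_time(t, T, q))        # ~0.01
```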

  16. Improved Data Reduction Algorithm for the Needle Probe Method Applied to In-Situ Thermal Conductivity Measurements of Lunar and Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.

    2013-01-01

    The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on the JSC-1A lunar regolith simulant. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the volumetric heat capacity of the probe) can be measured and the other (the volumetric heat capacity of the regolith/simulant) may be estimated from the surface geologic observation and temperature measurements. Therefore, overall, the new data reduction scheme would make in-situ thermal conductivity measurement more practical on planetary missions.

  17. Survey Study Investigating the Significance of Conference Participation to Undergraduate Research Students

    ERIC Educational Resources Information Center

    Mabrouk, Patricia Ann

    2009-01-01

    This article summarizes the findings of a survey study of undergraduate research (UR) students presenting their research at the fall 2007 and fall 2008 American Chemical Society (ACS) National Meetings. The purpose of the study is to probe the perceived benefits of conference participation to UR students. Results suggest that participation in…

  18. Teaching Reading Comprehension Skills to a Child with Autism Using Behaviour Skills Training

    ERIC Educational Resources Information Center

    Singh, Binita D.; Moore, Dennis W.; Furlonger, Brett E.; Anderson, Angelika; Busacca, Margherita L.; English, Derek L.

    2017-01-01

    A multiple probe design across skills was used to examine the effects of behaviour skills training (BST) on teaching four reading comprehension skills (predicting, questioning, clarifying, and summarizing) to a 7th grade student with autism. Following baseline, the student received 12 sessions of BST during which each skill was taught to…

  19. Gradual Disengagement: A Portrait of the 2008-09 Dropouts in the Baltimore City Schools. Research Report

    ERIC Educational Resources Information Center

    Mac Iver, Martha Abele

    2010-01-01

    This report paints a collective portrait of the Baltimore City Schools dropouts of 2008-09 to summarize some of the commonalities that join their individual stories together. After examining the surface level demographic characteristics of these dropouts, researchers probed more deeply into their behavioral characteristics in the years preceding…

  20. Apollo 11: A good ending to a bad decade

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The Gemini program and the Apollo program which culminated in landing a man on the moon and safely returning him to earth are highlighted. The space program in the aftermath of Apollo 11 is briefly summarized, including: Skylab, Apollo Soyuz, Mars and Venus probes, improved world communications, remote sensing of world resources, and finally, space shuttle.

  1. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches, but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.
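    The contour-buildup construction itself is involved, but the surface it targets can be approximated numerically by the simpler Shrake-Rupley sampling shown below (probe-inflated spheres, test points, burial test). This is explicitly a different and slower technique, offered only for intuition about rolling-probe surfaces.

```python
import numpy as np

def sasa_shrake_rupley(centers, radii, probe=1.4, n_pts=400):
    """Numerical solvent-accessible surface area: place points on each
    probe-inflated sphere and count those not buried inside any other
    inflated sphere (Shrake-Rupley, not contour-buildup)."""
    # quasi-uniform points on the unit sphere via the golden spiral
    i = np.arange(n_pts) + 0.5
    phi = np.arccos(1 - 2 * i / n_pts)
    theta = np.pi * (1 + 5**0.5) * i
    unit = np.c_[np.cos(theta) * np.sin(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(phi)]
    total = 0.0
    R = radii + probe
    for a, (c, r) in enumerate(zip(centers, R)):
        pts = c + r * unit
        buried = np.zeros(n_pts, dtype=bool)
        for b, (c2, r2) in enumerate(zip(centers, R)):
            if b != a:
                buried |= np.linalg.norm(pts - c2, axis=1) < r2
        total += 4 * np.pi * r**2 * (~buried).mean()
    return total

# Two overlapping carbon-like atoms (angstroms)
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(sasa_shrake_rupley(centers, np.array([1.7, 1.7])))
```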

  2. Near-infrared fluorescent probes in cancer imaging and therapy: an emerging field

    PubMed Central

    Yi, Xiaomin; Wang, Fuli; Qin, Weijun; Yang, Xiaojian; Yuan, Jianlin

    2014-01-01

    Near-infrared fluorescence (NIRF) imaging is an attractive modality for early cancer detection with high sensitivity and multi-detection capability. Due to convenient modification by conjugating with moieties of interest, NIRF probes are ideal candidates for cancer targeted imaging. Additionally, the combinatory application of NIRF imaging and other imaging modalities that can delineate anatomical structures extends fluorometric determination of biomedical information. Moreover, nanoparticles loaded with NIRF dyes and anticancer agents contribute to the synergistic management of cancer, which integrates the advantage of imaging and therapeutic functions to achieve the ultimate goal of simultaneous diagnosis and treatment. Appropriate probe design with targeting moieties can retain the original properties of NIRF and pharmacokinetics. In recent years, great efforts have been made to develop new NIRF probes with better photostability and strong fluorescence emission, leading to the discovery of numerous novel NIRF probes with fine photophysical properties. Some of these probes exhibit tumoricidal activities upon light radiation, which holds great promise in photothermal therapy, photodynamic therapy, and photoimmunotherapy. This review aims to provide a timely and concise update on emerging NIRF dyes and multifunctional agents. Their potential uses as agents for cancer specific imaging, lymph node mapping, and therapeutics are included. Recent advances of NIRF dyes in clinical use are also summarized. PMID:24648733

  3. Near-infrared fluorescent probes in cancer imaging and therapy: an emerging field.

    PubMed

    Yi, Xiaomin; Wang, Fuli; Qin, Weijun; Yang, Xiaojian; Yuan, Jianlin

    2014-01-01

    Near-infrared fluorescence (NIRF) imaging is an attractive modality for early cancer detection with high sensitivity and multi-detection capability. Due to convenient modification by conjugating with moieties of interest, NIRF probes are ideal candidates for cancer targeted imaging. Additionally, the combinatory application of NIRF imaging and other imaging modalities that can delineate anatomical structures extends fluorometric determination of biomedical information. Moreover, nanoparticles loaded with NIRF dyes and anticancer agents contribute to the synergistic management of cancer, which integrates the advantage of imaging and therapeutic functions to achieve the ultimate goal of simultaneous diagnosis and treatment. Appropriate probe design with targeting moieties can retain the original properties of NIRF and pharmacokinetics. In recent years, great efforts have been made to develop new NIRF probes with better photostability and strong fluorescence emission, leading to the discovery of numerous novel NIRF probes with fine photophysical properties. Some of these probes exhibit tumoricidal activities upon light radiation, which holds great promise in photothermal therapy, photodynamic therapy, and photoimmunotherapy. This review aims to provide a timely and concise update on emerging NIRF dyes and multifunctional agents. Their potential uses as agents for cancer specific imaging, lymph node mapping, and therapeutics are included. Recent advances of NIRF dyes in clinical use are also summarized.

  4. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes the credibility of algorithm performance, the quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF over single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data, so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective and needs to be aligned with objective machine metrics.

  5. Aerosol Climate Time Series in ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2016-04-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full-mission time series of 2 GCOS-required aerosol parameters have been completely validated and released: Aerosol Optical Depth (AOD) from the dual-view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star-occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine-mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from ATSR instruments and the POLDER sensor), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of the first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWiFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values), which were taken into account for a reprocessing. The datasets contain pixel-level uncertainty estimates, which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Pixel-level uncertainty validation will be summarized and discussed, including unknown components and their potential usefulness and limitations. Opportunities for time-series extension with successor instruments of the Sentinel family will be described, and the complementarity of the different satellite aerosol products (e.g. dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  6. Viking lander camera radiometry calibration report, volume 2

    NASA Technical Reports Server (NTRS)

    Wolf, M. R.; Atwood, D. L.; Morrill, M. E.

    1977-01-01

    The requirements, performance validation, and interfaces for the RADCAM program, which converts Viking lander camera image data to radiometric units, were established. A proposed algorithm is described, and an appendix summarizing the planned reduction of camera test data is included.

  7. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  8. Progress in molecular imaging in endoscopy and endomicroscopy for cancer imaging

    PubMed Central

    Khondee, Supang; Wang, Thomas D.

    2014-01-01

    Imaging is an essential tool for effective cancer management. Endoscopes are important medical instruments for performing in vivo imaging in hollow organs. Early detection of cancer can be achieved with surveillance using endoscopy, and has been shown to reduce mortality and to improve outcomes. Recently, great advancements have been made in endoscopic instruments, including new developments in optical designs, light sources, optical fibers, miniature scanners, and multimodal systems, allowing for improved resolution, greater tissue penetration, and multispectral imaging. In addition, progress has been made in the development of highly-specific optical probes, allowing for improved specificity for molecular targets. Integration of these new endoscopic instruments with molecular probes provides a unique opportunity for significantly improving patient outcomes and has potential to further improve early detection, image guided therapy, targeted therapy, and personalized medicine. This work summarizes current and evolving endoscopic technologies, and provides an overview of various promising optical molecular probes. PMID:23502247

  9. ESR imaging investigations of two-phase systems.

    PubMed

    Herrmann, Werner; Stösser, Reinhard; Borchert, Hans-Hubert

    2007-06-01

    The possibilities of electron spin resonance (ESR) and electron spin resonance imaging (ESRI) for investigating the properties of the spin probes TEMPO and TEMPOL in two-phase systems have been examined in the systems water/n-octanol, Miglyol/Miglyol, and Precirol/Miglyol. Phases and regions of the phase boundary could be mapped successfully by means of the isotropic hyperfine coupling constants, and, moreover, the quantification of rotational and lateral diffusion of the spin probes was possible. For the quantitative treatment of the micropolarity, a simplified empirical model was established on the basis of the Nernst distribution and the experimentally determined isotropic hyperfine coupling constants. The model describes not only the summarized micropolarities of coexisting phases, but also the region of the phase boundary, where solvent molecules of different polarities and tendencies to form hydrogen bonds compete to interact with the NO group of the spin probe. Copyright 2007 John Wiley & Sons, Ltd.

  10. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    The technology of vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is treated as a problem of moving object detection, and a drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
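    A crude stand-in for the low rank plus sparse idea: model the static background as a truncated SVD of the stacked frames and take the thresholded residual as the moving-object mask. The rank and threshold below are arbitrary assumptions; the paper's feature-augmented decomposition is more elaborate.

```python
import numpy as np

def lowrank_sparse_split(frames, rank=1, thresh=3.0):
    """Crude low-rank + sparse split: background from a truncated SVD of
    the stacked frames, moving object from the thresholded residual."""
    M = frames.reshape(frames.shape[0], -1)            # frames x pixels
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # low-rank background
    S = M - L                                          # sparse residual
    mask = np.abs(S) > thresh * S.std()
    return L.reshape(frames.shape), mask.reshape(frames.shape)

# Toy video: static background plus a brief bright object
rng = np.random.default_rng(6)
bg = rng.uniform(size=(16, 16))
frames = np.tile(bg, (20, 1, 1)) + 0.01 * rng.standard_normal((20, 16, 16))
frames[10:14, 5:8, 5:8] += 1.0
L, mask = lowrank_sparse_split(frames)
print(int(mask[12].sum()))   # pixels flagged in frame 12
```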

  11. Predicting DNA hybridization kinetics from sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu

    2018-01-01

    Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on similar reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
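    A generic realization of weighted neighbour voting under assumed features: predict the log rate constant of a query as the inverse-distance-weighted mean over its nearest neighbours in feature space. The synthetic features, weights, and neighbour count below are placeholders, not the paper's optimized six-feature model.

```python
import numpy as np

def wnv_predict(features, log_k, query, weights, n_neighbors=5):
    """Weighted neighbour voting: predict the log10 rate constant of a
    query as the similarity-weighted mean over its nearest neighbours
    in a (here 6-dimensional) feature space."""
    d = np.sqrt((((features - query) * weights) ** 2).sum(axis=1))
    nn = np.argsort(d)[:n_neighbors]
    w = 1.0 / (d[nn] + 1e-9)                  # closer reactions vote more
    return np.sum(w * log_k[nn]) / np.sum(w)

# Toy database of 100 reactions with 6 sequence-derived features
rng = np.random.default_rng(4)
feats = rng.normal(size=(100, 6))
logk = 5.0 + feats @ np.array([0.5, -0.3, 0.2, 0.1, -0.1, 0.05])
query = rng.normal(size=6)
print(wnv_predict(feats, logk, query, weights=np.ones(6)))
```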

  12. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, allowing one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering, however, whose induced computation accuracy degradation requires analysis. The mathematical exposition, and the exhaustive simulation-based study and analysis of the limitations, of this novel tilted-layer modeling formulation are Chapter 7's main contributions.

  13. Setting up a probe based, closed tube real-time PCR assay for focused detection of variable sequence alterations.

    PubMed

    Becságh, Péter; Szakács, Orsolya

    2014-10-01

    In a diagnostic workflow for detecting sequence alterations, it is sometimes important to design an algorithm that combines screening and direct tests. Normally the use of direct tests, mainly sequencing, is limited. There is an increased need for effective screening tests that keep the tube closed during the whole process, thereby decreasing the risk of PCR product contamination. The aim of this study was to design such a closed tube, detection probe based screening assay to detect different kinds of sequence alterations in exon 11 of the human c-kit gene region. Inside this region there are various possible deletions and single nucleotide changes. During assay setup, several probe chemistry formats were screened and tested. After some optimization steps, the TaqMan probe format was selected.

  14. ProbeDesigner: for the design of probesets for branched DNA (bDNA) signal amplification assays.

    PubMed

    Bushnell, S; Budde, J; Catino, T; Cole, J; Derti, A; Kelso, R; Collins, M L; Molino, G; Sheridan, P; Monahan, J; Urdea, M

    1999-05-01

    The sensitivity and specificity of branched DNA (bDNA) assays are derived in part through the judicious design of the capture and label extender probes. To minimize non-specific hybridization (NSH) events, which elevate assay background, candidate probes must be computer screened for complementarity with generic sequences present in the assay. We present a software application which allows for rapid and flexible design of bDNA probesets for novel targets. It includes an algorithm for estimating the magnitude of NSH contribution to background, a mechanism for removing probes with elevated contributions, a methodology for the simultaneous design of probesets for multiple targets, and a graphical user interface which guides the user through the design steps. The program is available as a commercial package through the Pharmaceutical Drug Discovery program at Chiron Diagnostics.
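    The complementarity screen at the heart of such designs can be approximated by a string search: find the longest stretch of a candidate probe whose reverse complement occurs in a generic assay sequence. This is a hypothetical, simplified proxy, not the program's actual NSH-estimation algorithm; the sequences below are invented.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def max_complementary_run(probe, generic):
    """Screen a candidate probe against a generic assay sequence:
    return the length of the longest contiguous stretch of the probe
    whose reverse complement appears in the generic sequence
    (a crude proxy for non-specific hybridization potential)."""
    rc = "".join(COMPLEMENT[b] for b in reversed(probe))
    best = 0
    for i in range(len(rc)):
        # only try to extend beyond the current best run
        for j in range(i + best + 1, len(rc) + 1):
            if rc[i:j] in generic:
                best = j - i
            else:
                break
    return best

generic = "GGATCCAGTCTGACGGTTAACCGTAGCATGCA"
print(max_complementary_run("TTAACCGTCAG", generic))  # longest NSH run
```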

  15. Restoring the lattice of Si-based atom probe reconstructions for enhanced information on dopant positioning.

    PubMed

    Breen, Andrew J; Moody, Michael P; Ceguerra, Anna V; Gault, Baptiste; Araullo-Peters, Vicente J; Ringer, Simon P

    2015-12-01

    The following manuscript presents a novel approach for creating lattice based models of Sb-doped Si directly from atom probe reconstructions for the purposes of improving information on dopant positioning and directly informing quantum mechanics based materials modeling approaches. Sophisticated crystallographic analysis techniques are used to detect latent crystal structure within the atom probe reconstructions with unprecedented accuracy. A distortion correction algorithm is then developed to precisely calibrate the detected crystal structure to the theoretically known diamond cubic lattice. The reconstructed atoms are then positioned on their most likely lattice positions. Simulations are then used to determine the accuracy of such an approach and show that improvements to short-range order measurements are possible for noise levels and detector efficiencies comparable with experimentally collected atom probe data. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Wiener Chaos and Nonlinear Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lototsky, S.V.

    2006-11-15

    The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.

  17. The Voyager flights to Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The results of the mini-Grand Tour to Jupiter and Saturn by the Voyager 1 and 2 spacecraft are highlighted. Features of the spacecraft are depicted including the 11 instruments designed to probe the planets and their magnetic environments, the rings of Saturn, the fleets of satellites escorting the planets, and the interplanetary medium. Major scientific discoveries relating to these phenomena are summarized.

  18. A study of hydrocarbons associated with brines from DOE geopressured wells. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeley, D.F.

    1993-07-01

    Accomplishments are summarized on the following tasks: distribution coefficients and solubilities, DOE design well sampling, analysis of well samples, review of theoretical models of geopressured reservoir hydrocarbons, monitor for aliphatic hydrocarbons, development of a pH meter probe, DOE design well scrubber analysis, removal and disposition of gas scrubber equipment at Pleasant Bayou Well, and disposition of archived brines.

  19. A study of hydrocarbons associated with brines from DOE geopressured wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeley, D.F.

    1993-01-01

    Accomplishments are summarized on the following tasks: distribution coefficients and solubilities, DOE design well sampling, analysis of well samples, review of theoretical models of geopressured reservoir hydrocarbons, monitor for aliphatic hydrocarbons, development of a pH meter probe, DOE design well scrubber analysis, removal and disposition of gas scrubber equipment at Pleasant Bayou Well, and disposition of archived brines.

  20. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.

  1. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identify a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function fitted to the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
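    The core relation is that a pure delay τ between coherent signals produces a cross-spectrum phase that is linear in frequency, φ(f) = −2πfτ, so fitting the phase slope recovers the delay. The sketch below uses scipy.signal.csd on synthetic data; the band limits and segment length are arbitrary choices, not the paper's adaptive gradient scheme.

```python
import numpy as np
from scipy.signal import csd

def phase_delay(x, y, fs, fmax):
    """Estimate the time delay between two coherent signals from the
    slope of the cross-power-spectrum phase over a low-frequency band."""
    f, Pxy = csd(x, y, fs=fs, nperseg=1024)
    band = (f > 0) & (f < fmax)
    phase = np.unwrap(np.angle(Pxy[band]))
    slope, _ = np.polyfit(f[band], phase, 1)   # phase = -2*pi*f*delay
    return -slope / (2 * np.pi)

# Toy data: y is x delayed by 25 samples plus noise
rng = np.random.default_rng(5)
fs, delay = 1000.0, 25
x = rng.standard_normal(20000)
y = np.roll(x, delay) + 0.5 * rng.standard_normal(20000)
print(phase_delay(x, y, fs, fmax=100.0) * fs)   # ~25 samples
```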

  2. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  3. Earth observing system. Output data products and input requirements, version 2.0. Volume 3: Algorithm summary tables and non-EOS data products

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindar; Swaroop, Anand

    1992-01-01

    Volume 3 assists Earth Observing System (EOS) investigators in locating required non-EOS data products by identifying their non-EOS input requirements and providing information on data sets available at various Distributed Active Archive Centers (DAACs), including those from Pathfinder Activities and Earth Probes. Volume 3 is intended to complement, not to duplicate, the EOSDIS Science Data Plan (SDP) by providing detailed data set information which was not presented in the SDP. Section 9 of this volume discusses the algorithm summary tables containing information on retrieval algorithms, expected outputs and required input data. Section 10 describes the non-EOS input requirements of instrument teams and IDS investigators. Also described are the current and future data holdings of the original seven DAACs and data products planned from future missions and projects, including Earth Probes and Pathfinder Activities. Section 11 describes the sources of information used in compiling the data set information presented in this volume. A list of data set attributes used to describe various data sets is presented in Section 12, along with their descriptions. Finally, Section 13 presents the SPSO's future plan to improve this report.

  4. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of a confidence map to indicate the reliability of estimation results at each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation at 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
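    A heavily reduced, 1-D-space-plus-time version of the motion energy stage is sketched below: quadrature spatiotemporal filters are summed in squared pairs, and opposite directions are subtracted. The filter envelope and frequencies are toy assumptions, and nothing here reflects the VLSI pipeline itself.

```python
import numpy as np

def motion_energy_1d(signal_xt, fx, ft):
    """Opponent motion energy from quadrature spatiotemporal filters:
    energy difference between two oppositely tuned filter pairs
    (1-D space + time toy version of the V1/MT model)."""
    nx, nt = signal_xt.shape
    X, T = np.meshgrid(np.arange(nx), np.arange(nt), indexing="ij")
    def quad_pair(sign):
        arg = 2 * np.pi * (fx * X + sign * ft * T)
        env = np.exp(-((X - nx / 2) ** 2 + (T - nt / 2) ** 2) / 50.0)
        c = (signal_xt * env * np.cos(arg)).sum()   # even-phase filter
        s = (signal_xt * env * np.sin(arg)).sum()   # odd-phase filter
        return c * c + s * s                        # phase-invariant energy
    return quad_pair(+1) - quad_pair(-1)            # sign gives direction

# Toy stimulus: a sinusoid drifting at 1 pixel/frame
nx = nt = 32
X, T = np.meshgrid(np.arange(nx), np.arange(nt), indexing="ij")
stim = np.sin(2 * np.pi * 0.125 * (X - T))
print(motion_energy_1d(stim, fx=0.125, ft=0.125))
```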

  5. Human versus Robots in the Discovery and Crystallization of Gigantic Polyoxometalates.

    PubMed

    Duros, Vasilios; Grizou, Jonathan; Xuan, Weimin; Hosni, Zied; Long, De-Liang; Miras, Haralampos N; Cronin, Leroy

    2017-08-28

    The discovery of new gigantic molecules formed by self-assembly and crystal growth is challenging as it combines two contingent events; first is the formation of a new molecule, and second its crystallization. Herein, we construct a workflow that can be followed manually or by a robot to probe the envelope of both events and employ it for a new polyoxometalate cluster, Na6[Mo120Ce6O366H12(H2O)78]·200H2O (1), which has a trigonal-ring type architecture (yield 4.3% based on Mo). Its synthesis and crystallization was probed using an active machine-learning algorithm developed by us to explore the crystallization space, and the algorithm results were compared with those obtained by human experimenters. The algorithm-based search is able to cover ca. 9 times more crystallization space than a random search and ca. 6 times more than humans, and increases the crystallization prediction accuracy to 82.4±0.7% over 77.1±0.9% from human experimenters. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  6. Experimental Overview of Direct Photon Results in Heavy Ion Collisions

    NASA Astrophysics Data System (ADS)

    Novitzky, Norbert

    2016-07-01

    Direct photons are color-blind probes and thus they provide unique opportunities to study the colored medium created in heavy ion collisions. There are many different sources of direct photons, each probing different physics processes as the system evolves. In basic 2 → 2 processes the prompt photons from primary hard scatterings offer the most precise measurements of the outgoing parton energy in the opposite direction. In heavy ion collisions the created medium emits photons as thermal radiation, whose rate and anisotropies provide a unique perspective on the properties and evolution of the system. Recent results on direct photons from the LHC and RHIC experiments are briefly summarized in this paper.

  7. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    PubMed

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

    Ambient mass spectrometry provides great convenience for fast screening and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.

  8. Spacecraft Charging in Geostationary Transfer Orbit

    NASA Technical Reports Server (NTRS)

    Parker, Linda Neergaard; Minow, Joseph I.

    2014-01-01

    The 700 km x 5.8 Re orbit of the two Van Allen Probes spacecraft provides a unique opportunity to investigate spacecraft charging in geostationary transfer orbits. We use records from the Helium Oxygen Proton Electron (HOPE) plasma spectrometer to identify candidate surface charging events based on the "ion line" charging signature in the ion records. We summarize the energetic particle environment and the conditions necessary for charging to occur in this environment. We discuss the altitude, duration, and magnitude of events observed by the Van Allen Probes from the beginning of the mission to the present. In addition, we explore what information the dual satellites provide on the spatial and temporal variations in the charging environments.

  9. Fiber Optic Surface Plasmon Resonance-Based Biosensor Technique: Fabrication, Advancement, and Application.

    PubMed

    Liang, Gaoling; Luo, Zewei; Liu, Kunping; Wang, Yimin; Dai, Jianxiong; Duan, Yixiang

    2016-05-03

    Fiber optic-based biosensors with surface plasmon resonance (SPR) technology are advanced label-free optical biosensing methods. They have brought tremendous progress in the sensing of various chemical and biological species. This review summarizes four sensing configurations (prism, grating, waveguide, and fiber optic) and two ways of exciting the surface plasmons: attenuated total reflection (ATR) and diffraction. Meanwhile, the designs of different probes (U-bent, tapered, and other probes) are also described. Finally, four major types of biosensors, the immunosensor, DNA biosensor, enzyme biosensor, and living cell biosensor, are discussed in detail in terms of their sensing principles and applications. Future prospects of fiber optic-based SPR sensor technology are discussed.

  10. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long, so saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, several researchers have proposed compression algorithms to reduce the amount of data transmitted. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements; it also performed better in terms of energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSN). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.
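
    The MAS algorithm itself is not described in this record, so the sketch below is only a hedged illustration of the generic delta-plus-run-length idea that many low-power sensor-data compressors build on: consecutive readings change little, so their differences compress well and shorten the radio-on time.

    ```python
    # Illustrative lossless compression for slowly varying sensor readings.
    # This is NOT the MAS algorithm (whose details the record does not give),
    # only the common delta + run-length pattern such schemes build on.

    def compress(samples):
        """Delta-encode integer readings, then run-length-encode the deltas."""
        deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
        encoded, i = [], 0
        while i < len(deltas):
            run = 1
            while i + run < len(deltas) and deltas[i + run] == deltas[i]:
                run += 1
            encoded.append((deltas[i], run))   # (delta value, repeat count)
            i += run
        return encoded

    def decompress(encoded):
        deltas = [v for v, n in encoded for _ in range(n)]
        out, acc = [], 0
        for d in deltas:
            acc += d
            out.append(acc)
        return out

    readings = [20, 20, 20, 21, 21, 21, 21, 22, 22, 22]
    packed = compress(readings)
    assert decompress(packed) == readings
    print(packed)   # fewer symbols to transmit means less radio-on time
    ```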

  11. Automated Non-Destructive Testing Array Evaluation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, T; Zavaljevski, N; Bakhtiari, S

    2004-12-24

    Automated Non-Destructive Testing Array Evaluation System (ANTARES) software algorithms were developed for use on X-probe(tm) data. Data used for algorithm development and preliminary performance determination were obtained from a USNRC mock-up at Argonne and from EPRI.

  12. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter.

    PubMed

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of LED. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in the color measurement.

  13. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of LED. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in the color measurement.
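
    As a hedged numerical sketch of the linear-combination reconstruction described above, assume (purely for illustration) Gaussian LED line shapes and a flat detector responsivity; the unknown reflectance is then expanded in the product-function basis and the expansion coefficients are solved from the simulated LED readings.

    ```python
    # Toy reconstruction of spectral reflectance from multicolor-LED readings.
    # LED centers, widths, and the responsivity are invented for illustration.
    import numpy as np

    lam = np.linspace(400, 700, 301)                  # wavelength grid, nm
    centers = np.arange(430, 680, 30)                 # hypothetical LED peaks
    sigma = 25.0 / 2.355                              # 25 nm FWHM Gaussians
    led = np.exp(-0.5 * ((lam[None, :] - centers[:, None]) / sigma) ** 2)
    resp = np.ones_like(lam)                          # flat responsivity
    basis = resp * led                                # product functions b_j

    r_true = 0.5 + 0.3 * np.sin(lam / 40.0)           # toy "true" reflectance
    m = basis @ r_true                                # simulated LED readings

    G = basis @ basis.T                               # Gram matrix <b_i, b_j>
    c = np.linalg.solve(G, m)                         # expansion coefficients
    r_est = c @ basis                                 # reconstructed spectrum

    inside = (lam > 440) & (lam < 660)                # region the LEDs cover
    print(np.max(np.abs(r_est - r_true)[inside]))     # small in-band error
    ```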

  14. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
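
    The record states only that the correction is based on Snell's law; the vector form of Snell's law that such an interface-position correction could build on is sketched below (the cornea-like index and the angles are illustrative, not the paper's values).

    ```python
    # Generic vector Snell's law for tracing a ray across an interface.
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract unit ray direction d at a surface with unit normal n
        (pointing against the incoming ray). Returns None for total
        internal reflection."""
        eta = n1 / n2
        cos_i = -np.dot(d, n)
        k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
        if k < 0.0:
            return None                    # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    # Ray hitting a cornea-like interface (n = 1.0 -> 1.376) at 30 deg.
    d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
    n = np.array([0.0, 0.0, 1.0])
    t = refract(d, n, 1.0, 1.376)
    print(np.degrees(np.arcsin(np.linalg.norm(np.cross(n, t)))))  # ~21.3 deg
    ```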

  15. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of spatial frequency to scene-based NUC. We then present a convergence speed factor, which adaptively changes the convergence speed as the scene dynamic range changes; in effect, the convergence speed factor reduces the standard deviation of the statistical data. The spatial correlation characteristic of the nonuniformity was summarized from extensive experimental statistics and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
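
    The paper's exact convergence speed factor is not reproduced in this record; the heavily simplified sketch below shows only where such a factor could enter a scene-based LMS-style NUC loop, with the step size throttled by the frame's dynamic range. In practice, scene motion and diversity are required for the statistics to separate the fixed pattern from the scene.

    ```python
    # Hedged sketch of scene-based NUC with an adaptive convergence speed.
    # The update rule and factor are illustrative, not the paper's exact ones.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nuc_lms(frames, base_step=0.05):
        g = np.ones_like(frames[0])                 # per-pixel gain estimate
        o = np.zeros_like(frames[0])                # per-pixel offset estimate
        for x in frames:
            y = g * x + o                           # corrected frame
            target = uniform_filter(y, size=5)      # desired: local mean
            err = y - target
            rng = np.ptp(x)                         # scene dynamic range
            step = base_step * rng / (rng + 100.0)  # adaptive speed factor
            norm = 1.0 + x * x                      # NLMS-style normalization
            g -= step * err * x / norm              # gain update
            o -= step * err / norm                  # offset update
        return g, o

    rng_gen = np.random.default_rng(0)
    true_g = 1.0 + 0.05 * rng_gen.standard_normal((64, 64))
    true_o = 2.0 * rng_gen.standard_normal((64, 64))
    frames = [true_g * rng_gen.uniform(0, 100, (64, 64)) + true_o
              for _ in range(200)]                  # crude stand-in scenes
    g, o = nuc_lms(frames)                          # apply as g * raw + o
    ```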

  16. TRL-6 for JWST wavefront sensing and control

    NASA Astrophysics Data System (ADS)

    Feinberg, Lee D.; Dean, Bruce H.; Aronstein, David L.; Bowers, Charles W.; Hayden, William; Lyon, Richard G.; Shiri, Ron; Smith, J. Scott; Acton, D. Scott; Carey, Larkin; Contos, Adam; Sabatke, Erin; Schwenker, John; Shields, Duncan; Towell, Tim; Shi, Fang; Meza, Luis

    2007-09-01

    NASA's Technology Readiness Level (TRL)-6 is documented for the James Webb Space Telescope (JWST) Wavefront Sensing and Control (WFSC) subsystem. The WFSC subsystem is needed to align the Optical Telescope Element (OTE) after all deployments have occurred, and achieves that requirement through a robust commissioning sequence consisting of unique commissioning algorithms, all of which are part of the WFSC algorithm suite. This paper identifies the technology need and algorithm heritage, describes the finished TRL-6 design platform, and summarizes the TRL-6 test results and compliance. Additionally, the performance requirements needed to satisfy JWST science goals, as well as the criteria that relate to the TRL-6 Testbed Telescope (TBT) performance requirements, are discussed.

  17. TRL-6 for JWST Wavefront Sensing and Control

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Dean, Bruce; Smith, Scott; Aronstein, David; Shiri, Ron; Lyon, Rick; Hayden, Bill; Bowers, Chuck; Acton, D. Scott; Shields, Duncan; hide

    2007-01-01

    NASA's Technology Readiness Level (TRL)-6 is documented for the James Webb Space Telescope (JWST) Wavefront Sensing and Control (WFSC) subsystem. The WFSC subsystem is needed to align the Optical Telescope Element (OTE) after all deployments have occurred, and achieves that requirement through a robust commissioning sequence consisting of unique commissioning algorithms, all of which are part of the WFSC algorithm suite. This paper identifies the technology need and algorithm heritage, describes the finished TRL-6 design platform, and summarizes the TRL-6 test results and compliance. Additionally, the performance requirements needed to satisfy JWST science goals, as well as the criteria that relate to the TRL-6 Testbed Telescope (TBT) performance requirements, are discussed.

  18. Cosmology with the cosmic web

    NASA Astrophysics Data System (ADS)

    Forero-Romero, J. E.

    2017-07-01

    This talk summarizes different algorithms that can be used to trace the cosmic web both in simulations and observations. We present different applications in galaxy formation and cosmology. Finally, we show how the Dark Energy Spectroscopic Instrument (DESI) could be a good place to apply these techniques.

  19. Diffraction modeling of finite subband EFC probing on dark hole contrast with WFIRST-CGI shaped pupil coronagraph

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Krist, John; Nemati, Bijan

    2016-08-01

    The current coronagraph instrument design (CGI), part of the proposed NASA WFIRST (Wide-Field InfraRed Survey Telescope) mission, allocates two subband filters per full science band in order to contain system complexity and cost. We present our detailed investigation of the adequacy of such a limited number of finite subband filters for achieving full-band dark hole contrast with a shaped pupil coronagraph. The study is based on diffraction propagation modeling with realistic WFIRST optics, where each subband's complex field estimate is obtained with the Electric Field Conjugation (EFC) wavefront sensing / control algorithm from pairwise pupil-plane deformable mirror (DM) probing and image-plane intensity averaging of the resulting fields at multiple (subband) wavelengths. Multiple subband choices and probing and control strategies are explored, including standard subband probing; mixed-wavelength and/or weighted Jacobian matrices; subband probing with intensity subtraction; and extended subband probing with intensity subtraction. Overall, the investigation shows that the achievable contrast with a limited number of finite subband EFC probes is about 2-2.5x worse than the designed post-EFC contrast for the current SPC design. The result suggests that future shaped pupil designs should consider a bandwidth slightly larger than the intended full band if they will be used with a limited number of subbands for probing.
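
    The pairwise probing step lends itself to a compact illustration. For a single focal-plane pixel, the intensity difference between positive and negative DM probes is linear in the unknown field, so two probes with model-known fields suffice to solve for its real and imaginary parts. The numbers below are synthetic; in the real system the probe fields come from the optical model.

    ```python
    # Pairwise probe estimation at one pixel: I+ - I- = 4 Re(conj(dE_k) E).
    import numpy as np

    E_true = 0.3 + 0.2j                       # unknown focal-plane field
    dE = np.array([1.0 + 0.0j, 0.0 + 1.0j])   # model-predicted probe fields

    delta_I = 4.0 * np.real(np.conj(dE) * E_true)   # measured differences

    # Solve the 2x2 real system for Re(E) and Im(E)
    A = 4.0 * np.column_stack([np.real(dE), np.imag(dE)])
    re, im = np.linalg.solve(A, delta_I)
    print(re + 1j * im)                       # recovers E_true
    ```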

  20. Analyzing a 35-Year Hourly Data Record: Why So Difficult?

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    At the Goddard Distributed Active Archive Center, we have recently added a 35-year record of output data from the North American Land Data Assimilation System (NLDAS) to the Giovanni web-based analysis and visualization tool. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure) offers users a variety of data summarization and visualization services that operate at the data center, obviating the need for users to download and read the data themselves for exploratory data analysis. However, the NLDAS data have proven surprisingly resistant to the application of the summarization algorithms. Algorithms that were perfectly happy analyzing 15 years of daily satellite data encountered limitations, both at the algorithm and the system level, with 35 years of hourly data. Failures arose, sometimes unexpectedly, from command line overflows, memory overflows, internal buffer overflows, and time-outs, among others. These serve as an early warning sign for the problems likely to be encountered by the general user community as they try to scale up to Big Data analytics. Indeed, it is likely that more users will seek to perform remote web-based analysis precisely to avoid such issues, or the need to reprogram around them. We will discuss approaches to mitigating the limitations and the implications for data systems serving user communities that try to scale up their current techniques to analyze Big Data.

  1. Ionosphere Plasma State Determination in Low Earth Orbit from International Space Station Plasma Monitor

    NASA Technical Reports Server (NTRS)

    Kramer, Leonard

    2014-01-01

    A plasma diagnostic package is deployed on the International Space Station (ISS). The system, a Floating Potential Measurement Unit (FPMU), is used by NASA to monitor the electrical floating potential of the vehicle to assure astronaut safety during extravehicular activity. However, data from the unit also reflect the ionosphere state and represent a largely unutilized scientific resource in the form of an archive of plasma state data. The unit comprises a floating potential probe and two Langmuir probes; there is also an unused but active plasma impedance probe. The data, at one-second cadence, are collected typically for a two-week period surrounding extravehicular activity events. Data are also collected any time a visiting vehicle docks with the ISS and when any large solar events occur. The telemetry system is unusual because the package is mounted on a television camera stanchion and its data are impressed on a video signal that is transmitted to the ground and streamed over the internet to two off-site laboratory locations. The data quality has in the past been challenged by weaknesses in the integrated ground station and distribution systems; these issues have been largely resolved since mid-2010, and the ground stations have been upgraded. Downstream data reduction has been developed using physics-based modeling of the electron- and ion-collecting character of the probes in the plasma. Recursive algorithms determine plasma density and temperature from the raw Langmuir probe current-voltage sweeps, and these are made available in real time for situational awareness. The purpose of this paper is to describe and record the algorithm for data reduction and to show that the floating potential probe and Langmuir probes are capable of providing long-term plasma state measurement in the ionosphere. Geophysical features such as the Appleton anomaly and high-latitude modulation at the edge of the auroral zones are regularly observed in the nearly circular, 51 deg inclined, 400 km altitude ISS orbit. Evidence of waves in the ion collection current data is seen in geographic zones known to exhibit the spread-F phenomenon. An anomaly in the current collection characteristic of the cylindrical probe also appears to be organized by the geomagnetic field.
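
    As a hedged sketch of this kind of reduction (the operational algorithm is recursive and physics-based, which this is not), the classic textbook step recovers the electron temperature from the slope of ln(I_e) versus bias voltage in the electron-retardation region of a synthetic sweep.

    ```python
    # Toy Langmuir-probe sweep reduction: Te from the retardation-region slope.
    # The sweep and all parameter values are synthetic illustrations.
    import numpy as np

    Te_eV, V_f, I_isat = 0.15, -1.0, -1e-6          # toy ionospheric values
    V = np.linspace(-3.0, 0.5, 200)                  # bias sweep, volts
    I = I_isat + 2e-5 * np.exp((V - V_f) / Te_eV)    # model probe current

    Ie = I - I_isat                                  # remove ion saturation
    mask = (V > -2.0) & (V < 0.0)                    # retardation region
    slope, _ = np.polyfit(V[mask], np.log(Ie[mask]), 1)
    print(1.0 / slope)                               # ~0.15 eV temperature
    ```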

  2. Accuracy enhancement of point triangulation probes for linear displacement measurement

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB composed of an incoherent source (LED), a line-scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.
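
    The EASDF itself is not fully specified in this record; the plain average square difference function (ASDF) that it extends can be sketched as below, here used to recover a simulated spot shift on a line-scan detector.

    ```python
    # Plain ASDF shift search (the paper's EASDF adds further refinements).
    import numpy as np

    def asdf_shift(x, y, max_lag):
        """Integer lag of y relative to x minimizing the ASDF."""
        best_lag, best_val = 0, np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = x[lag:], y[:len(y) - lag]
            else:
                a, b = x[:lag], y[-lag:]
            val = np.mean((a - b) ** 2)      # average square difference
            if val < best_val:
                best_lag, best_val = lag, val
        return best_lag

    sig = np.sin(np.linspace(0, 20, 500))
    shifted = np.roll(sig, 7)                # simulate a 7-pixel spot shift
    print(asdf_shift(shifted, sig, 20))      # -> 7
    ```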

  3. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
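
    A minimal stand-in for this object-domain/projection-domain iteration can be written with off-the-shelf radon/iradon routines, with filtered back-projection (the convolution method) at each pass. The phantom, the blocked-ray mask, and the fixed iteration count below are illustrative; the paper instead terminates at an empirically chosen best estimate.

    ```python
    # Gerchberg-style iteration for tomography with blocked (opaque) rays.
    import numpy as np
    from skimage.transform import radon, iradon

    img = np.zeros((128, 128))
    img[40:80, 50:90] = 1.0                        # toy object field
    yy, xx = np.mgrid[:128, :128]
    support = (yy - 64) ** 2 + (xx - 64) ** 2 <= 63 ** 2
    theta = np.linspace(0.0, 180.0, 120, endpoint=False)
    sino = radon(img, theta=theta)

    mask = np.ones_like(sino, dtype=bool)
    mask[50:70, 30:60] = False                     # rays lost to an opaque object

    est_sino = sino * mask                         # missing data start at zero
    for _ in range(20):
        rec = iradon(est_sino, theta=theta, filter_name='ramp')
        rec = np.clip(rec, 0.0, None) * support    # object-domain constraints
        reproj = radon(rec, theta=theta)
        est_sino = np.where(mask, sino, reproj)    # keep measured rays, fill gaps

    print(np.abs(rec - img).mean())                # residual reconstruction error
    ```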

  4. Restoration of high-resolution AFM images captured with broken probes

    NASA Astrophysics Data System (ADS)

    Wang, Y. F.; Corrigan, D.; Forman, C.; Jarvis, S.; Kokaram, A.

    2012-03-01

    A type of artefact is induced by damage to the scanning probe when the atomic force microscope (AFM) captures a material surface structure at nanoscale resolution. This artefact takes the form of a dramatic distortion rather than the traditional blurring artefacts. In practice, it is not easy to prevent damage to the scanning probe. However, by using natural-image deblurring techniques from the image processing domain, a comparatively reliable estimate of the real sample surface structure can be generated. This paper introduces a novel Hough transform technique together with a Bayesian deblurring algorithm to remove this type of artefact. The deblurring succeeds at removing blur artefacts in the AFM images while preserving the details of the fibril surface topography.

  5. Application of a Split-Fiber Probe to Velocity Measurement in the NASA Research Compressor

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2003-01-01

    A split-fiber probe was used to acquire unsteady data in a research compressor. The probe has two thin films deposited on a quartz cylinder 200 microns in diameter. A split-fiber probe allows simultaneous measurement of velocity magnitude and direction in a plane perpendicular to the sensing cylinder, because its circumference is divided into two independent parts. Local heat transfer considerations indicated that the probe direction characteristic is linear in the range of flow incidence angles of +/- 35 deg. Calibration tests confirmed this assumption. The velocity characteristic, of course, is nonlinear, as is typical in thermal anemometry. The probe was used extensively in the NASA Glenn Research Center (GRC) low-speed, multistage axial compressor, and worked reliably during a test program of several months' duration. The velocity and direction characteristics of the probe showed only minute changes during the entire test program. An algorithm was developed to decompose the probe signals into velocity magnitude and velocity direction. The averaged unsteady data were compared with data acquired by pneumatic probes. The overall excellent agreement between the averaged data acquired by the split-fiber probe and a pneumatic probe boosts confidence in the reliability of the unsteady content of the split-fiber probe data. To investigate the features of the unsteady data, two methods were used: ensemble averaging and frequency analysis. The velocity distribution in a rotor blade passage was retrieved using the ensemble averaging method. Frequencies of excitation forces that may contribute to high-cycle-fatigue problems were identified by applying a fast Fourier transform to the absolute velocity data.

  6. Innovative signal processing for Johnson Noise thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N. Dianne Bull; Britton, Jr, Charles L.; Roberts, Michael

    This report summarizes a newly developed algorithm that subtracts electromagnetic interference (EMI). EMI performance is very important to this measurement because any interference in the form of pickup from external sources, such as fluorescent lighting ballasts and motors, can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; this report therefore details the second method.

  7. Review assessment support in Open Journal System using TextRank

    NASA Astrophysics Data System (ADS)

    Manalu, S. R.; Willy; Sundjaja, A. M.; Noerlina

    2017-01-01

    In this paper, a review assessment support in Open Journal System (OJS) using TextRank is proposed. OJS is an open-source journal management platform that provides a streamlined journal publishing workflow. TextRank is an unsupervised, graph-based ranking model commonly used for extractive automatic summarization of text documents. This study applies the TextRank algorithm to summarize 50 article reviews from an OJS-based international journal. The resulting summaries are formed from the most representative sentences extracted from the reviews. The summaries are then used to help OJS editors assess a review’s quality.
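
    A minimal extractive TextRank pass of the kind described can be written with a generic bag-of-words cosine similarity and PageRank; the tokenization and similarity choices below are stand-ins, not necessarily those of the authors.

    ```python
    # Extractive TextRank: rank sentences by PageRank over a similarity graph.
    import itertools
    import networkx as nx
    import numpy as np

    def textrank_summary(sentences, n_keep=2):
        vocab = sorted({w for s in sentences for w in s.lower().split()})
        vecs = np.array([[s.lower().split().count(w) for w in vocab]
                         for s in sentences], dtype=float)
        g = nx.Graph()
        g.add_nodes_from(range(len(sentences)))
        for i, j in itertools.combinations(range(len(sentences)), 2):
            denom = np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])
            sim = vecs[i] @ vecs[j] / denom if denom else 0.0
            if sim > 0:
                g.add_edge(i, j, weight=sim)
        scores = nx.pagerank(g, weight='weight')
        top = sorted(scores, key=scores.get, reverse=True)[:n_keep]
        return [sentences[i] for i in sorted(top)]   # keep original order

    review = ["The methodology is sound and well described.",
              "However the evaluation section lacks baselines.",
              "Adding baselines would strengthen the evaluation section.",
              "Minor typos appear in section two."]
    print(textrank_summary(review))
    ```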

  8. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, adaptive, and state space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor can achieve better compensation of transport delay than the McFarland predictor.

  9. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. The reflected solar band software products of radiance and reflectance factor both are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  10. Electron and photon identification in the D0 experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.

    2014-06-01

    The electron and photon reconstruction and identification algorithms used by the D0 Collaboration at the Fermilab Tevatron collider are described. The determination of the electron energy scale and resolution is presented. Studies of the performance of the electron and photon reconstruction and identification are summarized.

  11. The complex magnetic field of Jupiter

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.; Ness, N. F.

    1975-01-01

    An analysis of the characteristics of the magnetic field of the planet Jupiter is presented. The data were obtained during the flight of the Pioneer 11 space probe, using a high-field triaxial fluxgate magnetometer. The data are analyzed in terms of a traditional Schmidt-normalized spherical harmonic expansion fitted to the observations in a least-squares sense. Tables of data and graphs are provided to summarize the findings.

  12. JTSA: an open source framework for time series abstractions.

    PubMed

    Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana

    2015-10-01

    The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under the GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, where JTSA was used to extract relevant patterns from data related to the long-term monitoring of diabetic patients. Proof that JTSA is a versatile tool adaptable to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Aerosol Climate Time Series Evaluation In ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, T.; de Leeuw, G.; Pinnock, S.

    2015-12-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010-2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. By the end of 2015, full-mission time series of two GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from the dual-view ATSR-2 / AATSR radiometers (3 algorithms, 1995-2012), and stratospheric extinction profiles from the star-occultation GOMOS spectrometer (2002-2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine-mode AOD, mineral dust AOD (from the thermal IASI spectrometer), absorption information, and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of the first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWiFS) proved the quality of the available datasets to be comparable to other satellite retrievals and revealed needs for algorithm improvement (for example for higher AOD values), which were taken into account in a reprocessing. The datasets contain pixel-level uncertainty estimates which are also validated. The paper will summarize and discuss the results of the major reprocessing and validation conducted in 2015. The focus will be on the ATSR, GOMOS, and IASI datasets. Validation of the pixel-level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. Opportunities for time series extension with successor instruments of the Sentinel family will be described, and the complementarity of the different satellite aerosol products (e.g., dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  14. Improvements in Electron-Probe Microanalysis: Applications to Terrestrial, Extraterrestrial, and Space-Grown Materials

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2004-01-01

    Improvement in the accuracy of electron-probe microanalysis (EPMA) has been accomplished by critical assessment of standards, correction algorithms, and mass absorption coefficient data sets. Experimental measurement of relative X-ray intensities at multiple accelerating potentials highlights errors in the absorption coefficients. The factor method has been applied to the evaluation of systematic errors in the analysis of semiconductors and silicate minerals. Accurate EPMA of Martian soil simulant is necessary in studies that build on Martian rover data in anticipation of missions to Mars.

  15. Correction of Spatial Bias in Oligonucleotide Array Data

    PubMed Central

    Lemieux, Sébastien

    2013-01-01

    Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083

  16. Method of identifying hairpin DNA probes by partial fold analysis

    DOEpatents

    Miller, Benjamin L [Penfield, NY; Strohsahl, Christopher M [Saugerties, NY

    2009-10-06

    Method of identifying molecular beacons in which a secondary structure prediction algorithm is employed to identify oligonucleotide sequences within a target gene having the requisite hairpin structure. Isolated oligonucleotides, molecular beacons prepared from those oligonucleotides, and their use are also disclosed.

  17. Method of identifying hairpin DNA probes by partial fold analysis

    DOEpatents

    Miller, Benjamin L.; Strohsahl, Christopher M.

    2008-10-28

    Methods of identifying molecular beacons in which a secondary structure prediction algorithm is employed to identify oligonucleotide sequences within a target gene having the requisite hairpin structure. Isolated oligonucleotides, molecular beacons prepared from those oligonucleotides, and their use are also disclosed.

  18. Target-cancer cell specific activatable fluorescence imaging Probes: Rational Design and in vivo Applications

    PubMed Central

    Kobayashi, Hisataka; Choyke, Peter L.

    2010-01-01

    CONSPECTUS Conventional imaging methods, such as angiography, computed tomography, magnetic resonance imaging and radionuclide imaging, rely on contrast agents (iodine, gadolinium, radioisotopes) that are “always on”. While these agents have proven clinically useful, they are not sufficiently sensitive because of the inadequate target to background ratio. A unique aspect of optical imaging is that fluorescence probes can be designed to be activatable, i.e. only “turned on” under certain conditions. These probes can be designed to emit signal only after binding a target tissue, greatly increasing sensitivity and specificity in the detection of disease. There are two basic types of activatable fluorescence probes; 1) conventional enzymatically activatable probes, which exist in the quenched state until activated by enzymatic cleavage mostly outside of the cells, and 2) newly designed target-cell specific activatable probes, which are quenched until activated in targeted cells by endolysosomal processing that results when the probe binds specific cell-surface receptors and is subsequently internalized. Herein, we present a review of the rational design and in vivo applications of target-cell specific activatable probes. Designing these probes based on their photo-chemical (e.g. activation strategy), pharmacological (e.g. biodistribution), and biological (e.g. target specificity) properties has recently allowed the rational design and synthesis of target-cell specific activatable fluorescence imaging probes, which can be conjugated to a wide variety of targeting molecules. Several different photo-chemical mechanisms have been utilized, each of which offers a unique capability for probe design. These include: self-quenching, homo- and hetero-fluorescence resonance energy transfer (FRET), H-dimer formation and photon-induced electron transfer (PeT). In addition, the repertoire is further expanded by the option for reversibility or irreversibility of the signal emitted using the aforementioned mechanisms. Given the wide range of photochemical mechanisms and properties, target-cell specific activatable probes possess considerable flexibility and can be adapted to specific diagnostic needs. Herein, we summarize the chemical, pharmacological, and biological basis of target-cell specific activatable imaging probes and discuss methods to successfully design such target-cell specific activatable probes for in vivo cancer imaging. PMID:21062101

  19. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since a molecular beacon's performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public-domain algorithm described here may be usefully employed to aid in molecular beacon design.

  20. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
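
    The POCA tool these methods build on reduces each muon event to closest-approach algebra between the incoming and outgoing track lines; a sketch with synthetic tracks follows.

    ```python
    # Point Of Closest Approach between two 3D track lines.
    import numpy as np

    def poca(p1, d1, p2, d2):
        """Midpoint of the shortest segment between lines p1+t*d1, p2+s*d2."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        r = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ r, d2 @ r
        denom = a * c - b * b
        if np.isclose(denom, 0.0):        # parallel tracks: no unique POCA
            return None
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    # Incoming track from the top planes, outgoing from the bottom planes;
    # these synthetic tracks scatter at the origin.
    p_in, d_in = np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0])
    p_out, d_out = np.array([0.1, 0.0, -2.0]), np.array([0.05, 0.0, -1.0])
    print(poca(p_in, d_in, p_out, d_out))   # ~[0, 0, 0]
    ```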

  1. Parameter identification for nonlinear aerodynamic systems

    NASA Technical Reports Server (NTRS)

    Pearson, Allan E.

    1992-01-01

    Continuing work on frequency analysis for transfer function identification is discussed. A new study was initiated into a 'weighted' least squares algorithm within the context of the Fourier modulating function approach. The first phase of applying these techniques to the F-18 flight data is nearing completion, and these results are summarized.

  2. A stochastic approach to online vehicle state and parameter estimation, with application to inertia estimation for rollover prevention and battery charge/health estimation.

    DOT National Transportation Integrated Search

    2013-08-01

    This report summarizes research conducted at Penn State, Virginia Tech, and West Virginia University on the development of algorithms based on the generalized polynomial chaos (gpc) expansion for the online estimation of automotive and transportation...

  3. On the Scientific Foundations of Level 2 Fusion

    DTIC Science & Technology

    2004-03-01

    "Development of Decision Aids", CMIF Report 2-99, Feb 1999. "Investigations of Trust-related System Vulnerabilities in Aided, Adversarial Decision Making", CMIF Report, Jan 2000.

  4. Analyzing Fourier Transforms for NASA DFRC's Fiber Optic Strain Sensing System

    NASA Technical Reports Server (NTRS)

    Fiechtner, Kaitlyn Leann

    2010-01-01

    This document provides a basic overview of the fiber optic technology used for sensing stress, strain, and temperature. Also, the document summarizes the research concerning speed and accuracy of the possible mathematical algorithms that can be used for NASA DFRC's Fiber Optic Strain Sensing (FOSS) system.

  5. FY15 Status of Immersion Phased Array Ultrasonic Probe Development and Performance Demonstration Results for Under Sodium Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Aaron A.; Larche, Michael R.; Mathews, Royce

    2015-09-01

    This Technical Letter Report (TLR) describes work conducted at the Pacific Northwest National Laboratory (PNNL) during FY 2015 on the under-sodium viewing (USV) PNNL project 58745, Work Package AT-15PN230102. This TLR satisfies PNNL’s M3AT-15PN2301027 milestone, and is focused on summarizing the design, development, and evaluation of a two-dimensional matrix phased-array probe referred to as serial number 3 (SN3). In addition, this TLR also provides the results from a performance demonstration of in-sodium target detection trials at 260°C using a one-dimensional 22-element linear array developed in FY14 and referred to as serial number 2 (SN2).

  6. Teaching Reading Comprehension Skills to a Child with Autism Using Behaviour Skills Training.

    PubMed

    Singh, Binita D; Moore, Dennis W; Furlonger, Brett E; Anderson, Angelika; Busacca, Margherita L; English, Derek L

    2017-10-01

    A multiple probe design across skills was used to examine the effects of behaviour skills training (BST) on teaching four reading comprehension skills (predicting, questioning, clarifying, and summarizing) to a 7th grade student with autism. Following baseline, the student received 12 sessions of BST during which each skill was taught to criterion. At each session, data was also collected on the accuracy of oral responses to 10 comprehension questions. BST was associated with clear gains in the participant's performance on each comprehension skill, along with concomitant gains in reading comprehension both on the daily probes and a standardized measure. Skills maintained at follow-up support the conclusion that BST was effective in improving the comprehension skills of a child with autism.

  7. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpernter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvements in the electron column and X-ray spectrometer have resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics, with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues. Accuracy is on the order of 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for a more complete treatment of the continuum X-ray fluorescence correction.

  8. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  9. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results for three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall, and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  10. Earth resources data analysis program, phase 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The efforts and findings of the Earth Resources Data Analysis Program are summarized. Results of a detailed study of the needs of EOD with respect to an applications development system (ADS) for the analysis of remotely sensed data are described, including an evaluation of four existing systems with respect to these needs. Recommendations are presented as to possible courses for EOD to follow to obtain a viable ADS. Algorithmic development, comprising several subtasks, is discussed. These subtasks include the following: (1) two algorithms for multivariate density estimation; (2) a data smoothing algorithm; (3) a method for optimally estimating prior probabilities of unclassified data; and (4) further applications of the modified Cholesky decomposition in various calculations. Little effort was expended on task 3; however, two reports were reviewed.

  11. Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models

    NASA Technical Reports Server (NTRS)

    Won, Mark J.

    1999-01-01

    A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10deg supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within +/-0.01, +/-1% and +/-0.1deg , respectively, for angles of attack and sideslip between +/-8deg. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.

  12. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    PubMed

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency, and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency, and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis, or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Graph-based biomedical text summarization: An itemset mining and sentence clustering approach.

    PubMed

    Nasr Azadani, Mozhgan; Ghadiri, Nasser; Davoodijam, Ensieh

    2018-06-12

    Automatic text summarization offers an efficient solution to access the ever-growing amounts of both scientific and clinical literature in the biomedical domain by summarizing the source documents while maintaining their most informative contents. In this paper, we propose a novel graph-based summarization method that takes advantage of the domain-specific knowledge and a well-established data mining technique called frequent itemset mining. Our summarizer exploits the Unified Medical Language System (UMLS) to construct a concept-based model of the source document and mapping the document to the concepts. Then, it discovers frequent itemsets to take the correlations among multiple concepts into account. The method uses these correlations to propose a similarity function based on which a represented graph is constructed. The summarizer then employs a minimum spanning tree based clustering algorithm to discover various subthemes of the document. Eventually, it generates the final summary by selecting the most informative and relative sentences from all subthemes within the text. We perform an automatic evaluation over a large number of summaries using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. The results demonstrate that the proposed summarization system outperforms various baselines and benchmark approaches. The carried out research suggests that the incorporation of domain-specific knowledge and frequent itemset mining equips the summarization system in a better way to address the informativeness measurement of the sentences. Moreover, clustering the graph nodes (sentences) can enable the summarizer to target different main subthemes of a source document efficiently. The evaluation results show that the proposed approach can significantly improve the performance of the summarization systems in the biomedical domain. Copyright © 2018. Published by Elsevier Inc.

  14. The Impact of a Line Probe Assay Based Diagnostic Algorithm on Time to Treatment Initiation and Treatment Outcomes for Multidrug Resistant TB Patients in Arkhangelsk Region, Russia.

    PubMed

    Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei

    2016-01-01

    In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of the implementation of line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB focusing on time to treatment initiation with time from first-care seeking visit to the initiation of MDR-TB treatment rather than diagnostic accuracy as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein Jensen (LJ) for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study, 163 diagnosed with the culture-based algorithm, 132 with the LPA-based algorithm. Among smear positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ respectively, p<0.001). In smear negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation in LPA-based algorithm, 24 days in smear positive, and 62 days in smear negative patients. Overall treatment outcomes were better in LPA-based algorithm compared to culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) as compared to those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based algorithm group (7.6%) compared to the culture-based algorithm group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms. The results of the study suggest that the introduction of LPA leads to faster time to MDR diagnosis and earlier treatment initiation as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays to truly optimize the impact of new, rapid diagnostics.

  15. Eosinophilic pustular folliculitis: A proposal of diagnostic and therapeutic algorithms.

    PubMed

    Nomura, Takashi; Katoh, Mayumi; Yamamoto, Yosuke; Miyachi, Yoshiki; Kabashima, Kenji

    2016-11-01

    Eosinophilic pustular folliculitis (EPF) is a sterile inflammatory dermatosis of unknown etiology. In addition to classic EPF, which affects otherwise healthy individuals, an immunocompromised state can cause immunosuppression-associated EPF (IS-EPF), which may be referred to dermatologists in inpatient services for assessments. Infancy-associated EPF (I-EPF) is the least characterized subtype, being observed mainly in non-Japanese infants. Diagnosis of EPF is challenging because its lesions mimic those of other common diseases, such as acne and dermatomycosis. Furthermore, there is no consensus regarding the treatment for each subtype of EPF. Here, we created procedure algorithms that facilitate the diagnosis and selection of therapeutic options on the basis of published work available in the public domain. Our diagnostic algorithm comprised a simple flowchart to direct physicians toward proper diagnosis. Recommended regimens were summarized in an easy-to-comprehend therapeutic algorithm for each subtype of EPF. These algorithms would facilitate the diagnostic and therapeutic procedure of EPF. © 2016 Japanese Dermatological Association.

  16. Calibration and Sediment Load Algorithms for an Acoustic Sediment Flux Probe

    DTIC Science & Technology

    1994-06-01

  17. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have described real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has remained missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  18. Probing optimal measurement configuration for optical scatterometry by the multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Xiuguo; Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Liu, Shiyuan

    2018-04-01

    Measurement configuration optimization (MCO) is a ubiquitous and important issue in optical scatterometry, whose aim is to probe the optimal combination of measurement conditions, such as wavelength, incidence angle, azimuthal angle, and/or polarization directions, to achieve a higher measurement precision for a given measuring instrument. In this paper, the MCO problem is investigated and formulated as a multi-objective optimization problem, which is then solved by the multi-objective genetic algorithm (MOGA). The case study on the Mueller matrix scatterometry for the measurement of a Si grating verifies the feasibility of the MOGA in handling the MCO problem in optical scatterometry by making a comparison with the Monte Carlo simulations. Experiments performed at the achieved optimal measurement configuration also show good agreement between the measured and calculated best-fit Mueller matrix spectra. The proposed MCO method based on MOGA is expected to provide a more general and practical means to solve the MCO problem in the state-of-the-art optical scatterometry.
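
    At the core of any MOGA-based configuration search is the non-dominated (Pareto) comparison used for fitness assignment and selection. The sketch below, with invented objective values standing in for parameter uncertainties derived from the instrument model, shows that core operation on a grid of candidate (wavelength, incidence angle) configurations; it illustrates the principle rather than the authors' implementation.

    ```python
    import numpy as np

    # Hypothetical stand-in: per-configuration uncertainties of two grating
    # parameters; a real system would derive these from the Jacobian of the
    # Mueller-matrix instrument model.
    rng = np.random.default_rng(0)
    configs = [(wl, aoi) for wl in range(400, 801, 50) for aoi in range(45, 76, 5)]
    objectives = rng.random((len(configs), 2))   # columns: sigma_1, sigma_2

    def pareto_front(F):
        """Return indices of non-dominated rows of F (all objectives minimized).
        Row g dominates f if g <= f everywhere and g < f somewhere."""
        front = []
        for i, f in enumerate(F):
            dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
            if not dominated:
                front.append(i)
        return front

    for i in pareto_front(objectives):
        print(configs[i], objectives[i])
    ```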

  19. A parallel implementation of the Wuchty algorithm with additional experimental filters to more thoroughly explore RNA conformational space.

    PubMed

    Stone, Jonathan W; Bleckley, Samuel; Lavelle, Sean; Schroeder, Susan J

    2015-01-01

    We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.
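
    As a flavor of what a helix filter does, the sketch below keeps only candidate structures (in dot-bracket notation) that contain a required helix of stacked base pairs; the example structures and the required helix are invented for illustration.

    ```python
    def pairs_from_dotbracket(s):
        """Convert a dot-bracket string to a set of base-pair tuples (i, j)."""
        stack, pairs = [], set()
        for i, c in enumerate(s):
            if c == '(':
                stack.append(i)
            elif c == ')':
                pairs.add((stack.pop(), i))
        return pairs

    def has_helix(pairs, i, j, length):
        """Helix filter: require stacked pairs (i,j), (i+1,j-1), ... of given length."""
        return all((i + k, j - k) in pairs for k in range(length))

    structures = ["((((....))))", "((..((..))..))", ".(((....))).."]
    kept = [s for s in structures
            if has_helix(pairs_from_dotbracket(s), 0, 11, 3)]
    print(kept)   # only the first structure closes the required helix
    ```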

  20. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as the computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
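
    The following is a minimal one-channel sketch of the idea under stated assumptions (a toy Gaussian-modulated pulse-echo model standing in for a validated simulator): build the forward matrix column by column from simulated reflector responses, then reconstruct by damped least squares with a standard solver.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsmr

    c, fs, fc = 1540.0, 40e6, 5e6              # sound speed (m/s), sampling, center freq
    t = np.arange(1024) / fs                   # receive time axis, one channel
    pixels_z = np.linspace(5e-3, 18e-3, 200)   # 1-D image grid (depth only)

    def echo(delay):
        """Toy pulse-echo response of a unit reflector at a given round-trip delay."""
        tau = t - delay
        return np.exp(-(tau * fc) ** 2 * 4) * np.cos(2 * np.pi * fc * tau)

    # Forward matrix: column k is the RF signal from a reflector at pixels_z[k].
    A = np.stack([echo(2 * z / c) for z in pixels_z], axis=1)

    # Synthesize data from two reflectors and invert with a standard solver.
    x_true = np.zeros(len(pixels_z))
    x_true[[50, 140]] = 1.0
    rf = A @ x_true + 0.01 * np.random.randn(len(t))
    x_hat = lsmr(A, rf, damp=0.1)[0]           # damped least-squares inversion
    print(pixels_z[np.argsort(np.abs(x_hat))[-2:]])   # strongest recovered pixels
    ```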

  1. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have reported real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review of how to design an interactive system with appropriate processing algorithms has been missing, leaving a gap in the systematic understanding of the relevant technology. In this article, previous and recent work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  2. 3D ultrasound image guidance system used in RF uterine adenoma and uterine bleeding ablation system

    NASA Astrophysics Data System (ADS)

    Ding, Mingyue; Luo, Xiaoan; Cai, Chao; Zhou, Chengping; Fenster, Aaron

    2006-03-01

    Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women, and many women lose their fertility to these diseases. Currently, a minimally invasive ablation system using an RF button electrode is used in Chinese hospitals to destroy tumor cells or stop bleeding. In this paper, we report on a 3D US guidance system developed to prevent accidents or patient death caused by inaccurate localization of the tumor position during treatment. A 3D US imaging system using a rotational scanning approach with an abdominal probe was built. In order to reduce the distortion produced when the rotational axis is not collinear with the central beam of the probe, a new 3D reconstruction algorithm is used. Then, a fast 3D needle segmentation algorithm is used to find the electrode. Finally, the tip of the electrode is located along the segmented 3D needle and the whole electrode is displayed. Experiments with a water phantom demonstrated the feasibility of our approach.

  3. In vivo light scattering for the detection of cancerous and precancerous lesions of the cervix

    PubMed Central

    Mourant, Judith R.; Powers, Tamara M.; Bocklage, Thérese J.; Greene, Heather M.; Dorin, Maxine H.; Waxman, Alan G.; Zsemlye, Meggan M.; Smith, Harriet O.

    2009-01-01

    A non-invasive optical diagnostic system for detection of cancerous and precancerous lesions of the cervix was evaluated in vivo. The optical system included a fiber optic probe designed to measure polarized and unpolarized light transport properties of a small volume of tissue. An algorithm for diagnosing tissue based on the optical measurements was developed which used four optical properties, three related to light scattering and the fourth related to hemoglobin concentration. A sensitivity of ∼77% and specificities in the mid-60% range were obtained for separating high grade squamous intraepithelial lesions and cancer from other pathologies and normal tissue. The use of different cross-validation methods in algorithm development is analyzed, and the relative difficulty of diagnosing certain pathologies is assessed. Furthermore, the robustness of the optical system to use by different doctors and to changes in the fiber optic probe was also assessed, and potential improvements in the optical system are discussed. PMID:19340117
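
    As a sketch of the kind of cross-validation comparison described, the snippet below runs leave-one-out cross-validation of a linear discriminant classifier on synthetic stand-ins for the four optical features; the data, classifier choice, and labels are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    # Hypothetical stand-ins for the four optical features (three scattering
    # properties plus a hemoglobin-related one) and the pathology labels.
    X = rng.normal(size=(60, 4))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=60)) > 0

    # Leave-one-out: each sample is predicted by a model trained on the rest.
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    tp = np.sum(pred & y)
    tn = np.sum(~pred & ~y)
    print("sensitivity", tp / y.sum(), "specificity", tn / (~y).sum())
    ```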

  4. Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts

    PubMed Central

    Baker, Wesley B.; Parthasarathy, Ashwin B.; Ko, Tiffany S.; Busch, David R.; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C.; Durduran, Turgut; Greenberg, Joel H.; Kung, David K.; Yodh, Arjun G.

    2015-01-01

    We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm’s ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy. PMID:26301255
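
    A minimal sketch of the linear-inversion step: once the modified Beer-Lambert weights for the short and long separations are known (calibrated in practice via the pressure modulation), cerebral and extracerebral flow changes follow from a small linear solve. All numbers below are invented for illustration.

    ```python
    import numpy as np

    # Modified Beer-Lambert picture for DCS with a two-layer head model: a
    # measured differential signal dOD at each separation is a weighted sum of
    # cerebral and scalp blood-flow changes. The weights here are illustrative;
    # in practice they are calibrated by probe-pressure modulation, which
    # perturbs scalp flow while leaving cerebral flow essentially unchanged.
    W = np.array([[0.1, 0.9],    # short separation: mostly scalp sensitivity
                  [0.6, 0.4]])   # long separation: substantial brain sensitivity

    dOD = np.array([-0.45, -0.02])   # hypothetical measured differential signals
    dF_cerebral, dF_scalp = np.linalg.solve(W, dOD)
    print(f"cerebral flow change {dF_cerebral:+.2f}, scalp {dF_scalp:+.2f}")
    ```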

  5. Sounding rocket flight report, MUMP 9 and MUMP 10

    NASA Technical Reports Server (NTRS)

    Grassl, H. J.

    1971-01-01

    The results of the launching of two Marshall-University of Michigan Probes (MUMP 9 and MUMP 10), Nike-Tomahawk sounding rocket payloads, are summarized. The MUMP is similar to the thermosphere probe, an ejectable instrument package for studying the variability of the earth's atmospheric parameters. The MUMP 9 payload included an omegatron mass analyzer, a molecular fluorescence densitometer, a mini-tilty filter, and a lunar position sensor. This complement of instruments permitted the determination of the molecular nitrogen density and temperature in the altitude range from approximately 143 to 297 km over Wallops Island, Virginia, during January 1971. The MUMP 10 payload included an omegatron mass analyzer, an electron temperature probe, a cryogenic densitometer, and a solar position sensor. These instruments permitted the determination of the molecular nitrogen density and temperature and the charged particle density and temperature in the altitude range from approximately 145 to 290 km over Wallops Island during the afternoon preceding the MUMP 9 launch.

  6. Flowing Magnetized Plasma experiment

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Si, Jiahe

    2006-10-01

    Results from the Flowing Magnetized Plasma experiment at Los Alamos are summarized. Plasmas are produced using a modified coaxial plasma gun with a center electrode extending into a cylindrical vacuum tank 0.75 m in radius and 4.5 m long. The basic diagnostics are B-dot probes for edge and internal magnetic field, Mach probes and Doppler spectroscopy for plasma flow in the axial and azimuthal directions, and Langmuir probes for plasma floating potential, electron density, and temperature. We have found two different plasma flow patterns associated with distinct I-V characteristics of the coaxial plasma gun, indicating that axial flow is strongly correlated with plasma ejection from the gun. Global electromagnetic oscillations at frequencies below the ion cyclotron frequency are observed, indicating that familiar waves at these frequencies, e.g. the Alfven wave or drift wave, are strongly modified by the finite plasma beta. We rule out ion sound waves since the ion and electron temperatures are comparable and such waves are therefore strongly Landau damped.

  7. Dark energy two decades after: observables, probes, consistency tests.

    PubMed

    Huterer, Dragan; Shafer, Daniel L

    2018-01-01

    The discovery of the accelerating universe in the late 1990s was a watershed moment in modern cosmology, as it indicated the presence of a fundamentally new, dominant contribution to the energy budget of the universe. Evidence for dark energy, the new component that causes the acceleration, has since become extremely strong, owing to an impressive variety of increasingly precise measurements of the expansion history and the growth of structure in the universe. Still, one of the central challenges of modern cosmology is to shed light on the physical mechanism behind the accelerating universe. In this review, we briefly summarize the developments that led to the discovery of dark energy. Next, we discuss the parametric descriptions of dark energy and the cosmological tests that allow us to better understand its nature. We then review the cosmological probes of dark energy. For each probe, we briefly discuss the physics behind it and its prospects for measuring dark energy properties. We end with a summary of the current status of dark energy research.

  8. 16-foot transonic tunnel test section flowfield survey

    NASA Technical Reports Server (NTRS)

    Yetter, J. A.; Abeyounis, W. K.

    1994-01-01

    A flow survey has been made of the test section of the NASA Langley Research Center 16-Foot Transonic Tunnel at subsonic and supersonic speeds. The survey was performed using five five-hole pyramid-head probes mounted at 14 inch intervals on a survey rake. Probes were calibrated at freestream Mach numbers from 0.50 to 0.95 and from 1.18 to 1.23. Flowfield surveys were made at Mach numbers from 0.50 to 0.90 and at Mach 1.20. The surveys were made at tunnel stations 130.6, 133.6, and 136.0. By rotating the survey rake through 180 degrees, a cylindrical volume of the test section 4.7 feet in diameter and 5.4 feet long centered about the tunnel centerline was surveyed. Survey results showing the measured test section upflow and sideflow characteristics and local Mach number distributions are presented. The report documents the survey probe calibration techniques used, summarizes the procedural problems encountered during testing, and identifies the data discrepancies observed during the post-test data analysis.

  9. Using thermal phase curves to probe the climate of potentially habitable planets

    NASA Astrophysics Data System (ADS)

    Kataria, Tiffany

    2018-01-01

    Thermal phase-curve observations probe the variation in emitted flux of a planet with phase, or longitude. When conducted spectroscopically, they allow us to probe the two-dimensional temperature structure in both longitude and altitude, which directly relates to the planet’s circulation and chemistry. In the case of small, potentially habitable exoplanets, spectroscopic phase-curve observations can provide us with direct evidence that the planet is capable of sustaining liquid water from measurements of its brightness temperature, and allow us to distinguish between an ‘airless’ body and one that has an appreciable atmosphere. In this talk I will summarize efforts to characterize exoplanets smaller than Neptune with phase-curve observations and emission spectroscopy using the Spitzer and Hubble Space Telescopes. I will then discuss how these ‘lessons learned’ can be applied to future efforts to characterize potentially habitable planets with phase-curve observations using JWST and future facilities such as the Origins Space Telescope (OST).

  10. Scientific rationale and concepts for in situ probe exploration of Uranus and Neptune

    NASA Astrophysics Data System (ADS)

    Mousis, O.; Atkinson, D.; Amato, M.; Aslam, S.; Atreya, S.; Blanc, M.; Brugger, B.; Calcutt, S.; Cavalié, T.; Charnoz, S.; Coustenis, A.; Deleuil, M.; Dobrijevic, M.; Encrenaz, T.; Ferri, F.; Fletcher, L.; Guillot, T.; Hartogh, P.; Hofstadter, M.; Hueso, R.

    2017-09-01

    Uranus and Neptune, referred to as ice giants, are fundamentally different from the better-known gas giants (Jupiter and Saturn). Exploration of an ice giant system is a high-priority science objective, as these systems (including the magnetosphere, satellites, rings, atmosphere, and interior) challenge our understanding of planetary formation and evolution. The importance of the ice giants is reflected in NASA's 2011 Decadal Survey, comments from ESA's SSC in response to L2/L3 mission proposals, and results of the 2017 NASA/ESA Ice Giants study. A crucial part of exploration of the ice giants is in situ sampling of the atmosphere via an atmospheric probe. A probe would bring insights in two broad themes: the formation history of our Solar System and the processes at play in planetary atmospheres. Here we summarize the science drivers for in situ measurements at these two planets and discuss possible mission concepts that would be consistent with the constraints of ESA M-class missions.

  11. A robust multi-frequency mixing algorithm for suppression of rivet signal in GMR inspection of riveted structures

    NASA Astrophysics Data System (ADS)

    Safdernejad, Morteza S.; Karpenko, Oleksii; Ye, Chaofeng; Udpa, Lalita; Udpa, Satish

    2016-02-01

    The advent of Giant Magneto-Resistive (GMR) technology permits development of novel highly sensitive array probes for Eddy Current (EC) inspection of multi-layer riveted structures. Multi-frequency GMR measurements with different EC penetration depths show promise for detection of bottom-layer notches at fastener sites. However, the distortion of the induced magnetic field due to flaws is dominated by the strong fastener signal, which makes defect detection and classification a challenging problem. This issue is more pronounced for ferromagnetic fasteners, which concentrate most of the magnetic flux. In the present work, a novel multi-frequency mixing algorithm is proposed to suppress the rivet signal response and enhance the defect detection capability of the GMR array probe. The algorithm is baseline-free and does not require any assumptions about the geometry of the sample being inspected. Fastener signal suppression is based upon the random sample consensus (RANSAC) method, which iteratively estimates parameters of a mathematical model from a set of observed data with outliers. Bottom-layer defects at fastener sites are simulated as EDM notches of different lengths. Performance of the proposed multi-frequency mixing approach is evaluated on finite element data and experimental GMR measurements obtained with unidirectional planar current excitation. Initial results are promising, demonstrating the feasibility of the approach.
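
    To illustrate the RANSAC ingredient named in the abstract, the sketch below fits a smooth model to a dominant rivet-like response on synthetic 1-D scan data, treating a small localized defect signal as outliers; subtracting the consensus fit leaves the defect visible in the residual. The signal model and all parameters are invented, not the authors'.

    ```python
    import numpy as np

    def ransac(x, y, degree, n_min, n_iter, tol, rng):
        """Generic RANSAC: repeatedly fit on random minimal subsets and keep
        the polynomial model with the largest inlier consensus set."""
        best, best_count = None, 0
        for _ in range(n_iter):
            pick = rng.choice(len(x), n_min, replace=False)
            model = np.polyfit(x[pick], y[pick], degree)
            inliers = np.abs(np.polyval(model, x) - y) < tol
            if inliers.sum() > best_count:
                best = np.polyfit(x[inliers], y[inliers], degree)  # refit on inliers
                best_count = inliers.sum()
        return best

    rng = np.random.default_rng(2)
    x = np.linspace(-1, 1, 400)                       # scan position across a rivet
    rivet = 2.0 * (1 - x ** 2)                        # smooth dominant rivet response
    defect = 0.5 * np.exp(-((x - 0.3) / 0.03) ** 2)   # small localized notch signal
    y = rivet + defect + 0.02 * rng.normal(size=x.size)

    model = ransac(x, y, degree=2, n_min=8, n_iter=200, tol=0.06, rng=rng)
    residual = y - np.polyval(model, x)               # rivet suppressed, defect remains
    print("peak residual near x =", x[np.argmax(residual)])
    ```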

  12. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    Available fusion methods cannot adapt their fusion rules to the subsequent processing requirements of Remote Sensing (RS) imagery. To address this, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the strengths of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed method uses the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then constructs an objective function as a weighted sum of evaluation indices and optimizes this function with GSDA so as to obtain a higher-resolution RS image. The main contributions are summarized as follows.
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    • The text proposes the model operator and observation operator as a GSDA-based fusion scheme for RS images.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  13. FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data

    NASA Astrophysics Data System (ADS)

    Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael

    2014-04-01

    Super-resolution microscopy techniques such as STORM and (F)PALM are now well-known methods for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution, which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization in continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times than previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with a temporal resolution of 2.5 seconds by recovering fast ER dynamics.
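
    Not FALCON itself, but a minimal sketch of its sparsity-promoting ingredient: iterative soft-thresholding (ISTA) recovering point emitters on a fine grid from coarse, PSF-blurred measurements. The Gaussian PSF, grid sizes, and regularization weight are illustrative assumptions.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=300):
        """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - A.T @ (A @ x - b) / L                         # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)   # shrinkage
        return x

    grid = np.linspace(0, 1, 200)   # fine localization grid
    cam = np.linspace(0, 1, 50)     # coarse camera pixels
    A = np.exp(-((cam[:, None] - grid[None, :]) / 0.03) ** 2)  # Gaussian PSF matrix
    x_true = np.zeros(200)
    x_true[[60, 70, 150]] = [1.0, 0.8, 1.2]   # three emitters, two closely spaced
    b = A @ x_true + 0.01 * np.random.randn(50)
    x_hat = ista(A, b, lam=0.05)
    print("recovered positions:", grid[x_hat > 0.1 * x_hat.max()])
    ```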

  14. FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data

    PubMed Central

    Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael

    2014-01-01

    Super-resolution microscopy techniques such as STORM and (F)PALM are now well-known methods for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution, which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization in continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times than previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with a temporal resolution of 2.5 seconds by recovering fast ER dynamics. PMID:24694686

  15. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  16. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without Wiener filtering and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
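
    A minimal sketch of the two-step structure named here, a gradient (Landweber) update followed by a thresholding projection, on a generic sparse-recovery toy problem; the QBCS-EPL algorithm applies its threshold in the wavelet domain with an entropy-aware threshold model, which is not reproduced in this sketch.

    ```python
    import numpy as np

    def projected_landweber(Phi, y, tau, lam, n_iter=300):
        """Landweber gradient step on ||y - Phi x||^2 followed by a
        soft-threshold projection that promotes sparsity."""
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            x = x + tau * (Phi.T @ (y - Phi @ x))             # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam, 0)   # soft threshold
        return x

    rng = np.random.default_rng(3)
    n, m = 256, 96
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)       # random measurement matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
    y = Phi @ x_true
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2          # step size for stability
    x_hat = projected_landweber(Phi, y, tau, lam=0.01)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```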

  17. Combinatorial Algorithms to Enable Computational Science and Engineering: Work from the CSCAPES Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric

    2015-01-16

    This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley) as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.

  18. Treatment Algorithms Based on Tumor Molecular Profiling: The Essence of Precision Medicine Trials.

    PubMed

    Le Tourneau, Christophe; Kamal, Maud; Tsimberidou, Apostolia-Maria; Bedard, Philippe; Pierron, Gaëlle; Callens, Céline; Rouleau, Etienne; Vincent-Salomon, Anne; Servant, Nicolas; Alt, Marie; Rouzier, Roman; Paoletti, Xavier; Delattre, Olivier; Bièche, Ivan

    2016-04-01

    With the advent of high-throughput molecular technologies, several precision medicine (PM) studies are currently ongoing that include molecular screening programs and PM clinical trials. Molecular profiling programs establish the molecular profile of patients' tumors with the aim to guide therapy based on identified molecular alterations. The aim of prospective PM clinical trials is to assess the clinical utility of tumor molecular profiling and to determine whether treatment selection based on molecular alterations produces superior outcomes compared with unselected treatment. These trials use treatment algorithms to assign patients to specific targeted therapies based on tumor molecular alterations. These algorithms should be governed by fixed rules to ensure standardization and reproducibility. Here, we summarize key molecular, biological, and technical criteria that, in our view, should be addressed when establishing treatment algorithms based on tumor molecular profiling for PM trials. © The Author 2015. Published by Oxford University Press.

  19. Radiation and scattering from bodies of translation. Volume 2: User's manual, computer program documentation

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1980-04-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross-referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings, which are summarized in the present volume.

  20. Electron Paramagnetic Resonance Measurements of Reactive Oxygen Species by Cyclic Hydroxylamine Spin Probes.

    PubMed

    Dikalov, Sergey I; Polienko, Yuliya F; Kirilyuk, Igor

    2018-05-20

    Oxidative stress contributes to numerous pathophysiological conditions such as development of cancer, neurodegenerative, and cardiovascular diseases. A variety of measurements of oxidative stress markers in biological systems have been developed; however, many of these methods are not specific, can produce artifacts, and do not directly detect the free radicals and reactive oxygen species (ROS) that cause oxidative stress. Electron paramagnetic resonance (EPR) is a unique tool that allows direct measurements of free radical species. Cyclic hydroxylamines are useful and convenient molecular probes that readily react with ROS to produce stable nitroxide radicals, which can be quantitatively measured by EPR. In this work, we critically review recent applications of various cyclic hydroxylamine spin probes in biology to study oxidative stress, along with their advantages and shortcomings. Recent Advances: In the past decade, a number of new cyclic hydroxylamine spin probes have been developed, and their successful application to ROS measurement using EPR has been published. These new state-of-the-art methods provide improved selectivity and sensitivity for in vitro and in vivo studies. Although the EPR application of cyclic hydroxylamine spin probes has been described previously, there has been a lack of translation of these new methods into biomedical research, limiting their widespread use. This work summarizes "best practice" in applications of cyclic hydroxylamine spin probes to assist with EPR studies of oxidative stress. Additional studies to advance hydroxylamine spin probes from basic science to biomedical applications are needed and could lead to better understanding of pathological conditions associated with oxidative stress. Antioxid. Redox Signal. 28, 1433-1443.

  1. Using Morpholinos to Probe Gene Networks in Sea Urchin.

    PubMed

    Materna, Stefan C

    2017-01-01

    The control processes that underlie the progression of development can be summarized in maps of gene regulatory networks (GRNs). A critical step in their assembly is the systematic perturbation of network candidates. In sea urchins, the most important method for interfering with expression in a gene-specific way is application of morpholino antisense oligonucleotides (MOs). MOs act by binding to their sequence complement in transcripts, resulting in a block in translation or a change in splicing, and thus cause a loss of function. Despite the tremendous success of this technology, recent comparisons to mutants generated by genome editing have led to renewed criticism and challenged its reliability. As with all methods based on sequence recognition, MOs are prone to off-target binding that may result in phenotypes erroneously ascribed to the loss of the intended target. However, the slow progression of development in sea urchins has enabled extremely detailed studies of gene activity in the embryo. This wealth of knowledge, paired with the simplicity of the sea urchin embryo, enables careful analysis of MO phenotypes through a variety of methods that do not rely on terminal phenotypes. This article summarizes the use of MOs in probing GRNs and the steps that should be taken to ensure their specificity.

  2. Mismatch and G-Stack Modulated Probe Signals on SNP Microarrays

    PubMed Central

    Binder, Hans; Fasold, Mario; Glomb, Torsten

    2009-01-01

    Background: Single nucleotide polymorphism (SNP) arrays are important tools widely used for genotyping and copy number estimation. This technology utilizes the specific affinity of fragmented DNA for binding to surface-attached oligonucleotide DNA probes. We analyze the variability of the probe signals of Affymetrix GeneChip SNP arrays as a function of the probe sequence to identify relevant sequence motifs which potentially cause systematic biases of genotyping and copy number estimates. Methodology/Principal Findings: The probe design of GeneChip SNP arrays enables us to disentangle different sources of intensity modulations such as the number of mismatches per duplex, matched and mismatched base pairings including nearest and next-nearest neighbors and their position along the probe sequence. The effect of probe sequence was estimated in terms of triple-motifs with central matches and mismatches which include all 256 combinations of possible base pairings. The probe/target interactions on the chip can be decomposed into nearest neighbor contributions which correlate well with free energy terms of DNA/DNA-interactions in solution. The effect of mismatches is about twice as large as that of canonical pairings. Runs of guanines (G) and the particular type of mismatched pairings formed in cross-allelic probe/target duplexes constitute sources of systematic biases of the probe signals with consequences for genotyping and copy number estimates. The poly-G effect seems to be related to the crowded arrangement of probes which facilitates complex formation of neighboring probes with at minimum three adjacent G's in their sequence. Conclusions: The applied method of “triple-averaging” represents a model-free approach to estimate the mean intensity contributions of different sequence motifs which can be applied in calibration algorithms to correct signal values for sequence effects. Rules for appropriate sequence corrections are suggested. PMID:19924253
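
    A simplified sketch of the triple-averaging idea: accumulate the mean (centered) log-intensity contribution of every 3-mer motif across probe sequences. The real analysis averages over matched/mismatched duplex pairings rather than plain sequence triples, so this only illustrates the bookkeeping.

    ```python
    from collections import defaultdict

    def triple_average(probes, log_intensities):
        """Model-free 'triple-averaging': mean log-intensity contribution of
        each 3-mer motif, averaged over all probes and positions, centered on
        the grand mean so positive/negative values indicate motif effects."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for seq, logI in zip(probes, log_intensities):
            for k in range(len(seq) - 2):
                motif = seq[k:k + 3]
                sums[motif] += logI
                counts[motif] += 1
        grand = sum(sums.values()) / sum(counts.values())
        return {m: sums[m] / counts[m] - grand for m in sums}

    probes = ["ACGTGGGAACGTACGTACGTACGTA", "TTTACGGGGTACCCGGAATTCCGGA"]
    effects = triple_average(probes, [10.2, 9.1])
    print(sorted(effects.items(), key=lambda kv: kv[1])[:3])
    ```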

  3. A new probe using hybrid virus-dye nanoparticles for near-infrared fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Wu, Changfeng; Barnhill, Hannah; Liang, Xiaoping; Wang, Qian; Jiang, Huabei

    2005-11-01

    A fluorescent probe based on the bionanoparticle cowpea mosaic virus has been developed for near-infrared fluorescence tomography. A unique advantage of this probe is that over 30 dye molecules can be loaded onto each viral nanoparticle with an average diameter of 30 nm, making a high local dye concentration (∼1.8 mM) possible without significant fluorescence quenching. This high local dye loading increases the signal-to-noise ratio considerably, and thus the detection sensitivity. We demonstrate successful tomographic fluorescence imaging of a target containing the virus-dye nanoparticles embedded in a tissue-like phantom. Tomographic fluorescence data were obtained through a multi-channel frequency-domain system and the spatial maps of fluorescence quantum yield were recovered with a finite-element-based reconstruction algorithm.

  4. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype Frequency-Domain Streak Camera, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes Frequency-Domain Streak Camera by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency Domain Tomography system with a single spectrometer.

  5. Handheld optical coherence tomography-reflectance confocal microscopy probe for detection of basal cell carcinoma and delineation of margins

    NASA Astrophysics Data System (ADS)

    Iftimia, Nicusor; Yélamos, Oriol; Chen, Chih-Shan J.; Maguluri, Gopi; Cordova, Miguel A.; Sahu, Aditi; Park, Jesung; Fox, William; Alessi-Fox, Christi; Rajadhyaksha, Milind

    2017-07-01

    We present a hand-held implementation and preliminary evaluation of a combined optical coherence tomography (OCT) and reflectance confocal microscopy (RCM) probe for detecting and delineating the margins of basal cell carcinomas (BCCs) in human skin in vivo. A standard OCT approach (spectrometer-based) with a central wavelength of 1310 nm and 0.11 numerical aperture (NA) was combined with a standard RCM approach (830-nm wavelength and 0.9 NA) into a common path hand-held probe. Cross-sectional OCT images and en face RCM images are simultaneously displayed, allowing for three-dimensional microscopic assessment of tumor morphology in real time. Depending on the subtype and depth of the BCC tumor and surrounding skin conditions, OCT and RCM imaging are able to complement each other, the strengths of each helping overcome the limitations of the other. Four representative cases are summarized, out of the 15 investigated in a preliminary pilot study, demonstrating how OCT and RCM imaging may be synergistically combined to more accurately detect BCCs and more completely delineate margins. Our preliminary results highlight the potential benefits of combining the two technologies within a single probe to potentially guide diagnosis as well as treatment of BCCs.

  6. Probing the space of toric quiver theories

    NASA Astrophysics Data System (ADS)

    Hewlett, Joseph; He, Yang-Hui

    2010-03-01

    We demonstrate a practical and efficient method for generating toric Calabi-Yau quiver theories, applicable to both D3 and M2 brane world-volume physics. A new analytic method is presented at low order parameters, and an algorithm for the general case is developed which has polynomial complexity in the number of edges in the quiver. Using this algorithm, carefully implemented, we classify the quiver diagrams and assign possible superpotentials for various small values of the numbers of edges and nodes. We examine some preliminary statistics on this space of toric quiver theories.

  7. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
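
    The ancilla-encoding primitive underlying such algorithms is the Hadamard test: with the ancilla prepared in |+> and a controlled-U applied, the ancilla's X and Y expectations give the real and imaginary parts of <psi|U|psi>. Below is a small statevector illustration computing a two-time spin correlation this way; it is a generic sketch, not the paper's specific construction.

    ```python
    import numpy as np

    def hadamard_test(U, psi):
        """Exact-expectation version of the Hadamard test: returns <psi|U|psi>,
        which an experiment would estimate from the ancilla's <X> and <Y>."""
        dim = len(psi)
        state = np.concatenate([psi, psi]) / np.sqrt(2)  # ancilla |+> ⊗ |psi>
        state[dim:] = U @ state[dim:]                    # controlled-U on |1> branch
        a, b = state[:dim], state[dim:]
        return 2 * np.vdot(a, b)                         # equals <X> + i<Y> on ancilla

    # Two-time correlation <sz(t) sz(0)> for one spin evolving under H = sx
    sz = np.diag([1.0, -1.0]).astype(complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    t = 0.7
    Ut = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sx     # exp(-i * sx * t)
    psi = np.array([1.0, 0.0], dtype=complex)
    corr = hadamard_test(Ut.conj().T @ sz @ Ut @ sz, psi)
    print("C(t) =", corr, "expected:", np.cos(2 * t))
    ```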

  8. Tissue-like Neural Probes for Understanding and Modulating the Brain.

    PubMed

    Hong, Guosong; Viveros, Robert D; Zwang, Theodore J; Yang, Xiao; Lieber, Charles M

    2018-03-19

    Electrophysiology tools have contributed substantially to understanding brain function, yet the capabilities of conventional electrophysiology probes have remained limited in key ways because of large structural and mechanical mismatches with respect to neural tissue. In this Perspective, we discuss how the general goal of probe design in biochemistry, that the probe or label have a minimal impact on the properties and function of the system being studied, can be realized by minimizing structural, mechanical, and topological differences between neural probes and brain tissue, thus leading to a new paradigm of tissue-like mesh electronics. The unique properties and capabilities of the tissue-like mesh electronics as well as future opportunities are summarized. First, we discuss the design of an ultraflexible and open mesh structure of electronics that is tissue-like and can be delivered in the brain via minimally invasive syringe injection like molecular and macromolecular pharmaceuticals. Second, we describe the unprecedented tissue healing without chronic immune response that leads to seamless three-dimensional integration with a natural distribution of neurons and other key cells through these tissue-like probes. These unique characteristics lead to unmatched stable long-term, multiplexed mapping and modulation of neural circuits at the single-neuron level on a year time scale. Last, we offer insights on several exciting future directions for the tissue-like electronics paradigm that capitalize on their unique properties to explore biochemical interactions and signaling in a "natural" brain environment.

  9. The advanced progress of precoding technology in 5g system

    NASA Astrophysics Data System (ADS)

    An, Chenyi

    2017-09-01

    As technology develops, users place ever higher demands on mobile systems, and the emergence of 5G has redirected the development of mobile communication technology. Among the core technologies of 5G mobile communication, large-scale MIMO and precoding are research hotspots. This paper analyzes and summarizes in detail the main linear precoding methods: the maximum ratio transmission (MRT) precoding algorithm, the zero forcing (ZF) precoding algorithm, the minimum mean square error (MMSE) precoding algorithm, and precoding based on the maximum signal-to-leakage-and-noise ratio (SLNR). Nonlinear precoding methods, such as dirty paper precoding and the Tomlinson-Harashima precoding (THP) algorithm, are also examined. Through this analysis, the advantages, disadvantages, and development trends of each algorithm can be identified, giving an overall picture of current precoding technology in 5G systems. The results and data of this paper can therefore serve as a reference for the development of precoding technology in 5G systems.
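
    To make the linear schemes concrete, here is a small numpy sketch comparing MRT and ZF precoders on a random channel; the channel model and the interference-to-signal metric are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    Nt, K = 8, 4                                   # transmit antennas, users
    H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)

    W_mrt = H.conj().T                                   # maximum ratio transmission
    W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # zero forcing

    for name, W in [("MRT", W_mrt), ("ZF", W_zf)]:
        W = W / np.linalg.norm(W)                  # total transmit power normalization
        G = H @ W                                  # effective user-coupling matrix
        desired = np.abs(np.diag(G)) ** 2
        interference = np.sum(np.abs(G) ** 2, axis=1) - desired
        # ZF drives the off-diagonal (inter-user) terms to essentially zero
        print(name, "mean interference-to-signal:",
              float((interference / desired).mean()))
    ```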

  10. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  11. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized as a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on the hearing profiles of hearing-impaired human listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm shifted the impaired hearing profile of the model toward a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  12. Tests of chameleon gravity

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Sakstein, Jeremy

    2018-03-01

    Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f(R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.

  13. The GLAS Algorithm Theoretical Basis Document for Laser Footprint Location (Geolocation) and Surface Profiles

    NASA Technical Reports Server (NTRS)

    Shutz, Bob E.; Urban, Timothy J.

    2014-01-01

    This ATBD summarizes (and links with other ATBDs) the elements used to obtain the geolocated GLAS laser spot location with respect to the Earth center of mass. Because of the approach used, the reference frame used to express the geolocation is linked to the reference frame used for POD and PAD, which are related to the ITRF. The geolocated spot coordinates (which include the elevation or height with respect to an adopted reference ellipsoid) are the inferred position of the laser spot, since the spot location is not directly measured. This document also summarizes the GLAS operation time periods.

  14. Using linear polarization for LWIR hyperspectral sensing of liquid contaminants

    NASA Astrophysics Data System (ADS)

    Thériault, Jean-Marc; Fortin, Gilles; Lacasse, Paul; Bouffard, François; Lavoie, Hugo

    2013-09-01

    We report and analyze recent results obtained with the MoDDIFS sensor (Multi-option Differential Detection and Imaging Fourier Spectrometer) for the passive polarization sensing of liquid contaminants in the long wave infrared (LWIR). Field measurements of polarized spectral radiance made on ethylene glycol and SF96, probed at distances of 6.5 and 450 meters respectively, have been used to develop and test a GLRT-type detection algorithm adapted for liquid contaminants. The GLRT detection results serve to establish the potential and advantage of probing the vertical and horizontal linear hyperspectral polarization components for improving liquid contaminant detection.

  15. A Web-Based Search Service to Support Imaging Spectrometer Instrument Operations

    NASA Technical Reports Server (NTRS)

    Smith, Alexander; Thompson, David R.; Sayfi, Elias; Xing, Zhangfan; Castano, Rebecca

    2013-01-01

    Imaging spectrometers yield rich and informative data products, but interpreting them demands time and expertise. There is a continual need for new algorithms and methods for rapid first-draft analyses to assist analysts during instrument operations. Intelligent data analyses can summarize scenes to draft geologic maps, searching images to direct operator attention to key features. This validates data quality while facilitating rapid tactical decision making to select follow-up targets. Ideally these algorithms would operate in seconds, never grow bored, and be free from observation bias about the kinds of mineralogy that will be found.

  16. The art and science of switching antipsychotic medications, part 2.

    PubMed

    Weiden, Peter J; Miller, Alexander L; Lambert, Tim J; Buckley, Peter F

    2007-01-01

    In the presentation "Switching and Metabolic Syndrome," Weiden summarizes reasons to switch antipsychotics, highlighting weight gain and other metabolic adverse events as recent treatment targets. In "Texas Medication Algorithm Project (TMAP)," Miller reviews the TMAP study design, discusses results related to the algorithm versus treatment as usual, and concludes with the implications of the study. Lambert's presentation, "Dosing and Titration Strategies to Optimize Patient Outcome When Switching Antipsychotic Therapy," reviews the decision-making process when switching patients' medication, addresses dosing and titration strategies to effectively transition between medications, and examines other factors to consider when switching pharmacotherapy.

  17. Dynamics Modelling of Transmission Gear Rattle and Analysis on Influence Factors

    NASA Astrophysics Data System (ADS)

    He, Xiaona; Zhang, Honghui

    2018-02-01

    Based on a vibration dynamics model of the single-stage gear pair of a transmission system, this paper seeks to explain the mechanism of transmission rattle. The dynamic response of the model is analyzed using MATLAB and a Runge-Kutta algorithm, and ways of reducing the rattle noise of automotive transmissions are summarized.
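
    As a hedged illustration of this kind of model, the sketch below integrates a one-degree-of-freedom gear-pair equation with a piecewise-linear backlash (dead-zone) stiffness using a classical fourth-order Runge-Kutta scheme, in Python rather than MATLAB; all parameter values are invented.

    ```python
    import numpy as np

    def backlash_torque(theta, k=1e4, b=0.1):
        """Dead-zone mesh stiffness: zero torque inside the backlash gap of
        half-width b, linear contact stiffness k outside it."""
        return np.where(np.abs(theta) < b, 0.0, k * (theta - np.sign(theta) * b))

    def rhs(t, y, J=1e-3, c=0.05, T0=0.5, w=50.0):
        """Relative gear-angle dynamics driven by a fluctuating engine torque."""
        theta, omega = y
        domega = (T0 * np.sin(w * t) - c * omega - backlash_torque(theta)) / J
        return np.array([omega, domega])

    def rk4(f, y0, t):
        """Classical fourth-order Runge-Kutta integration on a fixed time grid."""
        y, out = np.array(y0, float), []
        for i in range(len(t) - 1):
            h = t[i + 1] - t[i]
            k1 = f(t[i], y)
            k2 = f(t[i] + h / 2, y + h / 2 * k1)
            k3 = f(t[i] + h / 2, y + h / 2 * k2)
            k4 = f(t[i] + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            out.append(y.copy())
        return np.array(out)

    t = np.linspace(0, 2, 20001)
    traj = rk4(rhs, [0.0, 0.0], t)
    # count traversals of the gap (sign changes of the relative angle) as a
    # crude indicator of rattle activity
    print("gap traversals:", int(np.sum(np.diff(np.sign(traj[:, 0])) != 0)))
    ```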

  18. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resulting use of shared resources, regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.
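
    A toy sketch of the summarization idea under simplifying assumptions: each activity's use of a shared metric resource is abstracted to (min, max) bounds, alternative refinements take the envelope, and concurrent activities combine additively. The actual algorithm tracks usage profiles over time; this only shows the flavor of the bound propagation.

    ```python
    def summarize_choice(refinements):
        """An abstract activity that may refine to any one alternative:
        take the envelope (loosest bounds) over the alternatives."""
        lo = min(r[0] for r in refinements)
        hi = max(r[1] for r in refinements)
        return lo, hi

    def summarize_parallel(children):
        """Bounds on simultaneous use when child activities may overlap:
        lower and upper bounds add in the best/worst case."""
        return sum(c[0] for c in children), sum(c[1] for c in children)

    # power bounds (watts) for two concurrent activities, each with alternatives
    drive = summarize_choice([(20, 35), (15, 30)])   # two hypothetical drive modes
    comms = summarize_choice([(5, 10)])
    print(summarize_parallel([drive, comms]))        # -> (20, 45)
    ```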

  19. Direct comparison of the FibroScan XL and M probes for assessment of liver fibrosis in obese and nonobese patients.

    PubMed

    Durango, Esteban; Dietrich, Christian; Seitz, Helmut Karl; Kunz, Cornelia Ursula; Pomier-Layrargues, Gilles T; Duarte-Rojo, Andres; Beaton, Melanie; Elkhashab, Magdy; Myers, Robert P; Mueller, Sebastian

    2013-01-01

    A novel Fibroscan XL probe has recently been introduced and validated for obese patients, and has a diagnostic accuracy comparable with that of the standard M probe. The aim of this study was to analyze and understand the differences between these two probes in nonobese patients, to identify underlying causes for these differences, and to develop a practical algorithm to translate results for the XL probe to those for the M probe. Both probes were directly compared first in copolymer phantoms of varying stiffness (4.8, 11, and 40 kPa) and then in 371 obese and nonobese patients (body mass index, range 17.2-72.4) from German (n = 129) and Canadian (n = 242) centers. Liver stiffness values for both probes correlated better in phantoms than in patients (r = 0.98 versus 0.82, P < 0.001). Significantly more patients could be measured successfully using the XL probe than the M probe (98.4% versus 85.2%, respectively, P < 0.001) while the M probe produced a smaller interquartile range (21% versus 32%). Failure of the M probe to measure liver stiffness was not only observed in patients with a high body mass index and long skin-liver capsule distance but also in some nonobese patients (n = 10) due to quenching of the signal from subcutaneous fat tissue. In contrast with the phantoms, the XL probe consistently produced approximately 20% lower liver stiffness values in humans compared with the M probe. A long skin-liver capsule distance and a high degree of steatosis were responsible for this discordance. Adjustment of cutoff values for the XL probe (<5.5, 5.5-7, 7-10, and >10 kPa for F0, F1-2, F3, and F4 fibrosis, respectively) significantly improved agreement between the two probes from r = 0.655 to 0.679. Liver stiffness can be measured in significantly more obese and nonobese patients using the XL probe than the M probe. However, the XL probe is less accurate and adjusted cutoff values are required.
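
    The adjusted XL-probe cutoffs reported above translate directly into a small classification helper; the following is a minimal sketch (the handling of readings exactly at 7 and 10 kPa is an assumption, since the published ranges are stated without boundary conventions).

    ```python
    def fibrosis_stage_xl(stiffness_kpa):
        """Map an XL-probe liver stiffness reading (kPa) to a fibrosis stage
        using the adjusted XL cutoffs reported in the study:
        <5.5, 5.5-7, 7-10, and >10 kPa for F0, F1-2, F3, and F4."""
        if stiffness_kpa < 5.5:
            return "F0"
        if stiffness_kpa < 7.0:
            return "F1-2"
        if stiffness_kpa <= 10.0:   # boundary handling assumed
            return "F3"
        return "F4"

    for v in (4.2, 6.1, 8.9, 14.3):
        print(v, "kPa ->", fibrosis_stage_xl(v))
    ```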

  20. Direct comparison of the FibroScan XL and M probes for assessment of liver fibrosis in obese and nonobese patients

    PubMed Central

    Durango, Esteban; Dietrich, Christian; Seitz, Helmut Karl; Kunz, Cornelia Ursula; Pomier-Layrargues, Gilles T; Duarte-Rojo, Andres; Beaton, Melanie; Elkhashab, Magdy; Myers, Robert P; Mueller, Sebastian

    2013-01-01

    Background: A novel Fibroscan XL probe has recently been introduced and validated for obese patients, and has a diagnostic accuracy comparable with that of the standard M probe. The aim of this study was to analyze and understand the differences between these two probes in nonobese patients, to identify underlying causes for these differences, and to develop a practical algorithm to translate results for the XL probe to those for the M probe. Methods and results: Both probes were directly compared first in copolymer phantoms of varying stiffness (4.8, 11, and 40 kPa) and then in 371 obese and nonobese patients (body mass index, range 17.2–72.4) from German (n = 129) and Canadian (n = 242) centers. Liver stiffness values for both probes correlated better in phantoms than in patients (r = 0.98 versus 0.82, P < 0.001). Significantly more patients could be measured successfully using the XL probe than the M probe (98.4% versus 85.2%, respectively, P < 0.001) while the M probe produced a smaller interquartile range (21% versus 32%). Failure of the M probe to measure liver stiffness was not only observed in patients with a high body mass index and long skin-liver capsule distance but also in some nonobese patients (n = 10) due to quenching of the signal from subcutaneous fat tissue. In contrast with the phantoms, the XL probe consistently produced approximately 20% lower liver stiffness values in humans compared with the M probe. A long skin-liver capsule distance and a high degree of steatosis were responsible for this discordance. Adjustment of cutoff values for the XL probe (<5.5, 5.5–7, 7–10, and >10 kPa for F0, F1–2, F3, and F4 fibrosis, respectively) significantly improved agreement between the two probes from r = 0.655 to 0.679. Conclusion: Liver stiffness can be measured in significantly more obese and nonobese patients using the XL probe than the M probe. However, the XL probe is less accurate and adjusted cutoff values are required. PMID:24696623

  1. Feasibility of a GNSS-Probe for Creating Digital Maps of High Accuracy and Integrity

    NASA Astrophysics Data System (ADS)

    Vartziotis, Dimitris; Poulis, Alkis; Minogiannis, Alexandros; Siozos, Panayiotis; Goudas, Iraklis; Samson, Jaron; Tossaint, Michel

    The “ROADSCANNER” project addresses the need for Digital Maps (DM) of increased accuracy and integrity, utilizing the latest developments in GNSS to provide the required datasets for novel applications such as navigation-based safety applications, Advanced Driver Assistance Systems (ADAS), and digital automotive simulations. The activity covered in the current paper is the feasibility study, preliminary tests, initial product design, and development plan for an EGNOS-enabled vehicle probe. The vehicle probe will be used for generating high-accuracy, high-integrity, ADAS-compatible digital maps of roads, employing a multiple-pass methodology supported by sophisticated refinement algorithms. Furthermore, the vehicle probe will be equipped with pavement scanning and other data-fusion equipment in order to produce 3D road surface models compatible with the standards of road-tire simulation applications. The project was assigned to NIKI Ltd under the 1st Call for Ideas in the frame of the ESA - Greece Task Force.

  2. Data Analysis of the Floating Potential Measurement Unit aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Barjatya, Aroh; Swenson, Charles M.; Thompson, Donald C.; Wright, Kenneth H., Jr.

    2009-01-01

    We present data from the Floating Potential Measurement Unit (FPMU), which is deployed on the starboard (S1) truss of the International Space Station. The FPMU is a suite of instruments capable of redundant measurements of various plasma parameters. The instrument suite consists of a Floating Potential Probe, a wide-sweeping spherical Langmuir probe, a narrow-sweeping cylindrical Langmuir probe, and a Plasma Impedance Probe. This paper gives a brief overview of the instrumentation and the received data quality, and then presents the algorithm used to reduce I-V curves to plasma parameters. Several hours of data are presented from August 5th, 2006 and March 3rd, 2007. The FPMU-derived plasma density and temperatures are compared with the International Reference Ionosphere (IRI) and USU-Global Assimilation of Ionospheric Measurements (USU-GAIM) models. Our results show that the derived in-situ density matches the USU-GAIM model better than the IRI, and the derived in-situ temperatures are comparable to the average temperatures given by the IRI.
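
    One routine step in reducing Langmuir-probe I-V curves to plasma parameters is extracting the electron temperature from the electron-retardation region, where the collected current grows exponentially with bias. The Python sketch below shows only that generic step under stated assumptions (ideal Maxwellian current and a pre-selected voltage window); it is not the FPMU pipeline itself:

        import numpy as np

        def electron_temperature_eV(v_bias, i_electron):
            """Fit ln(I_e) ~ V/Te + const in the retardation region; Te in eV."""
            v_bias, i_electron = np.asarray(v_bias), np.asarray(i_electron)
            mask = i_electron > 0                  # log defined for positive current only
            slope, _ = np.polyfit(v_bias[mask], np.log(i_electron[mask]), 1)
            return 1.0 / slope                     # slope = 1/Te for I ~ exp(V/Te)

        # Synthetic retardation-region data at Te = 0.1 eV recovers ~0.1:
        v = np.linspace(-0.5, 0.0, 50)
        print(electron_temperature_eV(v, 1e-6 * np.exp(v / 0.1)))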

  3. Slingshot dynamics for self-replicating probes and the effect on exploration timescales

    NASA Astrophysics Data System (ADS)

    Nicholson, Arwen; Forgan, Duncan

    2013-10-01

    Interstellar probes can carry out slingshot manoeuvres around the stars they visit, gaining a boost in velocity by extracting energy from the star's motion around the Galactic Centre. These manoeuvres carry little to no extra energy cost, and in previous work it has been shown that a single Voyager-like probe exploring the Galaxy does so 100 times faster when carrying out these slingshots than when navigating purely by powered flight (Forgan et al. 2012). We expand on these results by repeating the experiment with self-replicating probes. The probes explore a box of stars representative of the local Solar neighbourhood, to investigate how self-replication affects exploration timescales when compared with a single non-replicating probe. We explore three different scenarios of probe behaviour: (i) standard powered flight to the nearest unvisited star (no slingshot techniques used), (ii) flight to the nearest unvisited star using slingshot techniques and (iii) flight to the next unvisited star that will give the maximum velocity boost under a slingshot trajectory. In all three scenarios, we find that, as expected, using self-replicating probes greatly reduces the exploration time, by up to three orders of magnitude for scenarios (i) and (iii) and two orders of magnitude for (ii). The second case (i.e. nearest-star slingshots) remains the most time-effective way to explore a population of stars. Because the decision-making algorithms for the fleet are simple, unanticipated `race conditions' among probes are set up, causing the exploration time of the final stars to become much longer than necessary. From the scaling of the probes' performance with star number, we conclude that a fleet of self-replicating probes can indeed explore the Galaxy in a sufficiently short time to warrant the existence of the Fermi Paradox.

  4. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in the case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.

  5. Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.

    PubMed

    Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias

    2018-05-01

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.

  6. The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in the case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.

  7. Report on the Current Inventory of the Toolbox for Plant Cell Wall Analysis: Proteinaceous and Small Molecular Probes

    PubMed Central

    Rydahl, Maja G.; Hansen, Aleksander R.; Kračun, Stjepan K.; Mravec, Jozef

    2018-01-01

    Plant cell walls are highly complex structures composed of diverse classes of polysaccharides, proteoglycans, and polyphenolics, which have numerous roles throughout the life of a plant. Significant research efforts aim to understand the biology of this cellular organelle and to facilitate cell-wall-based industrial applications. To accomplish this, researchers need to be provided with a variety of sensitive and specific detection methods for separate cell wall components and their various molecular characteristics, in vitro as well as in situ. Cell wall component-directed molecular detection probes (in short: cell wall probes, CWPs) are an essential asset to the plant glycobiology toolbox. To date, a relatively large set of CWPs has been produced—mainly consisting of monoclonal antibodies, carbohydrate-binding modules, synthetic antibodies produced by phage display, and small molecular probes. In this review, we summarize the state-of-the-art knowledge about these CWPs: their classification and their advantages and disadvantages in different applications. In particular, we elaborate on the recent advances in non-conventional approaches to the generation of novel CWPs, and identify the remaining gaps in terms of target recognition. This report also highlights the addition of new “compartments” to the probing toolbox, which is filled with novel chemical biology tools, such as metabolic labeling reagents and oligosaccharide conjugates. In the end, we also forecast future developments in this dynamic field. PMID:29774041

  8. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class, and a map classifying the terrain imaged rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement.
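
    A minimal Python sketch of the cluster-then-downlink idea (plain k-means on precomputed feature vectors; the feature extraction, the choice of k, and the names below are our assumptions, not the flight software):

        import numpy as np

        def kmeans(features, k, iters=50, seed=0):
            """Cluster image feature vectors into k classes of similar imagery."""
            features = np.asarray(features, float)
            rng = np.random.default_rng(seed)
            centers = features[rng.choice(len(features), k, replace=False)]
            for _ in range(iters):
                dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = features[labels == j].mean(axis=0)
            return labels, centers

        def downlink_candidates(features, labels, centers):
            """One representative image index per class (nearest to each center)."""
            features = np.asarray(features, float)
            reps = []
            for j, c in enumerate(centers):
                idx = np.where(labels == j)[0]
                reps.append(int(idx[np.linalg.norm(features[idx] - c, axis=1).argmin()]))
            return reps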

  9. An algorithm for solving the system-level problem in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with an increasing number of members (which simulate disciplines), and the results are summarized.

  10. Accelerated Dimension-Independent Adaptive Metropolis

    DOE PAGES

    Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...

    2016-10-27

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
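
    For orientation, here is a minimal Python sketch of the baseline adaptive Metropolis idea this record builds on (the classic Haario-style AM with an empirically adapted Gaussian proposal; this is not the dimension-independent or GPU-accelerated DIAM variant of the paper):

        import numpy as np

        def adaptive_metropolis(log_target, x0, n_steps, adapt_start=200, seed=0):
            rng = np.random.default_rng(seed)
            d = len(x0)
            sd, eps = 2.4**2 / d, 1e-8            # standard AM scaling and jitter
            chain = [np.asarray(x0, float)]
            cov = np.eye(d)                       # fixed proposal until adaptation starts
            lp = log_target(chain[-1])
            for t in range(n_steps):
                if t >= adapt_start:              # adapt covariance from chain history
                    cov = sd * np.cov(np.array(chain).T) + sd * eps * np.eye(d)
                prop = rng.multivariate_normal(chain[-1], cov)
                lp_prop = log_target(prop)
                if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
                    chain.append(prop); lp = lp_prop
                else:
                    chain.append(chain[-1].copy())
            return np.array(chain)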

  11. Accelerated Dimension-Independent Adaptive Metropolis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuxin; Keyes, David E.; Law, Kody J.

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.

  12. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
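
    A minimal, generic particle swarm loop of the kind the outer stage of Dual-PSO performs, with mobile nodes as particles (the fitness function below is a placeholder to be maximized, e.g. a pollutant-concentration score derived from the UV-visible measurements; all parameter values are illustrative):

        import numpy as np

        def pso_search(fitness, lo, hi, n_particles=8, iters=100,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n_particles, len(lo)))     # node positions
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
            gbest = pbest[pbest_f.argmax()].copy()
            for _ in range(iters):
                r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)                      # stay inside the water body
                f = np.array([fitness(p) for p in x])
                better = f > pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmax()].copy()
            return gbest                                        # estimated source position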

  13. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  14. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.

  15. No drive line, no seal, no bearing and no wear: magnetics for impeller suspension and flow assessment in a new VAD.

    PubMed

    Huber, Christoph H; Tozzi, Piergiorgio; Hurni, Michel; von Segesser, Ludwig K

    2004-06-01

    The new magnetically suspended axial pump is free of seals, bearings, mechanical friction and wear. In the absence of a drive shaft or flow meter, pump flow assessment is made with an algorithm based on the currents required for impeller rotation and stabilization. The aim of this study is to validate pump performance, algorithm-based flow, and effective flow. A series of bovine experiments was carried out over 6 h after instrumentation with pressure transducers, a continuous-cardiac-output catheter, and intracardiac ultrasound (AcuNav). Pump implantation was through a median sternotomy (LV-->VAD-->calibrated transonic-flow-probe-->aorta). A transonic HT311 flow probe was fixed onto the outflow cannula for flow comparison. Animals were electively sacrificed, and at necropsy systematic pump inspection and renal embolus scoring were performed. The observation period was 340+/-62.4 min. The axial pump generated a mean arterial pressure of 58.8+/-14.3 mmHg (max 117 mmHg) running at a speed of 6591.3+/-1395.4 rev./min (min 5000/max 8500 rev./min) and generating 2.5+/-1.0 l/min (min 1.4/max 6.0 l/min) of flow. Correlation between the results of the pump flow algorithm and measured pump flow was linear (y=1.0339x, R2=0.9357). VAD explants were free of macroscopic thrombi. The renal embolus score was 0+/-0. The magnetically suspended axial flow pump provides excellent left ventricular support. The pump flow algorithm used is accurate and reliable. Therefore, there is no need for direct flow measurement.

  16. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands, whereas PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
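
    A minimal Python sketch of the matching step, with zlib as a stand-in codec (the paper uses standard image compression such as JPEG) and with one plausible reading of the composite compression ratio; the exact CCR formula here is our assumption, not the authors' definition:

        import zlib

        def ccr(probe: bytes, gallery: bytes) -> float:
            """Composite compression ratio: higher when the two images share structure."""
            c_p = len(zlib.compress(probe))
            c_g = len(zlib.compress(gallery))
            c_mix = len(zlib.compress(probe + gallery))
            return (c_p + c_g) / c_mix

        def match(probe: bytes, gallery_set: dict) -> str:
            """Return the gallery identity whose image compresses best with the probe."""
            return max(gallery_set, key=lambda name: ccr(probe, gallery_set[name]))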

  17. Temporally separating Cherenkov radiation in a scintillator probe exposed to a pulsed X-ray beam.

    PubMed

    Archer, James; Madden, Levi; Li, Enbang; Carolan, Martin; Petasecca, Marco; Metcalfe, Peter; Rosenfeld, Anatoly

    2017-10-01

    Cherenkov radiation is generated in optical systems exposed to ionising radiation. In water or plastic devices, if the incident radiation has components with high enough energy (for example, electrons or positrons with energy greater than 175 keV), Cherenkov radiation will be generated. A scintillator dosimeter that collects optical light, guided by optical fibre, will have Cherenkov radiation generated throughout the length of fibre exposed to the radiation field, compromising the signal. We present a novel algorithm to separate the Cherenkov radiation signal that requires only a single probe, provided the radiation source is pulsed, such as a linear accelerator in external beam radiation therapy. We use a slow scintillator (BC-444) that, in a constant beam of radiation, reaches peak light output after 1 microsecond, while the Cherenkov signal is detected nearly instantly. This allows our algorithm to separate the scintillator signal from the Cherenkov signal. The relative beam profile and depth dose of a linear accelerator 6 MV X-ray field were reconstructed using the algorithm. The optimisation method improved the fit to the ionisation chamber data and improved the reliability of the measurements. The algorithm was able to remove 74% of the Cherenkov light at the expense of only 1.5% of the scintillation light. Further characterisation of the Cherenkov radiation signal has the potential to improve the results and allow this method to be used as a simpler optical fibre dosimeter for quality assurance in external beam therapy.
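
    One simple way to realize such a temporal separation, sketched in Python under stated assumptions: each digitized pulse is modeled as a linear mix of a near-instantaneous Cherenkov template and a slow (~1 microsecond rise) scintillation template, and the two amplitudes are recovered by least squares. Template shapes and time constants are illustrative, not the authors' fitted model:

        import numpy as np

        t = np.linspace(0.0, 5e-6, 500)                        # 5-microsecond window
        cherenkov = np.exp(-t / 5e-9)                          # effectively instantaneous
        scint = (1 - np.exp(-t / 4e-7)) * np.exp(-t / 2e-6)    # slow rise, slow decay
        cherenkov, scint = cherenkov / cherenkov.sum(), scint / scint.sum()

        def unmix(pulse):
            """Return (scintillation, Cherenkov) amplitudes for one measured pulse."""
            A = np.column_stack([scint, cherenkov])
            coeffs, *_ = np.linalg.lstsq(A, np.asarray(pulse, float), rcond=None)
            return coeffs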

  18. LIDAR Remote Sensing Concepts

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1997-01-01

    The primary goal of the NASA New Millennium Program (NMP) is to develop technology for use on future operational missions. The Program consists of two thrust areas, one oriented towards developing technologies for Deep Space Probes and one oriented towards developing technology for Earth Observing Probes. Each thrust area intends to fly several technology demonstrator spacecraft designated DS-X and EO-X respectively where X is the mission number. Each mission has an approximately $100 million cap on total mission cost. The EO-1 mission has been selected and is under development. The instrument discussed here was submitted by NASA MSFC as a potential candidate for the EO-2 or EO-3 missions due to launch in 2001 and late 2002 or early 2003 respectively. This report summarizes and follows the format of the material provided to NMP.

  19. LIDAR Remote Sensing Concepts

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1997-01-01

    The primary goal of the NASA New Millennium Program (NMP) is to develop technology for use on future operational missions. The Program consists of two thrust areas, one oriented towards developing technologies for Deep Space Probes and one oriented towards developing technology for Earth Observing Probes. Each thrust area intends to fly several technology demonstrator spacecraft designated DS-X and EO-X respectively where X is the mission number. Each mission has an approximately $100 million cap on total mission cost. The EO-1 mission has been selected and is under development. The instrument discussed here was submitted by NASA MSFC as a potential candidate for the EO-2 or EO-3 missions due to launch in 2001 and late 2002 or early 2003 respectively. This report summarizes and follows the format of the material provided to NMP.

  20. Minimal Length Scale Scenarios for Quantum Gravity.

    PubMed

    Hossenfelder, Sabine

    2013-01-01

    We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.

  1. Conflict Probe Concepts Analysis in Support of Free Flight

    NASA Technical Reports Server (NTRS)

    Warren, Anthony W.; Schwab, Robert W.; Geels, Timothy J.; Shakarian, Arek

    1997-01-01

    This study develops an operational concept and requirements for en route Free Flight using a simulation of the Cleveland Air Route Traffic Control Center, and develops requirements for an automated conflict probe for use in the Air Traffic Control (ATC) Centers. In this paper, we present the results of simulation studies and summarize implementation concepts and infrastructure requirements for the transition from the current air traffic control system to mature Free Flight. The transition path to Free Flight envisioned in this paper assumes an orderly development of communications, navigation, and surveillance (CNS) technologies based on results from our simulation studies. The main purpose of this study is to provide an overall context and methodology for evaluating airborne and ground-based requirements for cooperative development of the future ATC system.

  2. Image Guided Biodistribution and Pharmacokinetic Studies of Theranostics

    PubMed Central

    Ding, Hong; Wu, Fang

    2012-01-01

    Image-guided techniques are playing an increasingly important role in the investigation of the biodistribution and pharmacokinetics of drugs or drug delivery systems in various diseases, especially cancers. Besides anatomical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), molecular imaging strategies including optical imaging, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) facilitate the localization and quantification of radioisotope- or optical-probe-labeled nanoparticle delivery systems in the category of theranostics. This review summarizes the quantitative measurement of the biodistribution and pharmacokinetics of theranostics in the fields of new drug/probe development, diagnosis, and treatment-process monitoring, as well as the tracking of blood-brain barrier (BBB) penetration by highly sensitive imaging methods, together with the applications of the representative imaging modalities. PMID:23227121

  3. An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection.

    PubMed

    Xu, Cheng; Liu, Jiamei; Yang, Weifeng; Shu, Yayun; Wei, Zhipeng; Zheng, Weiwei; Feng, Xin; Zhou, Fengfeng

    2018-04-01

    Transcriptomic and methylomic patterns represent two major OMIC data sources impacted by both inheritable genetic information and environmental factors, and have been widely used as disease diagnosis and prognosis biomarkers. Modern transcriptomic and methylomic profiling technologies detect the status of tens of thousands or even millions of probing residues in the human genome, and introduce a major computational challenge for the existing feature selection algorithms. This study proposes a three-step feature selection algorithm, TriVote, to detect a subset of transcriptomic or methylomic residues with highly accurate binary classification performance. TriVote outperforms both filter and wrapper feature selection algorithms with both higher classification accuracy and smaller feature number on 17 transcriptomes and two methylomes. Biological functions of the methylome biomarkers detected by TriVote were discussed for their disease associations. An easy-to-use Python package is also released to facilitate the further applications.

  4. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
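
    The range-image formulation is what makes the O(N) cost possible: a single Euclidean distance transform of the template's valid pixels turns every probe-pixel distance query into a lookup. A minimal Python sketch (masks and alignment assumed given; a robust variant would replace the max with a rank or quantile statistic):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def directed_hausdorff_2d(probe_mask, template_mask):
            """Directed Hausdorff distance between two aligned 2D range-image masks."""
            # EDT of the template background: each pixel then holds the distance to
            # the nearest template pixel, so every probe lookup is O(1).
            dist_to_template = distance_transform_edt(~np.asarray(template_mask, bool))
            return float(dist_to_template[np.asarray(probe_mask, bool)].max())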

  5. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    PubMed

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.

  6. Dynamic phasing of multichannel cw laser radiation by means of a stochastic gradient algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, V A; Volkov, M V; Garanin, S G

    2013-09-30

    The phasing of a multichannel laser beam by means of an iterative stochastic parallel gradient (SPG) algorithm has been numerically and experimentally investigated. The operation of the SPG algorithm is simulated, the acceptable range of amplitudes of probe phase shifts is found, and the algorithm parameters at which the desired Strehl number can be obtained with a minimum number of iterations are determined. An experimental bench with phase modulators based on lithium niobate, which are controlled by a multichannel electronic unit with a real-time microcontroller, has been designed. Phasing of 16 cw laser beams at a system response bandwidth of 3.7 kHz and phase thermal distortions in a frequency band of about 10 Hz is experimentally demonstrated. The experimental data are in complete agreement with the calculation results.
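
    The SPG iteration itself is compact; a minimal Python sketch (the gain, the probe amplitude, and the metric J, e.g. a Strehl-like far-field intensity measurement, are illustrative assumptions):

        import numpy as np

        def spgd_phase(measure_J, n_channels, gain=0.5, probe_amp=0.1,
                       iters=500, seed=0):
            rng = np.random.default_rng(seed)
            u = np.zeros(n_channels)              # control phases sent to the modulators
            for _ in range(iters):
                du = probe_amp * rng.choice([-1.0, 1.0], n_channels)  # parallel probe shifts
                dJ = measure_J(u + du) - measure_J(u - du)            # two-sided metric change
                u += gain * dJ * du               # stochastic gradient ascent on J
            return u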

  7. Improving Electronic Sensor Reliability by Robust Outlier Screening

    PubMed Central

    Moreno-Lizaranzu, Manuel J.; Cuesta, Federico

    2013-01-01

    Electronic sensors are widely used in different application areas, and in some of them, such as automotive or medical equipment, they must perform with an extremely low defect rate. Increasing reliability is paramount. Outlier detection algorithms are a key component in screening latent defects and decreasing the number of customer quality incidents (CQIs). This paper focuses on new spatial algorithms (Good Die in a Bad Cluster with Statistical Bins (GDBC SB) and Bad Bin in a Bad Cluster (BBBC)) and an advanced outlier screening method, called Robust Dynamic Part Averaging Testing (RDPAT), as well as two practical improvements, which significantly enhance existing algorithms. Those methods have been used in production in Freescale® Semiconductor probe factories around the world for several years. Moreover, a study was conducted with production data of 289,080 dice with 26 CQIs to determine and compare the efficiency and effectiveness of all these algorithms in identifying CQIs. PMID:24113682

  8. Improving electronic sensor reliability by robust outlier screening.

    PubMed

    Moreno-Lizaranzu, Manuel J; Cuesta, Federico

    2013-10-09

    Electronic sensors are widely used in different application areas, and in some of them, such as automotive or medical equipment, they must perform with an extremely low defect rate. Increasing reliability is paramount. Outlier detection algorithms are a key component in screening latent defects and decreasing the number of customer quality incidents (CQIs). This paper focuses on new spatial algorithms (Good Die in a Bad Cluster with Statistical Bins (GDBC SB) and Bad Bin in a Bad Cluster (BBBC)) and an advanced outlier screening method, called Robust Dynamic Part Averaging Testing (RDPAT), as well as two practical improvements, which significantly enhance existing algorithms. Those methods have been used in production in Freescale® Semiconductor probe factories around the world for several years. Moreover, a study was conducted with production data of 289,080 dice with 26 CQIs to determine and compare the efficiency and effectiveness of all these algorithms in identifying CQIs.

  9. Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.

    PubMed

    Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L

    2011-01-01

    Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and of the image-processing algorithms in order to automatically calculate flow velocity online. Results of investigations conducted in a CSO are presented. The system was found to reliably measure water velocities, thereby providing the means to understand particular hydraulic behaviors.
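
    A minimal Python sketch of the core velocimetry step consistent with this description: estimate the frame-to-frame shift of surface features by FFT cross-correlation, then scale by pixel size and frame interval (real systems add the real-world camera calibration, feature detection, and outlier handling the abstract mentions):

        import numpy as np

        def pixel_shift(f0, f1):
            """Integer-pixel displacement of frame f1 relative to frame f0."""
            F0, F1 = np.fft.fft2(f0 - f0.mean()), np.fft.fft2(f1 - f1.mean())
            corr = np.fft.ifft2(F0.conj() * F1).real
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy  # unwrap negatives
            dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
            return dy, dx

        def surface_velocity(f0, f1, metres_per_px, dt_seconds):
            dy, dx = pixel_shift(f0, f1)
            return dx * metres_per_px / dt_seconds, dy * metres_per_px / dt_seconds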

  10. Antarctic Meteorite Newsletter, volume 8, number 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Preliminary descriptions and classifications of meteorites examined since the July 1984 newsletter are presented. Each macroscopic description summarizes features that were visible to the eye (with, at most, 50X magnification). Each thin section description represents features that were found in a survey-level examination of a polished thin section that was prepared from a small (usually exterior) chip of the meteorite. Classification is based on microscopic petrography and reconnaissance-level electron-probe microanalyses.

  11. Probing the Potential of Electronic Publishing and Computer Software for Personal Use. Report of an Aspen Institute Conference (Queenstown, Maryland, January 12-14, 1988). Communications and Society Forum Report #6.

    ERIC Educational Resources Information Center

    Rice, Michael

    This report summarizes the presentations and discussions at a conference on the development of new products and services in electronic publishing and computer software for personal use. It is noted that the 26 participants came from a variety of perspectives, including the publishing and computer software industries; the fields of home…

  12. Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision

    NASA Astrophysics Data System (ADS)

    Rojer, Alan S.; Schwartz, Eric L.

    1991-02-01

    Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we review current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our lab demonstrating that a numerical conformal mapping, a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (i) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; and (ii) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the "proto-column" algorithm. This work provides a reference point for current engineering approaches to novel architectures for…

  13. Cosmological Parameters and Hyper-Parameters: The Hubble Constant from Boomerang and Maxima

    NASA Astrophysics Data System (ADS)

    Lahav, Ofer

    Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalise this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint likelihood function a set of `Hyper-Parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the Hyper-Parameters, is very simple to implement. We illustrate the method by estimating the Hubble constant H0 from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO, Boomerang and Maxima). The approach can be generalised for a combination of cosmic probes, and for other priors on the Hyper-Parameters. Reference: Lahav, Bridle, Hobson, Lasenby & Sodre, 2000, MNRAS, in press (astro-ph/9912105)
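
    A short sketch of the key formulas as we read the cited reference (our notation; treat the details as assumptions rather than an exact restatement). With M independent probes, probe j contributing a misfit \chi^2_j over N_j data points, the standard joint likelihood and its Hyper-Parameter generalization are

        \[
        -2\ln P_{\mathrm{standard}} \;=\; \sum_{j=1}^{M} \chi^2_j ,
        \qquad
        -2\ln P_{\mathrm{HP}} \;=\; \sum_{j=1}^{M} N_j \ln \chi^2_j ,
        \]

    where the second form follows from weighting each probe's contribution as \alpha_j \chi^2_j and marginalizing over the Hyper-Parameters \alpha_j with uniform priors on \ln\alpha_j, so that no probe's relative weight has to be chosen by hand.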

  14. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype Frequency-Domain Streak Camera, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency Domain Tomography (FDT) generalizes the Frequency-Domain Streak Camera by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency Domain Tomography system with a single spectrometer.

  15. Comparison of chirped-probe-pulse and hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering for combustion thermometry.

    PubMed

    Richardson, Daniel R; Stauffer, Hans U; Roy, Sukesh; Gord, James R

    2017-04-10

    A comparison is made between two ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry techniques, hybrid femtosecond/picosecond (fs/ps) CARS and chirped-probe-pulse (CPP) fs-CARS, both of which have become standards for high-repetition-rate thermometry in the combustion diagnostics community. These two variants of fs-CARS differ only in the characteristics of the ps-duration probe pulse; in hybrid fs/ps CARS a spectrally narrow, time-asymmetric probe pulse is used, whereas a highly chirped, spectrally broad probe pulse is used in CPP fs-CARS. Temperature measurements were performed using both techniques in near-adiabatic flames in the temperature range 1600-2400 K and for probe time delays of 0-30 ps. Under these conditions, both techniques are shown to exhibit temperature measurement accuracies and precisions similar to previously reported values and to each other. However, it is observed that initial calibration fits to the spectrally broad CPP results require more fitting parameters and a more robust optimization algorithm, and therefore significantly increased computational cost and complexity, compared to the fitting of hybrid fs/ps CARS data.

  16. Machine learning aided diagnosis of hepatic malignancies through in vivo dielectric measurements with microwaves.

    PubMed

    Yilmaz, Tuba; Kılıç, Mahmut Alp; Erdoğan, Melike; Çayören, Mehmet; Tunaoğlu, Doruk; Kurtoğlu, İsmail; Yaslan, Yusuf; Çayören, Hüseyin; Arkan, Akif Enes; Teksöz, Serkan; Cancan, Gülden; Kepil, Nuray; Erdamar, Sibel; Özcan, Murat; Akduman, İbrahim; Kalkan, Tunaya

    2016-06-20

    In the past decade, extensive research on the dielectric properties of biological tissues has characterized the dielectric property discrepancy between malignant and healthy tissues. This discrepancy has enabled the development of microwave therapeutic and diagnostic technologies. Traditionally, dielectric property measurements of biological tissues are performed with the well-known contact probe (open-ended coaxial probe) technique. However, the technique suffers from limited accuracy and low loss resolution for permittivity and conductivity measurements, respectively. Therefore, despite the inherent dielectric property discrepancy, a rigorous measurement routine with open-ended coaxial probes is required for accurate differentiation of malignant and healthy tissues. In this paper, we propose to eliminate the need for multiple open-ended coaxial probe measurements for malignant and healthy tissue differentiation by applying a support vector machine (SVM) classification algorithm to the dielectric measurement data. To do so, in vivo malignant and healthy rat liver tissue dielectric properties are first measured with the open-ended coaxial probe technique between 500 MHz and 6 GHz. Cole-Cole functions are fitted to the measured dielectric properties and the measurement data are verified against the literature. Malignant tissue classification is realized by applying SVM to the open-ended coaxial probe measurements, with accuracy (F1 score) as high as 99.2%.
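
    A minimal scikit-learn sketch of the classification step, assuming each measurement has already been reduced to a feature vector (e.g. fitted Cole-Cole parameters); the data here are random placeholders, not the paper's measurements:

        import numpy as np
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((100, 4))            # placeholder features per probe measurement
        y = rng.integers(0, 2, 100)         # placeholder labels: 0 healthy, 1 malignant

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        print("F1 score:", f1_score(y_te, clf.predict(X_te)))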

  17. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
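
    The "simplex algorithm" here is the Nelder-Mead simplex used for curve fitting, which is readily sketched in Python with SciPy (the model function below is a placeholder rise-and-clearance curve, not the paper's diffusion model; parameter names are ours):

        import numpy as np
        from scipy.optimize import minimize

        def model_curve(t, d_eff, k_clear):
            """Placeholder outlet-concentration curve: diffusive rise, clearance decay."""
            return (1.0 - np.exp(-d_eff * t)) * np.exp(-k_clear * t)

        def fit_parameters(t, c_measured, p0=(0.5, 0.01)):
            sse = lambda p: np.sum((model_curve(t, *p) - c_measured) ** 2)
            return minimize(sse, p0, method="Nelder-Mead").x   # simplex search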

  18. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  19. ALGORITHMS FOR ESTIMATING RESTING METABOLIC RATE AND ACTIVITY SPECIFIC VENTILATION RATES FOR USE IN COMPLEX EXPOSURE AND INTAKE DOSE MODELS

    EPA Science Inventory

    This work summarizes advancements made that allow for better estimation of resting metabolic rate (RMR) and subsequent estimation of ventilation rates (i.e., total ventilation (VE) and alveolar ventilation (VA)) for individuals of both genders and all ages. ...

  20. Optimal tree-stem bucking of northeastern species of China

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Joseph McNeel

    2004-01-01

    An application of optimal tree-stem bucking to the northeastern tree species of China is reported. The bucking procedures used in this region are summarized, which are the basic guidelines for the optimal bucking design. The directed graph approach was adopted to generate the bucking patterns by using the network analysis labeling algorithm. A computer-based bucking...

  1. An Annotated Bibliography of Current Literature Dealing with the Effective Teaching of Computer Programming in High Schools.

    ERIC Educational Resources Information Center

    Taylor, Karen A.

    This review of the literature and annotated bibliography summarizes the available research relating to teaching programming to high school students. It is noted that, while the process of programming a computer could be broken down into five steps--problem definition, algorithm design, code writing, debugging, and documentation--current research…

  2. Two MODIS Aerosol Products Over Ocean on the Terra and Aqua CERES SSF Datasets

    NASA Technical Reports Server (NTRS)

    Ignatov, Alexander; Minnis, Patrick; Loeb, Norman; Wielicki, Bruce; Miller, Walter; Sun-Mack, Sunny; Tanre, Didier; Remer, Lorraine; Laszlo, Istvan; Geier, Erika

    2004-01-01

    Over ocean, two aerosol products are reported on the Terra and Aqua CERES SSFs. Both are derived from MODIS, but using different sampling and aerosol algorithms. This study briefly summarizes these products and compares them using two weeks of global Terra data from 15-21 December 2000 and 1-7 June 2001.

  3. ICASE

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

  4. Sparse representation and Bayesian detection of genome copy number alterations from microarray data.

    PubMed

    Pique-Regi, Roger; Monso-Varona, Jordi; Ortega, Antonio; Seeger, Robert C; Triche, Timothy J; Asgharzadeh, Shahab

    2008-02-01

    Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) that are associated with the development and behavior of tumors. Advances in microarray technology have allowed for greater resolution in detection of DNA copy number changes (amplifications or deletions) across the genome. However, the increase in number of measured signals and accompanying noise from the array probes present a challenge in accurate and fast identification of breakpoints that define CNA. This article proposes a novel detection technique that exploits the use of piecewise constant (PWC) vectors to represent genome copy number and sparse Bayesian learning (SBL) to detect CNA breakpoints. First, a compact linear algebra representation for the genome copy number is developed from normalized probe intensities. Second, SBL is applied and optimized to infer locations where copy number changes occur. Third, a backward elimination (BE) procedure is used to rank the inferred breakpoints, and a cut-off point can be efficiently adjusted in this procedure to control the false discovery rate (FDR). The performance of our algorithm is evaluated using simulated and real genome datasets and compared to other existing techniques. Our approach achieves the highest accuracy and lowest FDR while improving computational speed by several orders of magnitude. The proposed algorithm has been developed into a free standing software application (GADA, Genome Alteration Detection Algorithm). http://biron.usc.edu/~piquereg/GADA
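
    A compact sketch of the linear model implied by the abstract (symbols ours, not the paper's notation): the vector y of normalized probe intensities along a chromosome is written as

        \[
        \mathbf{y} \;=\; F\,\mathbf{w} + \boldsymbol{\varepsilon},
        \qquad
        \boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I),
        \]

    where each column of F is a step function for one candidate breakpoint, so a sparse weight vector w yields a piecewise constant copy-number profile; SBL then infers which few weights are nonzero (the breakpoints), and backward elimination ranks them for FDR control.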

  5. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. To address this objection, results for a 30-bit, order-three deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  6. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE PAGES

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
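
    As a concrete instance of the least-squares category mentioned above, the sketch below shows one common linearized formulation of circular (TOA) localization: subtracting the first range equation removes the quadratic term in the unknown position, leaving a linear system. The sensor layout and noise level are invented for illustration.

    import numpy as np

    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    source = np.array([3.0, 7.0])
    rng = np.random.default_rng(1)
    ranges = np.linalg.norm(sensors - source, axis=1) + 0.01 * rng.standard_normal(4)

    # 2(s_i - s_0) . p = (r_0^2 - r_i^2) + (|s_i|^2 - |s_0|^2), for i = 1..N-1
    A = 2.0 * (sensors[1:] - sensors[0])
    b = (ranges[0]**2 - ranges[1:]**2) + (np.sum(sensors[1:]**2, axis=1)
                                          - np.sum(sensors[0]**2))
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(estimate)   # close to (3, 7)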

  7. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  8. A refraction-corrected tomographic algorithm for immersion laser-ultrasonic imaging of solids with piecewise linear surface profile

    NASA Astrophysics Data System (ADS)

    Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.

    2018-05-01

    In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.

  9. The Wide-Field Imaging Interferometry Testbed: Recent Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2006-01-01

    We present recent results from the Wide-Field Imaging Interferometry Testbed (WIIT). The data acquired with the WIIT are "double Fourier" data, including both spatial and spectral information within each data cube. We have been working with these data and have started to develop algorithms, implementations, and techniques for reducing them. Such algorithms and tools are of great importance for a number of proposed future missions, including the Space Infrared Interferometric Telescope (SPIRIT), the Submillimeter Probe of the Evolution of Cosmic Structure (SPECS), and the Terrestrial Planet Finder Interferometer (TPF-I)/Darwin. Recent results are discussed and future study directions are described.

  10. Use of Fuzzycones for Sun-Only Attitude Determination: THEMIS Becomes ARTEMIS

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Felikson, Denis; Sedlak, Joseph E.

    2009-01-01

    In order for two THEMIS probes to successfully transition to ARTEMIS, it will be necessary to determine attitudes with moderate accuracy using Sun sensor data only. To meet this requirement, an implementation of the Fuzzycones maximum likelihood algorithm was developed. The effect of different measurement uncertainty models on Fuzzycones attitude accuracy was investigated, and a bin-transition technique was introduced to improve attitude accuracy using data with uniform error distributions. The algorithm was tested with THEMIS data and in simulations. The analysis results show that the attitude requirements can be met using Fuzzycones and data containing two bin-transitions.

  11. Spectral Kinetic Simulation of the Ideal Multipole Resonance Probe

    NASA Astrophysics Data System (ADS)

    Gong, Junbo; Wilczek, Sebastian; Szeremley, Daniel; Oberrath, Jens; Eremin, Denis; Dobrygin, Wladislaw; Schilling, Christian; Friedrichs, Michael; Brinkmann, Ralf Peter

    2015-09-01

    The term Active Plasma Resonance Spectroscopy (APRS) denotes a class of diagnostic techniques which utilize the natural ability of plasmas to resonate on or near the electron plasma frequency ωpe: an RF signal in the GHz range is coupled into the plasma via an electric probe; the spectral response of the plasma is recorded, and a mathematical model is used to determine plasma parameters such as the electron density ne or the electron temperature Te. One particular realization of the method is the Multipole Resonance Probe (MRP). The ideal MRP is a geometrically simplified version of that probe; it consists of two dielectrically shielded, hemispherical electrodes to which the RF signal is applied. A particle-based numerical algorithm is described which enables a kinetic simulation of the interaction of the probe with the plasma. Similar to the well-known particle-in-cell (PIC) scheme, it consists of two modules, a particle pusher and a field solver. The Poisson solver determines, with the help of a truncated expansion into spherical harmonics, the new electric field at each particle position directly, without invoking a numerical grid. The effort of the scheme scales linearly with the ensemble size N.

  12. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara.

    PubMed

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D; Rothman, Richard E

    2012-09-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR consisting of initial broad-based detection of Eubacteriales by a universal probe, followed by Gram typing, and pathogen-specific probes was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay in 127 CSF samples was evaluated in samples from patients from Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species level information for multiple bacterial meningitis agents in clinical samples. Copyright © 2012 Elsevier Inc. All rights reserved.
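
    The diagnostic figures quoted above follow from standard contingency-table definitions. A minimal sketch, using hypothetical counts except for the reported Gram-typing tally (82 of 83):

    def sensitivity(tp, fn):
        # true positive rate: TP / (TP + FN)
        return tp / (tp + fn)

    def specificity(tn, fp):
        # true negative rate: TN / (TN + FP)
        return tn / (tn + fp)

    print(round(100 * 82 / 83, 1))          # 98.8, the reported Gram-typing rate
    print(round(100 * sensitivity(28, 1), 1))   # hypothetical counts for illustration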

  13. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara☆, ☆☆,★

    PubMed Central

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E.; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D.; Rothman, Richard E.

    2012-01-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR consisting of initial broad-based detection of Eubacteriales by a universal probe, followed by Gram typing, and pathogen-specific probes was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay in 127 CSF samples was evaluated in samples from patients from Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species level information for multiple bacterial meningitis agents in clinical samples. PMID:22809694

  14. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    To improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for a parallel interpolation CIC filter algorithm of arbitrary factor and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the known shortcomings of the CIC filter, we added compensation: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385

  15. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    To improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for a parallel interpolation CIC filter algorithm of arbitrary factor and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the known shortcomings of the CIC filter, we added compensation: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
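
    To make the CIC structure mentioned above concrete, here is a hedged sketch of a plain serial N-stage CIC interpolator (comb stages at the low rate, zero-stuffing by R, integrator stages at the high rate). It illustrates the basic filter only, not the paper's 8× parallel decomposition or its pass-band compensation; R, N and M defaults are illustrative.

    import numpy as np

    def cic_interpolate(x, R=8, N=3, M=1):
        y = np.asarray(x, dtype=float)
        for _ in range(N):                    # comb stages: y[n] = y[n] - y[n-M]
            y = y - np.concatenate((np.zeros(M), y[:-M]))
        up = np.zeros(len(y) * R)
        up[::R] = y                           # zero-stuff by the rate factor R
        for _ in range(N):                    # integrator stages at the high rate
            up = np.cumsum(up)
        return up * R / (R * M) ** N          # normalize the DC gain (RM)^N / R

    t = np.arange(64)
    fine = cic_interpolate(np.sin(2 * np.pi * t / 32))   # 8x oversampled sine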

  16. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation on concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  17. High-NA metrology and sensing on Berkeley MET5

    NASA Astrophysics Data System (ADS)

    Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick

    2017-03-01

    In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.

  18. An Advanced Approach to Simultaneous Monitoring of Multiple Bacteria in Space

    NASA Technical Reports Server (NTRS)

    Eggers, M.

    1998-01-01

    The utility of a novel microarray-based microbial analyzer was demonstrated by the rapid detection, imaging, and identification of a mixture of microorganisms found in a waste water sample from the Lunar-Mars Life Support Test Project through the synergistic combination of: (1) judicious RNA probe selection via algorithms developed by University of Houston scientists; (2) tuned surface chemistries developed by Baylor College of Medicine scientists to facilitate hybridization of rRNA targets to DNA probes under very low salt conditions, thereby minimizing secondary structure; and (3) integration of the microarray printing and detection/imaging instrumentation by Genometrix to complete the quantitative analysis of microorganism mixtures.

  19. Use of the MATRIXx Integrated Toolkit on the Microwave Anisotropy Probe Attitude Control System

    NASA Technical Reports Server (NTRS)

    Ward, David K.; Andrews, Stephen F.; McComas, David C.; ODonnell, James R., Jr.

    1999-01-01

    Recent advances in analytical software tools allow the analysis, simulation, flight code, and documentation of an algorithm to be generated from a single source, all within one integrated analytical design package. NASA's Microwave Anisotropy Probe project has used one such package, Integrated Systems' MATRIXx suite, in the design of the spacecraft's Attitude Control System. The project's experience with the linear analysis, simulation, code generation, and documentation tools will be presented and compared with more traditional development tools. In particular, the quality of the flight software generated will be examined in detail. Finally, lessons learned on each of the tools will be shared.

  20. Screening prostate cancer using a portable near infrared scanning imaging unit with an optical fiber-based rectal probe

    NASA Astrophysics Data System (ADS)

    Pu, Yang; Wang, Wubao; Tang, Guichen; Budansky, Yury; Sharonov, Mikhail; Xu, Min; Achilefu, Samuel; Eastham, James A.; Alfano, Robert R.

    2012-01-01

    A portable near infrared scanning polarization imaging unit with an optical fiber-based rectal probe, namely the Photonic Finger, was designed and developed to locate the 3D position of abnormal prostate sites inside normal prostate tissue. An inverse algorithm, Optical Tomography using Independent Component Analysis (OPTICA), was improved, particularly to unmix the signal from targets (cancerous tissue) embedded in a turbid medium (normal tissue) in the backscattering imaging geometry. The Photonic Finger combined with OPTICA was tested to characterize different targets inside different tissue media, including cancerous prostate tissue embedded in a large piece of normal tissue.

  1. Face recognition from unconstrained three-dimensional face images using multitask sparse representation

    NASA Astrophysics Data System (ADS)

    Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar

    2018-01-01

    We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust feature (SURF) algorithm to the depth representation of shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoints descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
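
    The classification step described above can be sketched in a few lines. In the toy example below (a hedged stand-in, not the authors' pipeline), random vectors substitute for SURF keypoint descriptors, greedy matching pursuit substitutes for whatever sparse solver was used, and the probe is assigned to the class whose gallery atoms alone best reconstruct it.

    import numpy as np

    rng = np.random.default_rng(2)
    n_classes, per_class, dim = 5, 20, 64
    D = rng.standard_normal((dim, n_classes * per_class))
    D /= np.linalg.norm(D, axis=0)            # one column per gallery descriptor
    labels = np.repeat(np.arange(n_classes), per_class)

    def classify(probe, k=10):
        residual, coef = probe.copy(), np.zeros(D.shape[1])
        for _ in range(k):                    # greedy sparse coding
            j = int(np.argmax(np.abs(D.T @ residual)))
            coef[j] += D[:, j] @ residual
            residual = probe - D @ coef
        # class with the smallest class-restricted reconstruction residual wins
        errs = [np.linalg.norm(probe - D[:, labels == c] @ coef[labels == c])
                for c in range(n_classes)]
        return int(np.argmin(errs))

    print(classify(D[:, 7] + 0.05 * rng.standard_normal(dim)))   # expect class 0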

  2. Revisiting Mitochondrial pH with an Improved Algorithm for Calibration of the Ratiometric 5(6)-carboxy-SNARF-1 Probe Reveals Anticooperative Reaction with H+ Ions and Warrants Further Studies of Organellar pH

    PubMed Central

    Żurawik, Tomasz Michał; Pomorski, Adam; Belczyk-Ciesielska, Agnieszka; Goch, Grażyna; Niedźwiedzka, Katarzyna; Kucharczyk, Róża; Krężel, Artur; Bal, Wojciech

    2016-01-01

    Fluorescence measurements of pH and other analytes in the cell rely on accurate calibrations, but these have routinely used algorithms that inadequately describe the properties of indicators. Here, we have established a more accurate method for calibrating and analyzing data obtained using the ratiometric probe 5(6)-carboxy-SNARF-1. We tested the implications of this novel approach for measurements of pH in yeast mitochondria, a compartment containing a small number of free H+ ions. Our findings demonstrate that 5(6)-carboxy-SNARF-1 interacts with H+ ions inside the mitochondria in an anticooperative manner (Hill coefficient n of 0.5) and the apparent pH inside the mitochondria is ~0.5 unit lower than has been generally assumed. This result, at odds with the current consensus on the mechanism of energy generation in the mitochondria, is in better agreement with theoretical considerations and warrants further studies of organellar pH. PMID:27557123
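
    A generic Hill-type ratiometric calibration can be sketched as follows. This is an illustration only: the parameter values are hypothetical, sign conventions vary between instruments and wavelength pairs, and the authors' actual calibration model may differ in detail. The 1/n factor is what distinguishes the Hill-type form from the conventional n = 1 calibration.

    import numpy as np

    def ph_from_ratio(R, pKa=7.5, R_acid=1.6, R_base=0.2, n=0.5):
        # R is the measured fluorescence ratio; R_acid and R_base are its
        # acidic and basic limits. n = 1 recovers the conventional calibration;
        # n ~ 0.5 corresponds to the anticooperative binding reported above.
        return pKa + (1.0 / n) * np.log10((R_acid - R) / (R - R_base))

    print(ph_from_ratio(0.9))   # midpoint ratio gives pH = pKa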

  3. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced, with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.

  4. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    NASA Astrophysics Data System (ADS)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.

  5. Precise algorithm to generate random sequential adsorption of hard polygons at saturation.

    PubMed

    Zhang, G

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
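
    The RSA process itself is easy to illustrate. The sketch below runs plain RSA for hard disks in the unit square with a finite trial budget, so it only approaches saturation by brute force; the paper's contribution is precisely an algorithm that reaches the saturation limit exactly. Disk radius and trial count are arbitrary, and edge effects are ignored.

    import numpy as np

    rng = np.random.default_rng(3)
    radius, centers = 0.03, []
    for _ in range(50_000):
        p = rng.random(2)
        # accept the trial disk only if it overlaps no previously placed disk
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centers):
            centers.append(p)
    print(len(centers), "disks; covered fraction ~", len(centers) * np.pi * radius**2)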

  6. Radiolabeled inorganic nanoparticles for positron emission tomography imaging of cancer: an overview

    PubMed Central

    CHAKRAVARTY, Rubel; GOEL, Shreya; DASH, Ashutosh; CAI, Weibo

    2017-01-01

    Over the last few years, a plethora of radiolabeled inorganic nanoparticles have been developed and evaluated for their potential use as probes in positron emission tomography (PET) imaging of a wide variety of cancers. Inorganic nanoparticles represent an emerging paradigm in molecular imaging probe design, allowing the incorporation of various imaging modalities, targeting ligands, and therapeutic payloads into a single vector. A major challenge in this endeavor is to develop disease-specific nanoparticles with facile and robust radiolabeling strategies. Also, the radiolabeled nanoparticles should demonstrate adequate in vitro and in vivo stability, enhanced sensitivity for detection of disease at an early stage, optimized in vivo pharmacokinetics for reduced non-specific organ uptake, and improved targeting for achieving high efficacy. Owing to these challenges and other technological and regulatory issues, only a single radiolabeled nanoparticle formulation, namely “C-dots” (Cornell dots), has found its way into clinical trials thus far. This review describes the available options for radiolabeling of nanoparticles and summarizes the recent developments in PET imaging of cancer in preclinical and clinical settings using radiolabeled nanoparticles as probes. The key considerations toward clinical translation of these novel PET imaging probes are discussed, which will be beneficial for advancement of the field. PMID:28124549

  7. Drinking Water Supply without Use of a Disinfectant

    NASA Astrophysics Data System (ADS)

    Rajnochova, Marketa; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    The paper focuses on the issue of drinking water supply without the use of any disinfectants. Before the public water supply network operator begins to consider switching to operation without chemical disinfection, an initial assessment should be made of whether or not the water supply system in question is suitable for this type of operation. The assessment is performed by applying a decision algorithm. The initial assessment is followed by another decision algorithm, which serves for managing and controlling the process of switching to drinking water supply without use of a disinfectant. The paper also summarizes previous experience and knowledge of public water supply systems operated in this way in the Czech Republic.

  8. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  9. Polynomial-Time Algorithms for Building a Consensus MUL-Tree

    PubMed Central

    Cui, Yun; Jansson, Jesper

    2012-01-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134

  10. Polynomial-time algorithms for building a consensus MUL-tree.

    PubMed

    Cui, Yun; Jansson, Jesper; Sung, Wing-Kin

    2012-09-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.

  11. Recognition of Protein-coding Genes Based on Z-curve Algorithms

    PubMed Central

    Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling

    2014-01-01

    Recognition of protein-coding genes, a classical bioinformatics issue, is an essential step in annotating newly sequenced genomes. The Z-curve algorithm, one of the most effective methods for this task, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027

  12. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
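
    The kind of empirical relation described above can be sketched with a simple regression. Everything below is invented for illustration (synthetic "data" and an assumed linear form relating the scattering coefficient b to the beam attenuation coefficient c); it is not the WOOD project's actual fit, only the pattern such algorithms follow.

    import numpy as np

    rng = np.random.default_rng(4)
    c = np.sort(0.05 + rng.random(50))            # beam attenuation c, 1/m (synthetic)
    b_true = 0.8 * c - 0.02                       # assumed underlying relation
    b_obs = b_true * (1 + 0.05 * rng.standard_normal(50))   # ~5% measurement scatter

    alpha, beta = np.polyfit(c, b_obs, 1)         # least-squares linear fit
    print(f"b ~ {alpha:.2f} c {beta:+.3f}")       # recovered empirical algorithm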

  13. Probing the interaction between nanoparticles and lipid membranes by quartz crystal microbalance with dissipation monitoring

    NASA Astrophysics Data System (ADS)

    Yousefi, Nariman; Tufenkji, Nathalie

    2016-12-01

    There is increasing interest in using quartz crystal microbalance with dissipation monitoring (QCM-D) to investigate the interaction of nanoparticles (NPs) with model surfaces. The high sensitivity, ease of use and the ability to monitor interactions in real-time has made it a popular technique for colloid chemists, biologists, bioengineers and biophysicists. QCM-D has been recently used to probe the interaction of NPs with supported lipid bilayers (SLBs) as model cell membranes. The interaction of NPs with SLBs is highly influenced by the quality of the lipid bilayers. Unlike many surface sensitive techniques, using QCM-D, the quality of SLBs can be assessed in real-time, hence QCM-D studies on SLB-NP interactions are less prone to the artefacts arising from bilayers that are not well formed. The ease of use and commercial availability of a wide range of sensor surfaces also have made QCM-D a versatile tool for studying NP interactions with lipid bilayers. In this review, we summarize the state-of-the-art on QCM-D based techniques for probing the interactions of NPs with lipid bilayers.

  14. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    NASA Astrophysics Data System (ADS)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is complicated to implement and computationally intensive, and there is currently no published, fast and easily implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
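
    The PoCA step discussed above is pure geometry: given the incoming and outgoing muon tracks (each a point plus a direction), the scattering vertex is placed at the midpoint of the mutual closest-approach segment. The sketch below shows that step only; full reconstructions also accumulate scattering-angle statistics per voxel.

    import numpy as np

    def poca(p1, d1, p2, d2):
        # closest approach between two 3D lines; degenerate for parallel tracks
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        b = d1 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = 1.0 - b * b               # a = c = 1 for unit directions
        s = (b * e - d) / denom
        t = (e - b * d) / denom
        return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

    # Tracks crossing at (0, 0, 5):
    print(poca(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
               np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 1.0])))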

  15. Review of TRMM/GPM Rainfall Algorithm Validation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2004-01-01

    A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.

  16. Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2001-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.

  17. Algorithms for searching Fast radio bursts and pulsars in tight binary systems.

    NASA Astrophysics Data System (ADS)

    Zackay, Barak

    2017-01-01

    Fast radio bursts (FRBs) are exciting, recently discovered astrophysical transients whose origins are unknown. Currently, these bursts are believed to be coming from cosmological distances, allowing us to probe the electron content on cosmological length scales. Even though their precise localization is crucial for the determination of their origin, radio interferometers have not been extensively employed in searching for them due to computational limitations. I will briefly present the Fast Dispersion Measure Transform (FDMT) algorithm, which reduces the operation count of blind incoherent dedispersion by 2-3 orders of magnitude. In addition, FDMT enables probing the unexplored domain of sub-microsecond astrophysical pulses. Pulsars in tight binary systems are among the most important astrophysical objects as they provide our best tests of general relativity in the strong field regime. I will provide a preview of a novel algorithm that enables the detection of pulsars in short binary systems using observation times longer than an orbital period. Current pulsar search programs limit their searches to integration times shorter than a few percent of the orbital period. Until now, searching for pulsars in binary systems using observation times longer than an orbital period was considered impossible, as one has to blindly enumerate all options for the Keplerian parameters, the pulsar rotation period, and the unknown DM. Using the current state-of-the-art pulsar search techniques and all computers on Earth, such an enumeration would take longer than a Hubble time. I will demonstrate that, using the new algorithm, it is possible to conduct such an enumeration on a laptop using real data of the double pulsar PSR J0737-3039. Among the other applications of this algorithm are: (1) searching for all pulsars at all sky positions in gamma ray observations of the Fermi LAT satellite; (2) blind searching for continuous gravitational wave sources emitted by pulsars with non-axisymmetric matter distributions. Previous attempts to conduct all of the above searches contained substantial sensitivity compromises.
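
    For context, here is a hedged sketch of the brute-force incoherent dedispersion that FDMT accelerates: each frequency channel of a dynamic spectrum is shifted by the cold-plasma dispersion delay for a trial DM and the channels are summed. Array shapes and the wrap-around edge handling are simplifications invented for illustration; this is not FDMT itself.

    import numpy as np

    def dedisperse(dyn, freqs_mhz, dm, dt):
        # delay relative to the highest frequency, in seconds:
        # t = 4.149e3 s * DM * (f^-2 - f_ref^-2), with f in MHz, DM in pc/cm^3
        freqs_mhz = np.asarray(freqs_mhz, dtype=float)
        delays = 4.149e3 * dm * (freqs_mhz**-2.0 - freqs_mhz.max()**-2.0)
        shifts = np.round(delays / dt).astype(int)
        out = np.zeros(dyn.shape[1])
        for ch, s in enumerate(shifts):
            out += np.roll(dyn[ch], -s)   # align each channel to the reference
        return out

    # e.g., for an invented 64-channel spectrogram `dyn` sampled every dt seconds:
    # profile = dedisperse(dyn, np.linspace(1200, 1500, 64), dm=500.0, dt=1e-3)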

  18. VLA Imaging of Protoplanetary Environments

    NASA Technical Reports Server (NTRS)

    Wilner, David J.

    2004-01-01

    We summarize the major accomplishments of our program to use high angular resolution observations at millimeter wavelengths to probe the structure of protoplanetary disks in nearby regions of star formation. The primary facilities used in this work were the Very Large Array (VLA) of the National Radio Astronomy Observatories (NRAO) located in New Mexico, and the recently upgraded Australia Telescope Compact Array (ATCA), located in Australia (to access sources in the far southern sky). We used these facilities to image thermal emission from dust particles in disks at long millimeter wavelengths, where the emission is optically thin and probes the full disk volume, including the inner regions of planet formation that remain opaque at shorter wavelengths. The best resolution obtained with the VLA is comparable to the size scales of the orbits of giant planets in our Solar System (< 10 AU).

  19. Feshbach Prize: New Phenomena and New Physics from Strongly-Correlated Quantum Matter

    NASA Astrophysics Data System (ADS)

    Carlson, Joseph A.

    2017-01-01

    Strongly correlated quantum matter is ubiquitous in physics, from cold atoms to nuclei to the cold dense matter found in neutron stars. Experiments ranging from table-top setups to extremely large facilities, including FRIB and LIGO, will help determine the properties of matter across an incredible range of distances and energies. Questions to be addressed include the existence of exotic states of matter in cold atoms and nuclei, the response of this correlated matter to external probes, and the behavior of matter in extreme astrophysical environments. A more complete understanding is required, both to explain these diverse phenomena and to employ this understanding to probe for new underlying physics in experiments including neutrinoless double beta decay and accelerator neutrino experiments. I will summarize some aspects of our present understanding and highlight several important prospects for the future.

  20. Research on the relationship of the probe system for the swing arm profilometer based on the point source microscope

    NASA Astrophysics Data System (ADS)

    Gao, Mingxing; Jing, Hongwei; Cao, Xuedong; Chen, Lin; Yang, Jie

    2015-08-01

    When using the swing arm profilometer (SAP) to measure aspheric and off-axis aspheric mirrors, the error in the effective arm length of the SAP has an obvious influence on the measurement result. In order to reduce this influence and increase the measurement accuracy of the SAP, a laser tracker is adopted to measure the effective arm length. Because the spatial position relationship of the probe system for the SAP must be measured before using the laser tracker, the point source microscope (PSM) is used to measure this positional relationship. The measurement principle of the PSM and other applications are introduced; the accuracy and repeatability of this technology are analysed; and the advantages and disadvantages of this technology are summarized.

  1. Porous plug for Gravity Probe B

    NASA Astrophysics Data System (ADS)

    Wang, Suwen; Everitt, C. W. Francis; Frank, David J.; Lipa, John A.; Muhlfelder, Barry F.

    2015-11-01

    The confinement of superfluid helium for a Dewar in space poses a unique challenge due to its propensity to minimize thermal gradients by essentially viscous-free counterflow. This poses the risk of losing liquid through a vent pipe, reducing the efficiency of the cooling process. To confine the liquid helium in the Gravity Probe B (GP-B) flight Dewar, a porous plug technique was invented at Stanford University. Here, we review the history of the porous plug and its development, and describe the physics underlying its operation. We summarize a few missions that employed porous plugs, some of which preceded the launch of GP-B. The design, manufacture and flight performance of the GP-B plug are described, and its use resulted in the successful operation of the 2441 l flight Dewar on-orbit for 17.3 months.

  2. Advances in the development of an imaging device for plaque measurement in the area of the carotid artery.

    PubMed

    Ličev, Lačezar; Krumnikl, Michal; Škuta, Jaromír; Babiuch, Marek; Farana, Radim

    2014-03-04

    This paper describes the advances in the development and subsequent testing of an imaging device for three-dimensional ultrasound measurement of atherosclerotic plaque in the carotid artery. The embolization from the atherosclerotic carotid plaque is one of the most common causes of ischemic stroke and, therefore, we consider the measurement of the plaque as extremely important. The paper describes the proposed hardware for enhancing the standard ultrasonic probe to provide a possibility of accurate probe positioning and synchronization with the cardiac activity, allowing the precise plaque measurements that were impossible with the standard equipment. The synchronization signal is derived from the output signal of the patient monitor (electrocardiogram (ECG)), processed by a microcontroller-based system, generating the control commands for the linear motion moving the probe. The controlling algorithm synchronizes the movement with the ECG waveform to obtain clear images not disturbed by the heart activity.

  3. Space pruning monotonic search for the non-unique probe selection problem.

    PubMed

    Pappalardo, Elisa; Ozkok, Beyza Ahlatcioglu; Pardalos, Panos M

    2014-01-01

    Identification of targets, generally viruses or bacteria, in a biological sample is a relevant problem in medicine. Biologists can use hybridisation experiments to determine whether a specific DNA fragment, which represents the virus, is present in a DNA solution. A probe is a segment of DNA or RNA, labelled with a radioactive isotope, dye or enzyme, used to find a specific target sequence on a DNA molecule by hybridisation. Selecting unique probes through hybridisation experiments is a difficult task, especially when targets have a high degree of similarity, for instance in the case of closely related viruses. After preliminary experiments performed with a canonical Monte Carlo method with Heuristic Reduction (MCHR), a new combinatorial optimisation approach, the Space Pruning Monotonic Search (SPMS) method, is introduced. The experiments show that SPMS provides high quality solutions and outperforms the current state-of-the-art algorithms.
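
    The combinatorial core of the problem can be illustrated with a greedy cover, picking probes until every pair of targets is distinguished by at least one probe's hybridisation pattern. The tiny incidence table below is invented, and the greedy heuristic is a simple stand-in; SPMS itself searches this space far more carefully.

    targets = ["v1", "v2", "v3"]
    hybridises = {"p1": {"v1", "v2"}, "p2": {"v2", "v3"}, "p3": {"v1"}}

    # all target pairs still needing a separating probe (assumes one exists)
    pairs = {(a, b) for i, a in enumerate(targets) for b in targets[i + 1:]}
    chosen = []
    while pairs:
        best = max(hybridises, key=lambda p: sum(
            (a in hybridises[p]) != (b in hybridises[p]) for a, b in pairs))
        chosen.append(best)
        pairs = {(a, b) for a, b in pairs
                 if (a in hybridises[best]) == (b in hybridises[best])}
    print(chosen)   # a probe set whose joint pattern distinguishes all targets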

  4. Prototype of a single probe Compton camera for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.

    2017-02-01

    Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source, with an angular resolution measure (ARM) of ∼22.1°, and the effectiveness of the proposed system.

  5. Quantitative ptychographic reconstruction by applying a probe constraint

    NASA Astrophysics Data System (ADS)

    Reinhardt, J.; Schroer, C. G.

    2018-04-01

    The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology or materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint in the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results obtained with different types of constraint. Achieving quantitative absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, or in general when different data sets are compared.

  6. Analysis of Radiation Damage in Light Water Reactors: Comparison of Cluster Analysis Methods for the Analysis of Atom Probe Data.

    PubMed

    Hyde, Jonathan M; DaCosta, Gérald; Hatzoglou, Constantinos; Weekes, Hannah; Radiguet, Bertrand; Styman, Paul D; Vurpillot, Francois; Pareige, Cristelle; Etienne, Auriane; Bonny, Giovanni; Castin, Nicolas; Malerba, Lorenzo; Pareige, Philippe

    2017-04-01

    Irradiation of reactor pressure vessel (RPV) steels causes the formation of nanoscale microstructural features (termed radiation damage), which affect the mechanical properties of the vessel. A key tool for characterizing these nanoscale features is atom probe tomography (APT), due to its high spatial resolution and the ability to identify different chemical species in three dimensions. Microstructural observations using APT can underpin development of a mechanistic understanding of defect formation. However, with atom probe analyses there are currently multiple methods for analyzing the data. This can result in inconsistencies between results obtained from different researchers and unnecessary scatter when combining data from multiple sources. This makes interpretation of results more complex and calibration of radiation damage models challenging. In this work simulations of a range of different microstructures are used to directly compare different cluster analysis algorithms and identify their strengths and weaknesses.

  7. Automated general temperature correction method for dielectric soil moisture sensors

    NASA Astrophysics Data System (ADS)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements of local to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it was found that the actual daily average SWC is altered by sensor temperature effects, with an error comparable to the manufacturer's stated accuracy of ±1%.
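
    A generic version of such a correction can be sketched as a regression of apparent SWC on the temperature anomaly over a window where the true SWC is assumed constant, with the fitted dependence then subtracted. The data below are synthetic and the linear model is a stand-in, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(5)
    temp = 15 + 10 * np.sin(np.linspace(0, 6 * np.pi, 300))   # diurnal cycles, degC
    swc_true = 0.25                                           # assumed constant SWC
    swc_obs = (swc_true + 0.004 * (temp - temp.mean())        # temperature artifact
               + 0.001 * rng.standard_normal(300))            # sensor noise

    slope, intercept = np.polyfit(temp - temp.mean(), swc_obs, 1)
    swc_corr = swc_obs - slope * (temp - temp.mean())         # remove fitted effect
    print(round(swc_corr.std(), 5), "vs", round(swc_obs.std(), 5))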

  8. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and, hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos and the pre-processing and restoration applied to them, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  9. Specific identification of human papillomavirus type in cervical smears and paraffin sections by in situ hybridization with radioactive probes: a preliminary communication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, J.; Gendelman, H.E.; Naghashfar, Z.

    1985-01-01

    Cervical Papanicolaou smears and paraffin sections of biopsy specimens obtained from women attending dysplasia clinics were examined for viral DNA sequences by the in situ hybridization technique using ³⁵S-labeled cloned recombinant DNA probes of human papillomavirus (HPV) types 6, 11, and 16. These and one unrelated DNA probe complementary to measles virus RNA were labeled by nick translation using either one or two ³⁵S-labeled nucleotides. Paraffin sections and cervical smears were collected on pretreated slides, hybridized with the probes under stringent or nonstringent conditions for 50 h, and autoradiographed. Additional cervical specimens from the same women were examined for the presence of genus-specific papillomavirus capsid antigen by the immunoperoxidase technique. Preliminary results may be summarized as follows. The infecting virus could be identified in smears as well as in sections. Viral DNA sequences were detected only when there were condylomatous cells in the specimen, and in only a proportion of the condylomatous cells. Even under stringent conditions, some specimens reacted with both HPV-6 and HPV-11. In some instances, the cells did not hybridize with any of the three probes even when duplicate specimens contained frankly condylomatous, capsid antigen-positive cells. In situ hybridization of Papanicolaou smears or of tissue sections is a practical method for diagnosis and follow-up of specific papillomavirus infection using routinely collected material.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nabeel A. Riza

    The goals of the Year 2006 Continuation Phase 2 three-month period (April 1 to Sept. 30) of this project were to (a) conduct a probe-elements industrial-environment feasibility study and (b) fabricate embedded optical phase or microstructured SiC chips for individual gas species sensing. Specifically, SiC chips for temperature and pressure probe industrial applications were batch fabricated. Next, these chips were subjected to a quality test for use in the probe sensor. A batch of the best chips for the probe design were selected and subjected to further tests that included sensor performance under corrosive chemical exposure, power plant soot exposure, light polarization variations, and extreme temperature soaking. Experimental data were investigated in detail to analyze these industrial parameters relevant to a power plant. A probe design was provided to overcome mechanical vibrations. All these goals have been achieved and are described in detail in the report. The other main focus of the reported work is the modification of the SiC chip by fabricating an embedded optical phase or microstructures within the chip to enable gas species sensing under high temperature and pressure. This was done in the Kar UCF Lab using a laser-based system whose design and operation are explained. Experimental data from the embedded optical phase-based chip for changing temperatures are provided and shown to be isolated from gas pressure and species. These design and experimentation results are summarized to give positive conclusions on the proposed high-temperature, high-pressure gas species detection optical sensor technology.

  11. Integrated miniature fluorescent probe to leverage the sensing potential of ZnO quantum dots for the detection of copper (II) ions.

    PubMed

    Ng, Sing Muk; Wong, Derrick Sing Nguong; Phung, Jane Hui Chiun; Chin, Suk Fun; Chua, Hong Siang

    2013-11-15

    Quantum dots are fluorescent semiconductor nanoparticles that can be utilised for sensing applications. This paper evaluates the ability to leverage their analytical potential using an integrated fluorescent sensing probe that is portable, cost effective and simple to handle. ZnO quantum dots were prepared using the simple sol-gel hydrolysis method at ambient conditions and found to be significantly and specifically quenched by copper (II) ions. This ZnO quantum dot system has been incorporated into an in-house developed miniature fluorescent probe for the detection of copper (II) ions in aqueous medium. The probe was developed using a low-power handheld black light as the excitation source and three photo-detectors as sensors. The sensing chamber placed between the light source and detectors was made of 4-sided clear quartz windows. The chamber was housed within a dark compartment to avoid stray light interference. The probe was operated using a microcontroller (Arduino Uno Revision 3) that had been programmed with the analytical response and the working algorithm of the electronics. The probe was powered by a 12 V rechargeable battery pack and the analytical readouts were given directly on an LCD display panel. Analytical optimisations of the ZnO quantum dot system and the probe have been performed and are further described. The probe was found to have a linear response range up to 0.45 mM (R² = 0.9930) towards copper (II) ions with a limit of detection of 7.68 × 10⁻⁷ M. The probe showed highly repeatable and reliable performance. Copyright © 2013 Elsevier B.V. All rights reserved.
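
    The reported linear range, R², and detection limit follow from a standard calibration workflow, sketched below with the common 3σ/slope convention for the limit of detection. All concentrations, intensities, and the blank standard deviation here are invented placeholders, not the paper's measurements.

```python
import numpy as np

# Illustrative quenching calibration for a fluorescent probe: fit the linear
# part of the response, then estimate the limit of detection with the common
# 3*sigma/slope convention. The numbers below are made-up placeholders.
conc = np.array([0.0, 0.05, 0.10, 0.20, 0.30, 0.45])     # mM Cu(II)
signal = np.array([1.00, 0.91, 0.83, 0.66, 0.50, 0.25])  # normalised intensity

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

sigma_blank = 0.002     # std. dev. of repeated blank readings (placeholder)
lod = 3 * sigma_blank / abs(slope)
print(f"R^2 = {r2:.4f}, LOD = {lod:.2e} mM")
```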

  12. Analysis of a non-storm time enhancement in outer belt electrons

    NASA Astrophysics Data System (ADS)

    Schiller, Q.; Li, X.; Godinez, H. C.; Sarris, T. E.; Tu, W.; Malaspina, D.; Turner, D. L.; Blake, J. B.; Koller, J.

    2014-12-01

    A high-speed solar wind stream impacted Earth's magnetosphere on January 13th, 2013, and is associated with a large enhancement (>2.5 orders of magnitude) of outer radiation belt electron fluxes despite a small Dst signature (-30 nT). Fortunately, the outer belt was well sampled by a variety of missions during the event, including the Van Allen Probes, THEMIS, and the Colorado Student Space Weather Experiment (CSSWE). In situ flux and phase space density observations are used from MagEIS (Magnetic Electron Ion Spectrometer) onboard the Van Allen Probes, REPTile (Relativistic Electron and Proton Telescope integrated little experiment) onboard CSSWE, and SST onboard THEMIS. The observations show a rapid increase in 100s of keV electron fluxes, followed by a more gradual enhancement at MeV energies. The 100s of keV enhancement is associated with a substorm injection, and the further energization to MeV energies is associated with wave activity as measured by the Van Allen Probes and THEMIS. Furthermore, the phase space density radial profiles show an acceleration region occurring between 5

  13. Rotorcraft Brownout: Advanced Understanding, Control and Mitigation

    DTIC Science & Technology

    2008-12-31

    the Gauss-Seidel iterative method. The overall steps of the SIMPLER algorithm can be summarized as: 1. Guess the velocity field, 2. Calculate the momentum... techniques and numerical methods, and the team will begin to develop a methodology that is capable of integrating these solutions and highlighting... rotorcraft design optimization techniques will then be undertaken using the validated computational methods.

  14. GSFC Technology Development Center Report

    NASA Technical Reports Server (NTRS)

    Himwich, Ed; Gipson, John

    2013-01-01

    This report summarizes the activities of the GSFC Technology Development Center (TDC) for 2012 and forecasts planned activities for 2013. The GSFC TDC develops station software, including the Field System (FS) and scheduling software (SKED); hardware, including tools for station timing and meteorology; scheduling algorithms; and operational procedures. It provides a pool of individuals to assist with station implementation, check-out, upgrades, and training.

  15. Scheduling language and algorithm development study. Volume 1: Study summary and overview

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A high-level computer programming language and a program library were developed to be used in writing programs for scheduling complex systems such as the space transportation system. The objectives and requirements of the study are summarized, and unique features of the specified language and program library are described and related to the rationale behind those objectives and requirements.

  16. Magnetic Resonance Poroelastography: An Algorithm for Estimating the Mechanical Properties of Fluid-Saturated Soft Tissues

    PubMed Central

    Perriñez, Phillip R.; Kennedy, Francis E.; Van Houten, Elijah E. W.; Weaver, John B.; Paulsen, Keith D.

    2010-01-01

    Magnetic Resonance Poroelastography (MRPE) is introduced as an alternative to single-phase model-based elastographic reconstruction methods. A three-dimensional (3D) finite element poroelastic inversion algorithm was developed to recover the mechanical properties of fluid-saturated tissues. The performance of this algorithm was assessed through a variety of numerical experiments, using synthetic data to probe its stability and sensitivity to the relevant model parameters. Preliminary results suggest the algorithm is robust in the presence of noise and capable of producing accurate assessments of the underlying mechanical properties in simulated phantoms. Further, a 3D time-harmonic motion field was recorded for a poroelastic phantom containing a single cylindrical inclusion and used to assess the feasibility of MRPE image reconstruction from experimental data. The elastograms obtained from the proposed poroelastic algorithm demonstrate significant improvement over linearly elastic MRE images generated using the same data. In addition, MRPE offers the opportunity to estimate the time-harmonic pressure field resulting from tissue excitation, highlighting the potential for its application in the diagnosis and monitoring of disease processes associated with changes in interstitial pressure. PMID:20199912

  17. Determination of propranolol hydrochloride in pharmaceutical preparations using near infrared spectrometry with fiber optic probe and multivariate calibration methods.

    PubMed

    Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson

    2015-01-01

    A method for the determination of propranolol hydrochloride in pharmaceutical preparations using near-infrared spectrometry with a fiber optic probe (FTNIR/PROBE) combined with chemometric methods was developed. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Treatments based on mean-centered data and multiplicative scatter correction (MSC) were selected for model construction. A root mean square error of prediction (RMSEP) of 8.2 mg g⁻¹ was achieved using the siPLS (s2i20PLS) algorithm with the spectra divided into 20 intervals and a combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm⁻¹). Results obtained by the proposed method were compared with those of the pharmacopoeia reference method, and no significant difference was observed. Therefore, the proposed method allows a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it is possible to carry out on-line analysis of this active principle in pharmaceutical formulations with the use of a fiber optic probe.
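
    A toy version of the synergy-interval selection can be sketched with scikit-learn: split the spectrum into 20 intervals, fit a PLS model on every pair of intervals, and keep the pair with the lowest cross-validated error. The interval count, component number, and synthetic spectra below are assumptions for illustration, not the study's data or software.

```python
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def sipls_pairs(X, y, n_intervals=20, n_components=3, cv=5):
    """Toy synergy-interval PLS: evaluate every pair of spectral intervals
    and return the pair with the lowest cross-validated RMSEP.
    X: (samples, wavelengths) spectra; y: (samples,) reference values.
    """
    bounds = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    best = (np.inf, None)
    for i, j in combinations(range(n_intervals), 2):
        cols = np.r_[np.arange(bounds[i], bounds[i + 1]),
                     np.arange(bounds[j], bounds[j + 1])]
        y_hat = cross_val_predict(PLSRegression(n_components), X[:, cols], y, cv=cv)
        rmsep = np.sqrt(np.mean((y - y_hat.ravel()) ** 2))
        if rmsep < best[0]:
            best = (rmsep, (i, j))
    return best

# Synthetic demo: 40 "spectra" with the analyte signal in two intervals.
rng = np.random.default_rng(0)
y = rng.uniform(0, 1, 40)
X = rng.normal(0, 0.05, (40, 400))
X[:, 60:80] += y[:, None]           # informative region 1
X[:, 300:320] += 0.5 * y[:, None]   # informative region 2
print(sipls_pairs(X, y))            # should recover intervals 3 and 15
```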

  18. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational costs of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. © 2012 American Physical Society
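
    The baseline probing estimator that this work improves upon can be stated in a few lines: for random sign vectors z, the entrywise product z * (Az), averaged over probes, converges to the diagonal of A. The sketch below implements only that baseline; the Wiener-filter smoothing from information field theory described in the paper is not reproduced.

```python
import numpy as np

def probe_diagonal(apply_A, n, n_probes=16, rng=None):
    """Estimate diag(A) when A is available only as a matrix-vector routine.

    Standard stochastic probing: for Rademacher vectors z, the average of
    z * (A z) over probes converges to the diagonal of A (z * z = 1
    entrywise for Rademacher z, so no normalization term is needed).
    """
    rng = rng or np.random.default_rng()
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        acc += z * apply_A(z)                 # accumulate z * (A z)
    return acc / n_probes

# Demo on an explicit matrix (pretend we can only do products with it).
rng = np.random.default_rng(1)
A = np.diag(np.linspace(1, 10, 200)) + 0.05 * rng.normal(size=(200, 200))
est = probe_diagonal(lambda v: A @ v, 200, n_probes=64, rng=rng)
print(np.abs(est - np.diag(A)).mean())   # error shrinks as n_probes grows
```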

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Cheriyadat, Anil M; Bhaduri, Budhendra L

    The high rate of urbanization, political conflicts and ensuing internal displacement of population, and increased poverty in the 20th century have resulted in a rapid increase of informal settlements. These unplanned, unauthorized, and/or unstructured homes, known as informal settlements, shantytowns, barrios, or slums, pose several challenges to nations, as these settlements are often located in the most hazardous regions and lack basic services. Though several World Bank and United Nations sponsored studies stress the importance of poverty maps in designing better policies and interventions, mapping the slums of the world is a daunting and challenging task. In this paper, we summarize our ongoing research on settlement mapping through the utilization of very high resolution (VHR) remote sensing imagery. Most existing approaches used to classify VHR images are single-instance (or pixel-based) learning algorithms, which are inadequate for analyzing VHR imagery, as single pixels do not contain sufficient contextual information (see Figure 1). However, much-needed spatial contextual information can be captured via feature extraction and/or through newer machine learning algorithms in order to extract complex spatial patterns that distinguish informal settlements from formal ones. In recent years, we have made significant progress in advancing the state of the art in both directions. This paper summarizes these results.

  20. A diagnostic algorithm for atypical spitzoid tumors: guidelines for immunohistochemical and molecular assessment.

    PubMed

    Cho-Vega, Jeong Hee

    2016-07-01

    Atypical spitzoid tumors are a morphologically diverse group of rare melanocytic lesions most frequently seen in children and young adults. As atypical spitzoid tumors bear a striking resemblance to Spitz nevi and spitzoid melanomas clinically and histopathologically, it is crucial to determine their malignant potential and predict their clinical behavior. To date, many researchers have attempted to differentiate atypical spitzoid tumors from unequivocal melanomas based on morphological, immunohistochemical, and molecular diagnostic differences. A diagnostic algorithm is proposed here to assess the malignant potential of atypical spitzoid tumors by using a combination of immunohistochemical and cytogenetic/molecular tests. Together with classical morphological evaluation, this algorithm includes a set of immunohistochemistry assays (p16(Ink4a), a dual-color Ki67/MART-1, and HMB45), fluorescence in situ hybridization (FISH) with five probes (6p25, 8q24, 11q13, CEN9, and 9p21), and array-based comparative genomic hybridization. This review discusses the details of the algorithm, the rationale for each test used in the algorithm, and the utility of this algorithm in routine dermatopathology practice. This algorithmic approach will provide a comprehensive diagnostic tool that complements conventional histological criteria and will contribute significantly to improving the diagnosis and prediction of the clinical behavior of atypical spitzoid tumors.

  1. Exploring the Solar System with Stellar Occultations

    NASA Technical Reports Server (NTRS)

    Elliot, J. L.; Dunham, E. W.

    1984-01-01

    By recording the light intensity as a function of time when a planet occults a relatively bright star, the thermal structure of the upper atmosphere of the planet can be probed. The main feature of stellar occultation observations is their high spatial resolution, typically several thousand times better than the resolution achievable with ground-based imaging. Five stellar occultations have been observed, by Uranus, Mars, Pallas, Neptune, and the Jovian ring, and the main results of these observations are summarized.

  2. Dual PET and Near-Infrared Fluorescence Imaging Probes as Tools for Imaging in Oncology

    PubMed Central

    An, Fei-Fei; Chan, Mark; Kommidi, Harikrishna; Ting, Richard

    2016-01-01

    OBJECTIVE The purpose of this article is to summarize advances in PET fluorescence resolution, agent design, and preclinical imaging that make a growing case for clinical PET fluorescence imaging. CONCLUSION Existing SPECT, PET, fluorescence, and MRI contrast imaging techniques are already deeply integrated into the management of cancer, from initial diagnosis to the observation and management of metastases. Combined positron-emitting fluorescent contrast agents can convey new or substantial benefits that improve on these proven clinical contrast agents. PMID:27223168

  3. Diverse Molecular Targets for Chalcones with Varied Bioactivities

    PubMed Central

    Zhou, Bo; Xing, Chengguo

    2015-01-01

    Natural or synthetic chalcones with different substituents have revealed a variety of biological activities that may benefit human health. The underlying mechanisms of action, particularly with respect to the direct cellular targets and the modes of interaction with the targets, have not been rigorously characterized, which imposes challenges to structure-guided rational development of therapeutic agents or chemical probes with acceptable target-selectivity profile. This review summarizes literature evidence on chalcones’ direct molecular targets in the context of their biological activities. PMID:26798565

  4. Dark matter in the coming decade: Complementary paths to discovery and beyond

    DOE PAGES

    Bauer, Daniel; Buckley, James; Cahill-Rowley, Matthew; ...

    2015-05-27

    Here, we summarize the many dark matter searches currently being pursued through four complementary approaches: direct detection, indirect detection, collider experiments, and astrophysical probes. The essential features of broad classes of experiments are described, each with their own strengths and weaknesses. Furthermore, we discuss the complementarity of the different dark matter searches qualitatively and illustrate it quantitatively in two simple theoretical frameworks. Our primary conclusion is that the diversity of possible dark matter candidates requires a balanced program drawing from all four approaches.

  5. Project Summaries, 1989 - 1990

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Student designs summarized here include two undergraduate space designs and five graduate space designs from fall 1989, plus four undergraduate space designs and four undergraduate aircraft designs from spring 1990. Progress in a number of programs is described. The Geostationary Satellite Servicing Facility, the Lunar Farside Observatory and Science Base, the Texas Educational Satellite, an asteroid rendezvous vehicle, a Titan probe, a subsystems commonality assessment for lunar/Mars landers, a nuclear-thermal rocket propelled Earth-Mars vehicle, and a comprehensive orbital debris management program are among the topics discussed.

  6. The Emerging Population of Pulsar Wind Nebulae in Hard X-rays

    NASA Astrophysics Data System (ADS)

    Mattana, F.; Götz, D.; Terrier, R.; Renaud, M.; Falanga, M.

    2009-05-01

    The hard X-ray synchrotron emission from Pulsar Wind Nebulae probes energetic particles, closely related to the pulsar injection power at the present time. INTEGRAL has disclosed the as yet poorly known population of hard X-ray pulsar/PWN systems. We summarize the properties of the class, with emphasis on the first hard X-ray bow-shock (CTB 80, powered by PSR B1951+32), and highlight some prospects for the study of Pulsar Wind Nebulae with the Simbol-X mission.

  7. Direct coupling of tomography and ptychography

    DOE PAGES

    Gürsoy, Doğa

    2017-08-09

    We present a generalization of the ptychographic phase problem for recovering refractive properties of a three-dimensional object in a tomography setting. Our approach, which ignores the lateral overlapping probe requirements in existing ptychography algorithms, can enable the reconstruction of objects using highly flexible acquisition patterns and pave the way for sparse and rapid data collection with lower radiation exposure.

  8. Embedded Reasoning Supporting Aerospace IVHM

    DTIC Science & Technology

    2007-01-01

    ...method (BIT or health assessment algorithm) on which the monitoring diagnostic relies for input information... viewing of the current health state of all monitored subsystems, while also providing a means to probe deeper in the event anomalous operation is... seeks to integrate detection, diagnostic, and prognostic capabilities with a hierarchical diagnostic reasoning architecture into a single...

  9. Use of Collocated KWAJEX Satellite, Aircraft, and Ground Measurements for Understanding Ambiguities in TRMM Radiometer Rain Profile Algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Fiorino, Steven

    2002-01-01

    Coordinated ground, aircraft, and satellite observations are analyzed from the 1999 TRMM Kwajalein Atoll field experiment (KWAJEX) to better understand the relationships between cloud microphysical processes and microwave radiation intensities in the context of physical evaluation of the Level 2 TRMM radiometer rain profile algorithm and uncertainties with its assumed microphysics-radiation relationships. This talk focuses on the results of a multi-dataset analysis based on measurements from KWAJEX surface, air, and satellite platforms to test the hypothesis that uncertainties in the passive microwave radiometer algorithm (TMI 2a12 in the nomenclature of TRMM) are systematically coupled and correlated with the magnitudes of deviation of the assumed 3-dimensional microphysical properties from observed microphysical properties. Restated, this study focuses on identifying the weaknesses in the operational TRMM 2a12 radiometer algorithm, based on observed microphysics and radiation data, in terms of over-simplifications used in its theoretical microphysical underpinnings. The analysis makes use of a common transform coordinate system derived from the measuring capabilities of the aircraft radiometer used to survey the experimental study area, i.e., the 4-channel AMPR radiometer flown on the NASA DC-8 aircraft. Normalized emission and scattering indices derived from radiometer brightness temperatures at the four measuring frequencies enable a 2-dimensional coordinate system that facilitates compositing of Kwajalein S-band ground radar reflectivities, ARMAR Ku-band aircraft radar reflectivities, TMI spacecraft radiometer brightness temperatures, PR Ku-band spacecraft radar reflectivities, bulk microphysical parameters derived from the aircraft-mounted cloud microphysics laser probes (including liquid/ice water contents, effective liquid/ice hydrometeor radii, and effective liquid/ice hydrometeor variances), and rainrates derived from any of the individual ground, aircraft, or satellite algorithms applied to the radar or radiometer measurements, or their combination. The results support the study's underlying hypothesis, particularly in the context of ice phase processes, in that the cloud regions where the 2a12 algorithm's microphysical database most misrepresents the microphysical conditions as determined by the laser probes are where retrieved surface rainrates are most erroneous relative to other reference rainrates as determined by ground and aircraft radar. In reaching these conclusions, TMI and PR brightness temperatures and reflectivities have been synthesized from the aircraft AMPR and ARMAR measurements, with the analysis conducted in a composite framework to eliminate measurement noise associated with the case study approach and single element volumes obfuscated by heterogeneous beam filling effects. In diagnosing the performance of the 2a12 algorithm, weaknesses have been found in the cloud-radiation database used to provide microphysical guidance to the algorithm for upper cloud ice microphysics. It is also necessary to adjust a fractional convective rainfall factor within the algorithm somewhat arbitrarily to achieve satisfactory algorithm accuracy.

  10. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G.

    Random sequential adsorption (RSA) is a time-dependent packing process in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe it by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.

  11. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE PAGES

    Zhang, G.

    2018-04-30

    Random sequential adsorption (RSA) is a time-dependent packing process in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe it by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.
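
    For contrast with the saturation-reaching algorithm described above, a naive finite-time RSA simulation for equal disks is sketched below. It stops after a fixed run of rejected insertions and therefore only approaches the saturation coverage (roughly 0.547 for disks), which is exactly the limitation the paper's method removes; all parameters are illustrative.

```python
import numpy as np

def rsa_disks(radius=0.03, max_failures=2000, rng=None):
    """Naive finite-time RSA of equal disks in the unit square (periodic
    boundaries ignored for brevity). Placement stops after a fixed run of
    rejections, so the density only approaches saturation by extrapolation.
    """
    rng = rng or np.random.default_rng()
    centers, failures = [], 0
    while failures < max_failures:
        p = rng.uniform(radius, 1 - radius, 2)   # candidate center
        # Accept only if the candidate overlaps no previously placed disk.
        if all(np.hypot(*(p - c)) >= 2 * radius for c in centers):
            centers.append(p)
            failures = 0
        else:
            failures += 1
    density = len(centers) * np.pi * radius ** 2
    return np.array(centers), density

centers, density = rsa_disks()
print(f"{len(centers)} disks placed, covered fraction ~ {density:.3f}")
```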

  12. Polynomial interpretation of multipole vectors

    NASA Astrophysics Data System (ADS)

    Katz, Gabriel; Weeks, Jeff

    2004-09-01

    Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson Microwave Anisotropy Probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout's theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.

  13. Synopsis of moisture monitoring by neutron probe in the unsaturated zone at Area G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vold, E.

    1997-12-31

    Moisture profiles from neutron probe data provide valuable information for site characterization and supplement groundwater monitoring efforts. The neutron probe precision error (reproducibility) is found to be about 0.2 vol% under in situ field conditions where the slope of the moisture content with depth varies slowly. This error is about 2 times larger near moisture spikes (e.g., at the vapor phase notch), due to the sensitivity of the probe response to vertical position errors on the order of 0.5 inches. Calibrations were performed to correct the downhole probe response to the volumetric moisture content determined on core samples. Calibration is sensitive to borehole diameter and casing type, requiring 3 separate calibration relations for the boreholes surveyed here. Power-law fits were used for calibration in this study to ensure moisture content results greater than zero. Findings in the boreholes reported here confirm the broad features seen previously in moisture profiles at Area G: a near-surface region with large moisture variability, a very dry region at greater depths, and a moisture spike at the vapor phase notch (VPN). This feature is located near the interface between the devitrified and vitrified stratigraphic units and near the base of the mesa. This report describes the in-field calibration methods used for the neutron moisture probe measurements and summarizes preliminary results of the monitoring program in the in-situ monitoring network at Area G. Reported results cover three main areas: calibration studies, profiles from each of the vertical boreholes at Area G, and time-dependent variations in a select subset of boreholes. Results are reported here for the vertical borehole network; results from the horizontal borehole network will be described when available.

  14. Reconstruction software of the silicon tracker of DAMPE mission

    NASA Astrophysics Data System (ADS)

    Tykhonov, A.; Gallo, V.; Wu, X.; Zimmer, S.

    2017-10-01

    DAMPE is a satellite-borne experiment designed to probe astroparticle physics in the GeV-TeV energy range. The Silicon Tracker (STK) is one of the key components of DAMPE, which allows the reconstruction of trajectories (tracks) of detected particles. The non-negligible amount of material in the tracker poses a challenge to its reconstruction and alignment. In this paper we describe methods to address this challenge. We present the track reconstruction algorithm and give insight into the alignment algorithm. We also present our CAD-to-GDML converter, an in-house tool for implementing detector geometry in the software from the CAD drawings of the detector.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Day, David Minot; Mitchell, Scott A.

    This report summarizes the Combinatorial Algebraic Topology: software, applications & algorithms workshop (CAT Workshop). The workshop was sponsored by the Computer Science Research Institute of Sandia National Laboratories. It was organized by CSRI staff members Scott Mitchell and Shawn Martin. It was held in Santa Fe, New Mexico, August 29-30. The CAT Workshop website has links to some of the talk slides and other information: http://www.cs.sandia.gov/CSRI/Workshops/2009/CAT/index.html. The purpose of the report is to summarize the discussions and recap the sessions. There is a special emphasis on technical areas that are ripe for further exploration, and the plans for follow-up amongst the workshop participants. The intended audiences are the workshop participants, other researchers in the area, and the workshop sponsors.

  16. EOS Laser Atmosphere Wind Sounder (LAWS) investigation

    NASA Technical Reports Server (NTRS)

    1996-01-01

    In this final report, the set of tasks that evolved from the Laser Atmosphere Wind Sounder (LAWS) Science Team is reviewed, the major accomplishments are summarized, and a complete set of resulting references is provided. The tasks included preparation of a plan for the LAWS Algorithm Development and Evolution Laboratory (LADEL); participation in the preparation of a joint CNES/NASA proposal to build a space-based DWL; involvement in the Global Backscatter Experiments (GLOBE); evaluation of several DWL concepts including 'Quick-LAWS', SPNDL, and several direct detection technologies; and an extensive series of system trade studies and Observing System Simulation Experiments (OSSEs). In this report, some of the key accomplishments are briefly summarized with reference to interim reports, special reports, conference/workshop presentations, and publications.

  17. Computerized detection of breast cancer using resonance-frequency-based electrical impedance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Fan, Ming; Zhao, Weijie; Zheng, Bin; Li, Lihua

    2017-03-01

    This study developed and tested a multi-probe resonance-frequency-based electrical impedance spectroscopy (REIS) system aimed at the detection of breast cancer. The REIS system consists of a specially designed mechanical supporting device that can be easily lifted to fit women of different heights, a seven-probe sensor cup, and a computer providing software for system control and management. The sensor cup includes one central probe for direct contact with the nipple and six other probes uniformly distributed at a distance of 35 mm from the center probe to enable contact with the breast skin surface. It takes about 18 seconds for this system to complete a data acquisition process. We utilized this system for the examination of breast cancer, collecting a dataset of 289 cases comprising 74 biopsy-verified malignant and 215 benign tumors. From these data, 23 REIS-based features were extracted: seven frequency features, fifteen magnitude features, and an age feature. To reduce redundancy, six features were selected for classification using an evolutionary algorithm. The area under the receiver operating characteristic curve (AUC) was computed to assess classifier performance, and a multivariable logistic regression method was used for detection of the tumors. For the 23 REIS features, the AUC, accuracy, sensitivity, and specificity were 0.796, 0.727, 0.731, and 0.726, respectively. For the 6 selected features, the corresponding values were 0.840, 0.800, 0.703, and 0.833, with AUCs of 0.662 and 0.619 for the frequency-based and magnitude-based REIS features alone, respectively. The performance of the classifiers using all 6 features was significantly better than using magnitude features alone (p = 3.29e-08) or frequency features alone (p = 5.61e-07). The SMOTE algorithm was used to oversample the minority class and balance the dataset; after balancing, the AUC increased to 0.846, exceeding the classification performance on the original data. The results indicate that the REIS system is a promising tool for the detection of breast cancer and may be acceptable for clinical implementation.

  18. Estimating summary statistics for electronic health record laboratory data for use in high-throughput phenotyping algorithms

    PubMed Central

    Elhadad, N.; Claassen, J.; Perotte, R.; Goldstein, A.; Hripcsak, G.

    2018-01-01

    We study the question of how to represent or summarize raw laboratory data taken from an electronic health record (EHR) using parametric model selection to reduce or cope with biases induced through clinical care. It has been previously demonstrated that the health care process (Hripcsak and Albers, 2012, 2013), as defined by measurement context (Hripcsak and Albers, 2013; Albers et al., 2012) and measurement patterns (Albers and Hripcsak, 2010, 2012), can influence how EHR data are distributed statistically (Kohane and Weber, 2013; Pivovarov et al., 2014). We construct an algorithm, PopKLD, which is based on information criterion model selection (Burnham and Anderson, 2002; Claeskens and Hjort, 2008), is intended to reduce and cope with health care process biases and to produce an intuitively understandable continuous summary. The PopKLD algorithm can be automated and is designed to be applicable in high-throughput settings; for example, the output of the PopKLD algorithm can be used as input for phenotyping algorithms. Moreover, we develop the PopKLD-CAT algorithm that transforms the continuous PopKLD summary into a categorical summary useful for applications that require categorical data such as topic modeling. We evaluate our methodology in two ways. First, we apply the method to laboratory data collected in two different health care contexts, primary versus intensive care. We show that the PopKLD preserves known physiologic features in the data that are lost when summarizing the data using more common laboratory data summaries such as mean and standard deviation. Second, for three disease-laboratory measurement pairs, we perform a phenotyping task: we use the PopKLD and PopKLD-CAT algorithms to define high and low values of the laboratory variable that are used for defining a disease state. We then compare the relationship between the PopKLD-CAT summary disease predictions and the same predictions using empirically estimated mean and standard deviation to a gold standard generated by clinical review of patient records. We find that the PopKLD laboratory data summary is substantially better at predicting disease state. The PopKLD or PopKLD-CAT algorithms are not meant to be used as phenotyping algorithms, but we use the phenotyping task to show what information can be gained when using a more informative laboratory data summary. In the process of evaluating our method, we show that the different clinical contexts and laboratory measurements necessitate different statistical summaries. Similarly, leveraging the principle of maximum entropy, we argue that while some laboratory data only have sufficient information to estimate a mean and standard deviation, other laboratory data captured in an EHR contain substantially more information than can be captured in higher-parameter models. PMID:29369797

  19. Estimating summary statistics for electronic health record laboratory data for use in high-throughput phenotyping algorithms.

    PubMed

    Albers, D J; Elhadad, N; Claassen, J; Perotte, R; Goldstein, A; Hripcsak, G

    2018-02-01

    We study the question of how to represent or summarize raw laboratory data taken from an electronic health record (EHR) using parametric model selection to reduce or cope with biases induced through clinical care. It has been previously demonstrated that the health care process (Hripcsak and Albers, 2012, 2013), as defined by measurement context (Hripcsak and Albers, 2013; Albers et al., 2012) and measurement patterns (Albers and Hripcsak, 2010, 2012), can influence how EHR data are distributed statistically (Kohane and Weber, 2013; Pivovarov et al., 2014). We construct an algorithm, PopKLD, which is based on information criterion model selection (Burnham and Anderson, 2002; Claeskens and Hjort, 2008), is intended to reduce and cope with health care process biases and to produce an intuitively understandable continuous summary. The PopKLD algorithm can be automated and is designed to be applicable in high-throughput settings; for example, the output of the PopKLD algorithm can be used as input for phenotyping algorithms. Moreover, we develop the PopKLD-CAT algorithm that transforms the continuous PopKLD summary into a categorical summary useful for applications that require categorical data such as topic modeling. We evaluate our methodology in two ways. First, we apply the method to laboratory data collected in two different health care contexts, primary versus intensive care. We show that the PopKLD preserves known physiologic features in the data that are lost when summarizing the data using more common laboratory data summaries such as mean and standard deviation. Second, for three disease-laboratory measurement pairs, we perform a phenotyping task: we use the PopKLD and PopKLD-CAT algorithms to define high and low values of the laboratory variable that are used for defining a disease state. We then compare the relationship between the PopKLD-CAT summary disease predictions and the same predictions using empirically estimated mean and standard deviation to a gold standard generated by clinical review of patient records. We find that the PopKLD laboratory data summary is substantially better at predicting disease state. The PopKLD or PopKLD-CAT algorithms are not meant to be used as phenotyping algorithms, but we use the phenotyping task to show what information can be gained when using a more informative laboratory data summary. In the process of evaluating our method, we show that the different clinical contexts and laboratory measurements necessitate different statistical summaries. Similarly, leveraging the principle of maximum entropy, we argue that while some laboratory data only have sufficient information to estimate a mean and standard deviation, other laboratory data captured in an EHR contain substantially more information than can be captured in higher-parameter models. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
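
    The core idea of information-criterion model selection over laboratory values can be sketched briefly: fit several candidate parametric families and keep the one the criterion prefers as the summary. The candidate set, the use of AIC, and the synthetic data below are illustrative assumptions; the published PopKLD algorithm's exact criterion and pipeline differ.

```python
import numpy as np
from scipy import stats

def select_model(x, candidates=("norm", "lognorm", "gamma")):
    """Pick a parametric summary for a laboratory variable by information
    criterion, in the spirit of PopKLD (illustrative only). Returns the
    winning distribution name and its fitted parameters, chosen by
    lowest AIC = 2k - 2 log-likelihood.
    """
    best = (np.inf, None, None)
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(x)
        loglik = np.sum(dist.logpdf(x, *params))
        aic = 2 * len(params) - 2 * loglik
        if aic < best[0]:
            best = (aic, name, params)
    return best[1], best[2]

# Skewed "lab values": a plain mean/SD summary would miss the heavy tail.
x = np.random.default_rng(2).lognormal(mean=0.5, sigma=0.6, size=500)
name, params = select_model(x)
print("selected family:", name)
```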

  20. Research progress on quantum informatics and quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Yusheng

    2018-03-01

    Quantum informatics is an emerging interdisciplinary subject that developed from the combination of quantum mechanics, information science, and computer science in the 1980s. The birth and development of quantum information science has far-reaching significance for science and technology. At present, the application of quantum information technology has become a major focus of effort. The preparation, storage, purification and regulation, transmission, and quantum coding and decoding of quantum states have become hot topics for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computing and the current state of domestic and foreign research, and then introduces the basic knowledge and basic concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's quantum algorithm, and quantum phase estimation.

  1. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
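
    A minimal version of such a slow-to-fast transition detector can be written as a threshold-crossing test on a smoothed speed series. The 500 km/s threshold, the 12-hour running mean, and the crossing rule below are illustrative assumptions, not the validated event-detection rules of the report.

```python
import numpy as np

def detect_speed_transitions(v, threshold=500.0, window=12):
    """Flag slow-to-fast solar wind transitions in an hourly speed series.
    A transition is reported where the running mean crosses `threshold`
    (km/s) from below; window is the smoothing length in hours.
    """
    v = np.asarray(v, float)
    smooth = np.convolve(v, np.ones(window) / window, mode="same")
    above = smooth > threshold
    # Indices where the smoothed speed first rises above the threshold.
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

# Synthetic week of hourly wind speed with one high-speed stream arrival.
t = np.arange(168)
v = 380 + 20 * np.random.default_rng(3).normal(size=t.size)
v[80:130] += 250    # high-speed stream
print(detect_speed_transitions(v))   # ~index 80, give or take the smoothing
```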

  2. The comparison and analysis of extracting video key frame

    NASA Astrophysics Data System (ADS)

    Ouyang, S. Z.; Zhong, L.; Luo, R. Q.

    2018-05-01

    Video key frame extraction is an important part of large-scale data processing. Building on previous work in key frame extraction, we summarize four important key frame extraction algorithms; these methods largely work by comparing the difference between pairs of frames, and if the difference exceeds a threshold value, the corresponding frames are taken as distinct key frames. Following this review, a key frame extraction method based on mutual information is proposed: information entropy is introduced, appropriate threshold values are selected to form initial classes, and frames with similar mean mutual information are finally taken as candidate key frames. In this paper, these algorithms are used to extract the key frames of tunnel traffic videos. The analysis of the experimental results and the comparison of the pros and cons of these algorithms provide a good basis for practical applications.
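
    The frame-difference family of methods summarized above reduces to a few lines: compare each frame with the last key frame and promote it when the mean absolute grey-level difference exceeds a threshold. The OpenCV sketch below is a generic baseline of that kind, not any of the four algorithms compared in the paper; the file name and threshold are hypothetical.

```python
import cv2

def keyframes_by_difference(path, threshold=30.0):
    """Baseline key frame extraction by inter-frame difference: a frame
    becomes a key frame when its mean absolute grey-level difference from
    the last key frame exceeds `threshold`. Returns key frame indices.
    """
    cap = cv2.VideoCapture(path)
    keyframes, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or cv2.absdiff(grey, prev).mean() > threshold:
            keyframes.append(idx)   # store the index (or the frame itself)
            prev = grey
        idx += 1
    cap.release()
    return keyframes

print(keyframes_by_difference("tunnel_traffic.mp4"))  # hypothetical file
```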

  3. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open-source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared against CometBoards and are reported here.

  4. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize digital echocardiogram videos by temporally segmenting them into their constituent views and representing each view by its most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of echocardiogram videos. Two different criteria are used: the presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to the different echocardiogram modes present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave; the corresponding frame is chosen to be the key frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
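
    A simplified stand-in for the key-frame selection step is sketched below: R-wave peaks are located in an ECG trace with SciPy's find_peaks and mapped to the nearest video frame indices. The sampling rates, the synthetic ECG, and the peak-detection thresholds are assumptions for illustration; the paper tracks the ECG time-marker rendered in the video rather than a raw ECG signal.

```python
import numpy as np
from scipy.signal import find_peaks

def end_diastole_frames(ecg, fs_ecg, fps_video, max_rate_hz=2.0):
    """Map R-wave peaks in an ECG trace to video frame indices: the frame
    nearest each R peak is taken as the representative end-diastole frame.
    max_rate_hz caps the heart rate (2 Hz ~ 120 bpm) via peak spacing.
    """
    peaks, _ = find_peaks(ecg,
                          height=np.percentile(ecg, 95),
                          distance=int(fs_ecg / max_rate_hz))
    return np.round(peaks / fs_ecg * fps_video).astype(int)

# Synthetic ECG: 10 s at 250 Hz with one R-like spike per second; 30 fps video.
fs, fps = 250, 30
ecg = 0.05 * np.random.default_rng(4).normal(size=10 * fs)
ecg[::fs] += 1.0    # R-like spikes
print(end_diastole_frames(ecg, fs, fps))   # ~one key frame per heartbeat
```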

  5. Introduction to big bang nucleosynthesis and modern cosmology

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Kusakabe, Motohiko; Kajino, Toshitaka

    Primordial nucleosynthesis remains as one of the pillars of modern cosmology. It is the testing ground upon which many cosmological models must ultimately rest. It is our only probe of the universe during the important radiation-dominated epoch in the first few minutes of cosmic expansion. This paper reviews the basic equations of space-time, cosmology, and big bang nucleosynthesis. We also summarize the current state of observational constraints on primordial abundances along with the key nuclear reactions and their uncertainties. We summarize which nuclear measurements are most crucial during the big bang. We also review various cosmological models and their constraints. In particular, we analyze the constraints that big bang nucleosynthesis places upon the possible time variation of fundamental constants, along with constraints on the nature and origin of dark matter and dark energy, long-lived supersymmetric particles, gravity waves, and the primordial magnetic field.

  6. A case study of learning writing in service-learning through CMC

    NASA Astrophysics Data System (ADS)

    Li, Yunxiang; Ren, LiLi; Liu, Xiaomian; Song, Yinjie; Wang, Jie; Li, Jiaxin

    2011-06-01

    Computer-mediated communication (CMC) online has developed successfully with its adoption by educators. Service learning is a teaching and learning strategy that integrates community service with academic instruction and reflection to enrich students' understanding of course content, meet genuine community needs, develop career-related skills, and foster responsible citizenship. This study focuses on EFL writing learned via CMC in an online virtual environment of service placements, taking service learning as a case study to probe the scoring algorithm in CMC. The study combines quantitative and qualitative research to examine the practical feasibility and effectiveness of EFL writing learning via CMC in service learning in China.

  7. A wireless handheld probe with spectrally constrained evolution strategies for diffuse optical imaging of tissue

    NASA Astrophysics Data System (ADS)

    Flexman, M. L.; Kim, H. K.; Stoll, R.; Khalil, M. A.; Fong, C. J.; Hielscher, A. H.

    2012-03-01

    We present a low-cost, wireless diffuse optical imaging device. The handheld probe is fast, portable, and can be applied to a wide range of both static and dynamic imaging applications, including breast cancer, functional brain imaging, and peripheral artery disease. The continuous-wave probe has four near-infrared wavelengths and uses digital detection techniques to perform measurements at 2.3 Hz. Using a multispectral evolution algorithm for chromophore reconstruction, we can measure absolute oxygenated and deoxygenated hemoglobin concentrations as well as scattering in tissue. The performance of the device is demonstrated using a series of liquid phantoms composed of Intralipid®, ink, and dye.

  8. Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice

    NASA Astrophysics Data System (ADS)

    Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge

    2007-07-01

    Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be simultaneously imaged in the same subject. In this work we present a spectral unmixing algorithm capable of separating signals from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores with well-separated emission spectra, namely CFSE and SNARF, as well as from DsRed- and GFP-fused cells in F5/B10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed, and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths), and spectral unmixing can be applied to measure the tissue dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion in choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
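
    Given measured emission spectra of the individual fluorophores, a linear unmixing step can be written compactly as a non-negative least-squares problem. The sketch below uses SciPy's nnls with two made-up Gaussian emission profiles standing in for the real CFSE/SNARF or GFP/DsRed spectra; it shows only the standard linear-unmixing core, not the authors' full algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(spectrum, endmembers):
    """Linear spectral unmixing: solve spectrum ~= endmembers @ weights
    with non-negative weights, one weight per fluorophore."""
    weights, _ = nnls(endmembers, spectrum)
    return weights

wavelengths = np.linspace(500, 700, 100)
gauss = lambda mu, s: np.exp(-0.5 * ((wavelengths - mu) / s) ** 2)
# Columns = illustrative emission profiles of two "fluorophores".
E = np.column_stack([gauss(520, 15), gauss(640, 20)])

# A noisy 70/30 mixture of the two profiles.
mixed = (0.7 * E[:, 0] + 0.3 * E[:, 1]
         + 0.01 * np.random.default_rng(5).normal(size=wavelengths.size))
print(unmix(mixed, E).round(2))   # ~[0.7, 0.3]
```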

  9. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validations. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
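
    The score-level fusion step lends itself to a short sketch: each face pair is represented by a three-score vector, and a binomial logistic regression classifier is evaluated with 10-fold cross validation, mirroring the procedure described above. The synthetic genuine/impostor score distributions below are placeholders, not the paper's multispectral dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each face pair yields a vector of three matcher scores; a classifier
# decides match vs. non-match. Scores here are synthetic placeholders.
rng = np.random.default_rng(6)
genuine = rng.normal(0.7, 0.1, (300, 3))     # three matchers, genuine pairs
impostor = rng.normal(0.4, 0.1, (300, 3))    # three matchers, impostor pairs
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(300), np.zeros(300)]

blr = LogisticRegression()                   # the BLR option named above
acc = cross_val_score(blr, X, y, cv=10)      # 10-fold cross validation
print(f"mean accuracy: {acc.mean():.3f}")
```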

  10. NASA Smart Surgical Probe Project

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Andrews, Russell J.; Jeffrey, Stefanie S.; Guerrero, Michael; Papasin, Richard; Koga, Dennis (Technical Monitor)

    2002-01-01

    Information technologies being developed by NASA to assist astronaut-physicians in responding to medical emergencies during long space flights are being employed for the improvement of women's health in the form of a "smart surgical probe". This technology, initially developed for neurosurgery applications, not only has enormous potential for the diagnosis and treatment of breast cancer, but also broad applicability to a wide range of medical challenges. For the breast cancer application, the smart surgical probe is being designed to "see" a suspicious lump, determine from its features whether it is cancerous, and ultimately predict how the disease may progress. A revolutionary early breast cancer detection tool based on this technology has been developed by a commercial company and is being tested in human clinical trials at the University of California at Davis, School of Medicine. The smart surgical probe technology makes use of adaptive intelligent software (hybrid neural network/fuzzy logic algorithms) with the most advanced physiologic sensors to provide real-time in vivo tissue characterization for the detection, diagnosis and treatment of tumors, including determination of the tumor microenvironment and evaluation of tumor margins. The software solutions and tools from these medical applications will lead to the development of better real-time minimally invasive smart surgical probes for emergency medical care and treatment of astronauts on long space flights.

  11. Atom probe trajectory mapping using experimental tip shape measurements.

    PubMed

    Haley, D; Petersen, T; Ringer, S P; Smith, G D W

    2011-11-01

    Atom probe tomography is an accurate analytical and imaging technique which can reconstruct the complex structure and composition of a specimen in three dimensions. Despite providing locally high spatial resolution, atom probe tomography suffers from global distortions due to a complex projection function between the specimen and detector which is different for each experiment and can change during a single run. To aid characterization of this projection function, this work demonstrates a method for the reverse projection of ions from an arbitrary projection surface in 3D space back to an atom probe tomography specimen surface. Experimental data from transmission electron microscopy tilt tomography are combined with point cloud surface reconstruction algorithms and finite element modelling to generate a mapping back to the original tip surface in a physically and experimentally motivated manner. As a case study, aluminium tips are imaged using transmission electron microscopy before and after atom probe tomography, and the specimen profiles used as input in surface reconstruction methods. This reconstruction method is a general procedure that can be used to generate mappings between a selected surface and a known tip shape using numerical solutions to the electrostatic equation, with quantitative solutions to the projection problem readily achievable in tens of minutes on a contemporary workstation. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.

  12. Secondary structure prediction and structure-specific sequence analysis of single-stranded DNA.

    PubMed

    Dong, F; Allawi, H T; Anderson, T; Neri, B P; Lyamichev, V I

    2001-08-01

    DNA sequence analysis by oligonucleotide binding is often affected by interference with the secondary structure of the target DNA. Here we describe an approach that improves DNA secondary structure prediction by combining enzymatic probing of DNA by structure-specific 5'-nucleases with an energy minimization algorithm that utilizes the 5'-nuclease cleavage sites as constraints. The method can identify structural differences between two DNA molecules caused by minor sequence variations such as a single nucleotide mutation. It also demonstrates the existence of long-range interactions between DNA regions separated by >300 nt and the formation of multiple alternative structures by a 244 nt DNA molecule. The differences in the secondary structure of DNA molecules revealed by 5'-nuclease probing were used to design structure-specific probes for mutation discrimination that target the regions of structural, rather than sequence, differences. We also demonstrate the performance of structure-specific 'bridge' probes complementary to non-contiguous regions of the target molecule. The structure-specific probes do not require the high stringency binding conditions necessary for methods based on mismatch formation and permit mutation detection at temperatures from 4 to 37 degrees C. Structure-specific sequence analysis is applied for mutation detection in the Mycobacterium tuberculosis katG gene and for genotyping of the hepatitis C virus.

  13. SUPER-RESOLUTION ULTRASOUND TOMOGRAPHY: A PRELIMINARY STUDY WITH A RING ARRAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HUANG, LIANJIE; SIMONETTI, FRANCESCO; DURIC, NEBOJSA

    2007-01-18

    Ultrasound tomography attempts to retrieve the structure of an object by exploiting the interaction of acoustic waves with the object. A fundamental limit of ultrasound tomography is that features cannot be resolved if they are spaced less than λ/2 apart, where λ is the wavelength of the probing wave, regardless of the degree of accuracy of the measurements. Therefore, since the attenuation of the probing wave with propagation distance increases as λ decreases, resolution has to be traded against imaging depth. Recently, it has been shown that the λ/2 limit is a consequence of the Born approximation (implicit in the imaging algorithms currently employed), which neglects the distortion of the probing wavefield as it travels through the medium to be imaged. On the other hand, such a distortion, which is due to the multiple scattering phenomenon, can encode unlimited resolution in the radiating component of the scattered field. Previously, a resolution better than λ/3 has been reported in these proceedings [F. Simonetti, pp. 126 (2006)] in the case of elastic wave probing. In this paper, we demonstrate experimentally a resolution better than λ/4 for objects immersed in a water bath probed by means of a ring array which excites and detects pressure waves in a full view configuration.
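
    The λ/2 limit is simple arithmetic once the probing frequency is fixed. The short Python example below uses assumed values (a 1 MHz probe in water) to make the resolution/depth trade-off concrete; neither number comes from the study above.

      # Illustrative arithmetic for the lambda/2 resolution limit.
      c = 1480.0   # assumed speed of sound in water, m/s
      f = 1.0e6    # assumed probing frequency, Hz
      wavelength = c / f
      print(f"wavelength            = {wavelength * 1e3:.2f} mm")
      print(f"classical limit (l/2) = {wavelength / 2 * 1e3:.2f} mm")
      print(f"reported limit  (l/4) = {wavelength / 4 * 1e3:.2f} mm")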

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, timeimplicit HO-LO system via a Jacobian-free Newton Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to includemore » operator-split radiation-hydrodynamics.« less

  15. Transcultural Endocrinology: Adapting Type-2 Diabetes Guidelines on a Global Scale.

    PubMed

    Nieto-Martínez, Ramfis; González-Rivas, Juan P; Florez, Hermes; Mechanick, Jeffrey I

    2016-12-01

    Type-2 diabetes (T2D) needs to be prevented and treated effectively to reduce its burden and consequences. White papers, such as evidence-based clinical practice guidelines (CPG) and their more portable versions, clinical practice algorithms and clinical checklists, may improve clinical decision-making and diabetes outcomes. However, CPG are underused and poorly validated. Protocols that translate and implement these CPG are needed. This review presents the global dimension of T2D, details the importance of white papers in the transculturalization process, compares relevant international CPG, analyzes cultural variables, and summarizes translation strategies that can improve care. Specific protocols and algorithmic tools are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
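
    For readers unfamiliar with the Eigensystem Realization Algorithm, the minimal Python sketch below shows its core Hankel-matrix/SVD step on synthetic single-channel free-decay data. It is not NASA's autonomous implementation; the signal, Hankel dimensions, and model order are illustrative assumptions.

      # ERA core step: Hankel matrices from free-decay data, SVD, and
      # extraction of the discrete-time state matrix. Synthetic data.
      import numpy as np

      fs, n = 100.0, 400
      t = np.arange(n) / fs
      y = np.exp(-0.5 * t) * np.cos(2 * np.pi * 5.0 * t)  # one 5 Hz mode

      r, s = 60, 60
      H0 = np.array([[y[i + j] for j in range(s)] for i in range(r)])
      H1 = np.array([[y[i + j + 1] for j in range(s)] for i in range(r)])

      U, sv, Vt = np.linalg.svd(H0)
      order = 2  # keep the dominant singular-value pair (one mode)
      Sr = np.diag(np.sqrt(sv[:order]))
      A = np.linalg.inv(Sr) @ U[:, :order].T @ H1 @ Vt[:order].T @ np.linalg.inv(Sr)

      poles = np.log(np.linalg.eigvals(A)) * fs    # continuous-time poles
      print("identified frequency (Hz):", np.abs(poles.imag) / (2 * np.pi))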

  17. Tachycardia detection in ICDs by Boston Scientific : Algorithms, pearls, and pitfalls.

    PubMed

    Zanker, Norbert; Schuster, Diane; Gilkerson, James; Stein, Kenneth

    2016-09-01

    The aim of this study was to summarize how implantable cardioverter defibrillators (ICDs) by Boston Scientific sense, detect, discriminate rhythms, and classify episodes. Modern devices include multiple programming selections, diagnostic features, therapy options, memory functions, and device-related history features. Device operation includes logical steps from sensing, detection, discrimination, therapy delivery to history recording. The program is designed to facilitate the application of the device algorithms to the individual patient's clinical needs. Features and functions described in this article represent a selective excerpt by the authors from Boston Scientific publicly available product resources. Programming of ICDs may affect patient outcomes. Patient-adapted and optimized programming requires understanding of device operation and concepts.

  18. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful for the calculation of thermodynamic averages. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling the important configurations preferentially. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
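
    A minimal Metropolis sampler makes the accept/reject rule concrete. The Python sketch below samples a 1-D Boltzmann distribution for a harmonic potential; all parameters are chosen arbitrarily for illustration.

      # Metropolis sampling of p(x) ~ exp(-x^2 / (2*kT)) via a harmonic
      # energy E(x) = x^2/2 and the accept/reject rule.
      import numpy as np

      rng = np.random.default_rng(1)
      kT, n_steps, step = 1.0, 50_000, 1.0
      x, samples = 0.0, []

      for _ in range(n_steps):
          x_new = x + rng.uniform(-step, step)
          dE = 0.5 * x_new**2 - 0.5 * x**2
          if dE <= 0 or rng.random() < np.exp(-dE / kT):
              x = x_new  # accept the move
          samples.append(x)

      print("sample mean, variance:", np.mean(samples), np.var(samples))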

  19. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    PubMed

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    Conventional methods to assess human gait are either too expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were assessed on their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
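
    In essence, the reviewed coding algorithms reduce to matching claims against an ICD-9 code list. A minimal sketch follows, with invented claim records; the narrow definition uses code 518.81, as cited above.

      # Flag claims whose diagnosis codes match an ARF definition.
      ARF_CODES = {"518.81"}  # widen to {"518.8", "518.81"} for the broad definition

      claims = [  # invented records, not real data
          {"claim_id": 1, "dx_codes": ["486", "518.81"]},
          {"claim_id": 2, "dx_codes": ["410.71"]},
          {"claim_id": 3, "dx_codes": ["518.8"]},
      ]

      flagged = [c["claim_id"] for c in claims
                 if ARF_CODES.intersection(c["dx_codes"])]
      print("claims flagged as ARF:", flagged)  # -> [1]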

  1. Management of patients infected with airborne-spread diseases: an algorithm for infection control professionals.

    PubMed

    Rebmann, Terri

    2005-12-01

    Many US hospitals lack the capacity to house safely a surge of potentially infectious patients, increasing the risk of secondary transmission. Respiratory protection and negative-pressure rooms are needed to prevent transmission of airborne-spread diseases, but US hospitals lack available and/or properly functioning negative-pressure rooms. Creating new rooms or retrofitting existing facilities is time-consuming and expensive. Safe methods of managing patients with airborne-spread diseases and establishing temporary negative-pressure and/or protective environments were determined by a literature review. Relevant data were analyzed and synthesized to generate a response algorithm. Ideal patient management and placement guidelines, including instructions for choosing respiratory protection and creating temporary negative-pressure or other protective environments, were delineated. Findings were summarized in a treatment algorithm. The threat of bioterrorism and emerging infections increases health care's need for negative-pressure and/or protective environments. The algorithm outlines appropriate response steps to decrease transmission risk until an ideal protective environment can be utilized. Using this algorithm will prepare infection control professionals to respond more effectively during a surge of potentially infectious patients following a bioterrorism attack or emerging infectious disease outbreak.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Hornback, Donald Eric; Johnson, Jeffrey O.

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  3. IonRayTrace: An HF Propagation Model for Communications and Radar Applications

    DTIC Science & Technology

    2014-12-01

    Fragmentary abstract; recoverable content: IonRayTrace is used to model the impact of ionosphere variability on detection algorithms, and its source code was modified to include flexible gridding. The model uses a reference ionosphere for its environmental background [3], and IonRayTrace's operation is summarized briefly in Section 3. Figures referenced in the source: plasma frequency (MHz) and ionospheric absorption (dB).

  4. Structural assembly in space

    NASA Technical Reports Server (NTRS)

    Stokes, J. W.; Pruett, E. C.

    1980-01-01

    A cost algorithm for predicting assembly costs for large space structures is given. Assembly scenarios are summarized which describe the erection, deployment, and fabrication tasks for five large space structures. The major activities that impact total costs for structure assembly, from launch through deployment and assembly to scientific instrument installation and checkout, are described. Individual cost elements such as assembly fixtures, handrails, or remote manipulators are also presented.

  5. Reduced Kalman Filters for Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    2011-01-01

    This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.

  6. Causal diagrams and multivariate analysis III: confound it!

    PubMed

    Jupiter, Daniel C

    2015-01-01

    This commentary concludes my series concerning inclusion of variables in multivariate analyses. We take up the issues of confounding and effect modification and summarize the work we have thus far done. Finally, we provide a rough algorithm to help guide us through the maze of possibilities that we have outlined. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  7. The mid-IR and near-IR interferometry of AGNs: key results and their implications

    NASA Astrophysics Data System (ADS)

    Kishimoto, M.

    2015-09-01

    Infrared interferometry has been very productive in directly probing the structure of AGNs at sub-pc scales. With tens of objects already probed in the mid-IR and near-IR, I will summarize the key results and implications from this direct exploration. The Keck interferometry in the near-IR and the VLTI in the mid-IR established the luminosity dependence of the torus size and structure, while the latter also revealed an equatorial structure at several Rsub (dust sublimation radius), and a polar-elongated region at a few tens of Rsub. Notably, this polar component seems to dominate the compact mid-IR flux. This component can persuasively be attributed to a polar outflow. However, interferometry, through emissivity estimations, also indicates that it is not a UV-optically-thin cloud but is participating in the obscuration of the nucleus. I will discuss how to accommodate all these facts to build a consistent picture.

  8. Unraveling wall conditioning effects on plasma facing components in NSTX-U with the Materials Analysis Particle Probe (MAPP)

    DOE PAGES

    Bedoya, F.; Allain, J. P.; Kaita, R.; ...

    2016-07-14

    A novel PFC diagnostic, the Materials Analysis Particle Probe (MAPP), has recently been commissioned in the National Spherical Torus Experiment Upgrade (NSTX-U). MAPP is currently monitoring the chemical evolution of the PFCs in the NSTX-U lower divertor, at 107 cm from the tokamak axis, on a day-to-day basis. In this work, we summarize the methodology that was adopted to obtain qualitative and quantitative descriptions of the sample chemistry. Using this methodology, we were able to describe all the features in all our spectra to within a standard deviation of ±0.22 eV in position and ±248 s⁻¹ eV in area. Additionally, we provide an example of this methodology with data from boronized ATJ graphite exposed to NSTX-U plasmas.

  9. Probing the fermionic Higgs portal at lepton colliders

    DOE PAGES

    Fedderke, Michael A.; Lin, Tongyan; Wang, Lian -Tao

    2016-04-26

    Here, we study the sensitivity of future electron-positron colliders to UV completions of the fermionic Higgs portal operator H†Hχ̄χ. Measurements of the precision electroweak S and T parameters and the e⁺e⁻ → Zh cross-section at the CEPC, FCC-ee, and ILC are considered. The scalar completion of the fermionic Higgs portal is closely related to the scalar Higgs portal, and we summarize existing results. We devote the bulk of our analysis to a singlet-doublet fermion completion. Assuming the doublet is sufficiently heavy, we construct the effective field theory (EFT) at dimension-6 in order to compute contributions to the observables. We also provide full one-loop results for S and T in the general mass parameter space. In both completions, future precision measurements can probe the new states at the (multi-)TeV scale, beyond the direct reach of the LHC.

  11. Application of atomic force microscopy to microbial surfaces: from reconstituted cell surface layers to living cells.

    PubMed

    Dufrêne, Y F

    2001-02-01

    The application of atomic force microscopy (AFM) to probe the ultrastructure and physical properties of microbial cell surfaces is reviewed. The unique capabilities of AFM can be summarized as follows: imaging surface topography with (sub)nanometer lateral resolution; examining biological specimens under physiological conditions; and measuring local properties and interaction forces. AFM is being used increasingly for: (i) visualizing the surface ultrastructure of microbial cell surface layers, including bacterial S-layers, purple membranes, porin OmpF crystals and fungal rodlet layers; (ii) monitoring conformational changes of individual membrane proteins; (iii) examining the morphology of bacterial biofilms; (iv) revealing the nanoscale structure of living microbial cells, including fungi, yeasts and bacteria; (v) mapping interaction forces at microbial surfaces, such as van der Waals and electrostatic forces, solvation forces, and steric/bridging forces; and (vi) probing the local mechanical properties of cell surface layers and of single cells.

  12. Empirical study of seven data mining algorithms on different characteristics of datasets for biomedical classification applications.

    PubMed

    Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi

    2017-11-02

    Various kinds of data mining algorithms are continuously being developed along with their related disciplines, and they differ in applicable scope and performance. Hence, finding a suitable algorithm for a dataset is becoming an important concern for biomedical researchers aiming to solve practical problems promptly. In this paper, seven established algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected for study. The seven algorithms were applied to the 12 most-accessed UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 datasets. The two ensemble algorithms reached high classification accuracy on most datasets. Moreover, random forest performed better than AdaBoost on unbalanced multi-class datasets. Simple algorithms, such as naïve Bayes and the logistic regression model, are suitable for small datasets with high correlation between the task and the other, non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms performed well on binary- and multi-class task datasets. The support vector machine was more adept on balanced small datasets for binary-class tasks. No algorithm can maintain the best performance across all datasets. The applicability of the seven data mining algorithms on datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
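
    A comparison protocol of this kind is straightforward to reproduce. The sketch below runs scikit-learn counterparts of the seven algorithms on one illustrative dataset; since scikit-learn does not implement C4.5, CART is used as a stand-in, and none of this reflects the authors' exact setup.

      # Cross-validated comparison of seven classifiers on one dataset.
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.svm import SVC
      from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.linear_model import LogisticRegression

      X, y = load_breast_cancer(return_X_y=True)
      models = {
          "C4.5 (CART stand-in)": DecisionTreeClassifier(),
          "SVM": SVC(),
          "AdaBoost": AdaBoostClassifier(),
          "k-NN": KNeighborsClassifier(),
          "naive Bayes": GaussianNB(),
          "random forest": RandomForestClassifier(),
          "logistic regression": LogisticRegression(max_iter=5000),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)
          print(f"{name:22s} mean accuracy = {scores.mean():.3f}")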

  13. Study of a high-resolution PET system using a Silicon detector probe

    NASA Astrophysics Data System (ADS)

    Brzeziński, K.; Oliver, J. F.; Gillam, J.; Rafecas, M.

    2014-10-01

    A high-resolution silicon detector probe, in coincidence with a conventional PET scanner, is expected to provide images of higher quality than those achievable using the scanner alone. Spatial resolution should improve due to the finer pixelization of the probe detector, while increased sensitivity in the probe vicinity is expected to decrease noise. A PET-probe prototype utilizing this principle is being developed. The system includes a probe consisting of ten layers of silicon detectors, each an 80 × 52 array of 1 × 1 × 1 mm3 pixels, to be operated in coincidence with a modern clinical PET scanner. Detailed simulation studies of this system have been performed to assess the effect of the additional probe information on the quality of the reconstructed images. A grid of point sources was simulated to study the contribution of the probe to the system resolution at different locations over the field of view (FOV). A resolution phantom was used to demonstrate the effect on image resolution for two probe positions. A homogeneous source distribution with hot and cold regions was used to demonstrate that the localized improvement in resolution does not come at the expense of the overall quality of the image. Since the improvement is constrained to an area close to the probe, breast imaging is proposed as a potential application for the novel geometry. To this end, a simplified breast phantom, adjacent to heart and torso compartments, was simulated and the effect of the probe on lesion detectability was observed through measurements of the local contrast recovery coefficient-to-noise ratio (CNR). The list-mode ML-EM algorithm was used for image reconstruction in all cases. As expected, the point spread function of the PET-probe system was found to be non-isotropic and to vary with position, offering improvement in specific regions. Increases in resolution by factors of up to 2 were observed in the region close to the probe. Images of the resolution phantom showed visible improvement in resolution when the probe was included in the simulations. The image quality study demonstrated that contrast and spill-over ratio in other areas of the FOV were not sacrificed for this enhancement. The CNR study performed on the breast phantom indicates increased lesion detectability provided by the probe.
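
    The ML-EM update used for reconstruction has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1. The Python sketch below applies a binned (not list-mode) version to a toy system matrix; all dimensions and data are invented.

      # Binned ML-EM iteration on a toy emission-tomography problem.
      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.random((64, 16))            # hypothetical system matrix (LORs x voxels)
      x_true = rng.random(16)
      y = rng.poisson(50 * (A @ x_true))  # noisy projection data

      x = np.ones(16)                     # uniform initial image
      sens = A.T @ np.ones(64)            # sensitivity image (column sums)
      for _ in range(50):
          proj = A @ x
          x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

      err = np.linalg.norm(x / 50 - x_true) / np.linalg.norm(x_true)
      print(f"relative reconstruction error: {err:.3f}")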

  14. Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations

    NASA Astrophysics Data System (ADS)

    Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.

    2015-12-01

    Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of many remote sensors that collect daily observations of land surface soil moisture. However, many factors, such as ancillary data and vegetation scattering, can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing and Technology (CREST) center at the City University of New York has a site located at Millbrook, NY with several in situ soil moisture probes and an L-band radiometer similar to that of the Soil Moisture Active Passive (SMAP) mission. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven locations from 2012 to 2015, six of them with Hydra probes. This study utilizes the observations from the in situ data and the L-band radiometer close to the ground (at 3 m height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the Hydra probes and a moderate correlation with the Cosmic-ray Soil Moisture Observing System (COSMOS) probes. Several factors, including the mismatch between the satellite footprint and point measurements, can cause these discrepancies. Interpolation techniques are used to expand the point measurements from the six locations to the AMSR2 footprint. Finally, the effect of penetration depth on the microwave signal and inconsistencies with ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. This validation algorithm and similar studies will be conducted for the SMAP mission. Keywords: Remote Sensing, Soil Moisture, AMSR2, SMAP, L-Band.

  15. LSST Probes of Dark Energy: New Energy vs New Gravity

    NASA Astrophysics Data System (ADS)

    Bradshaw, Andrew; Tyson, A.; Jee, M. J.; Zhan, H.; Bard, D.; Bean, R.; Bosch, J.; Chang, C.; Clowe, D.; Dell'Antonio, I.; Gawiser, E.; Jain, B.; Jarvis, M.; Kahn, S.; Knox, L.; Newman, J.; Wittman, D.; Weak Lensing, LSST; LSS Science Collaborations

    2012-01-01

    Is the late-time acceleration of the universe due to new physics in the form of stress-energy or a departure from General Relativity? LSST will measure the shape, magnitude, and color of 4×10⁹ galaxies to high S/N over 18,000 square degrees. These data will be used to separately measure the gravitational growth of mass structure and distance vs. redshift to unprecedented precision by combining multiple probes in a joint analysis. Of the five LSST probes of dark energy, the weak gravitational lensing (WL) and baryon acoustic oscillation (BAO) probes are particularly effective in combination. By measuring the 2-D BAO scale in ugrizy-band photometric-redshift-selected samples, LSST will determine the angular diameter distance to a dozen redshifts with sub-percent-level errors. Reconstruction of the WL shear power spectrum on linear and weakly non-linear scales, and of the cross-correlation of shear measured in different photometric redshift bins, provides a constraint on the evolution of dark energy that is complementary to the purely geometric measures provided by supernovae and BAO. Cross-correlation of the WL shear and BAO signal within redshift shells minimizes the sensitivity to systematics. LSST will also detect shear peaks, providing independent constraints. Tomographic study of the shear of background galaxies as a function of redshift allows a geometric test of dark energy. To extract the dark energy signal and distinguish between the two forms of new physics, LSST will rely on accurate stellar point-spread functions (PSF) and unbiased reconstruction of galaxy image shapes from hundreds of exposures. Although a weighted co-added deep image has high S/N, it is a form of lossy compression; Bayesian forward-modeling algorithms can in principle use all the information. We explore systematic effects on shape measurements and present tests of an algorithm called Multi-Fit, which appears to avoid PSF-induced shear systematics in a computationally efficient way.

  16. Development of an upwind, finite-volume code with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Molvik, Gregory A.

    1995-01-01

    Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques and of a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver, and the resulting schemes are made fully implicit, including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical and turbulence equations. These codes have been validated on numerous hypersonic test cases and have shown excellent agreement with existing data. This report summarizes the research that took place from August 1, 1994 to January 1, 1995.

  17. Automatic identification of comparative effectiveness research from Medline citations to support clinicians’ treatment information needs

    PubMed Central

    Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo

    2014-01-01

    Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677
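
    As a hedged illustration of the classification step (the authors combine semantic NLP, citation metadata, and machine learning; only a bag-of-words stand-in is shown here), the sketch below trains a TF-IDF plus logistic-regression pipeline on invented citation titles.

      # Toy comparative-study classifier; titles and labels are invented.
      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      titles = [
          "Drug A versus drug B for major depression: a randomized trial",
          "Comparative efficacy of two SSRIs in primary care",
          "Case report: rare adverse event with drug C",
          "Pathophysiology of depression: a narrative review",
      ]
      labels = [1, 1, 0, 0]  # 1 = comparative effectiveness study

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(titles, labels)
      print(clf.predict(["Head-to-head comparison of drug D and drug E"]))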

  18. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than that of a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place-and-route program for the Intel iPSC/2 Hypercube is currently being developed.
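
    The annealing move-acceptance profile mentioned above is the standard Metropolis rule. The Python sketch below, with arbitrary costs and temperatures, shows how the acceptance probability of a cost-increasing move shrinks as the schedule cools.

      # Move-acceptance rule for simulated-annealing placement.
      import math, random

      def accept(d_cost, temperature):
          """Accept a proposed cell move with Metropolis probability."""
          if d_cost <= 0:
              return True
          return random.random() < math.exp(-d_cost / temperature)

      for T in (100.0, 10.0, 1.0):  # cooling: acceptance of a bad move drops
          rate = sum(accept(5.0, T) for _ in range(10_000)) / 10_000
          print(f"T={T:6.1f}: acceptance of dCost=5 move ~ {rate:.2f}")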

  19. Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo

    2010-01-01

    The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.

  20. Gene selection heuristic algorithm for nutrigenomics studies.

    PubMed

    Valour, D; Hue, I; Grimard, B; Valour, B

    2013-07-15

    Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast computation in "R." Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. A homogeneous Markov process applied to this stochastic matrix then converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the needs of biologists, who often require small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.

  1. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
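
    For reference, the serial recurrence that odd-even cyclic reduction reorganizes for vector hardware is the Thomas algorithm. The Python sketch below shows only this serial baseline on a small symmetric positive definite system; the vectorized reduction itself is not reproduced.

      # Thomas algorithm for a tridiagonal solve (serial baseline).
      import numpy as np

      def thomas(a, b, c, d):
          """Solve with sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # Symmetric positive definite test system (1-D Poisson stencil).
      n = 8
      a = np.full(n, -1.0); a[0] = 0.0
      b = np.full(n, 2.0)
      c = np.full(n, -1.0); c[-1] = 0.0
      print(thomas(a, b, c, np.ones(n)))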

  2. An O(Nm(sup 2)) Plane Solver for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.

    1999-01-01

    A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect-correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence depends on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.

  3. Cubesat Application for Planetary Entry (CAPE) Missions: Micro-Return Capsule (MIRCA)

    NASA Technical Reports Server (NTRS)

    Esper, Jaime

    2016-01-01

    The Cubesat Application for Planetary Entry Missions (CAPE) concept describes a high-performing Cubesat system which includes a propulsion module and miniaturized technologies capable of surviving atmospheric entry heating, while reliably transmitting scientific and engineering data. The Micro Return Capsule (MIRCA) is CAPE's first planetary entry probe flight prototype. Within this context, this paper briefly describes CAPE's configuration and typical operational scenario, and summarizes ongoing work on the design and basic aerodynamic characteristics of the prototype MIRCA vehicle. CAPE not only opens the door to new planetary mission capabilities, it also offers relatively low-cost opportunities especially suitable to university participation. In broad terms, CAPE consists of two main functional components: the "service module" (SM), and "CAPE's entry probe" (CEP). The SM contains the subsystems necessary to support vehicle targeting (propulsion, ACS, computer, power) and the communications capability to relay data from the CEP probe to an orbiting "mother-ship". The CEP itself carries the scientific instrumentation capable of measuring atmospheric properties (such as density, temperature, composition), and embedded engineering sensors for Entry, Descent, and Landing (EDL). The first flight of MIRCA was successfully completed on 10 October 2015 as a "piggy-back" payload onboard a NASA stratospheric balloon launched from Ft. Sumner, NM.

  4. Texas A&M University in the JET Collaboration - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fries, Rainer; Ko, Che-Ming

    This final report summarizes the work done by PIs at Texas A&M University within the JET Topical Collaboration. The main focus of the group at Texas A&M has been the development and implementation of a hadronization model suitable for calculating the hadronization of jet showers in heavy ion collisions event by event. The group successfully developed a hybrid model of parton recombination and remnant string fragmentation, including recombination with thermal partons. A code realizing this model was developed and shared with other JET members. In addition, the group at Texas A&M worked on both open and hidden heavy flavor probes. In particular, they developed a description of heavy flavor hadronization based on recombination, consistent with in-medium scattering rates of heavy quarks, and suggested the Ds meson as a precise probe of the hadronization mechanism. Another noteworthy focus of their work was electromagnetic probes, in particular dileptons and photons from interactions of jets with the medium. In the soft sector the group has made several contributions to modern topics, e.g., the splitting of elliptic flow between isospin partners and the role of the initial strong gluon fields.

  5. Probing the Interaction between Nanoparticles and Lipid Membranes by Quartz Crystal Microbalance with Dissipation Monitoring

    PubMed Central

    Yousefi, Nariman; Tufenkji, Nathalie

    2016-01-01

    There is increasing interest in using quartz crystal microbalance with dissipation monitoring (QCM-D) to investigate the interaction of nanoparticles (NPs) with model surfaces. The high sensitivity, ease of use and the ability to monitor interactions in real-time has made it a popular technique for colloid chemists, biologists, bioengineers, and biophysicists. QCM-D has been recently used to probe the interaction of NPs with supported lipid bilayers (SLBs) as model cell membranes. The interaction of NPs with SLBs is highly influenced by the quality of the lipid bilayers. Unlike many surface sensitive techniques, by using QCM-D, the quality of SLBs can be assessed in real-time, hence QCM-D studies on SLB-NP interactions are less prone to the artifacts arising from bilayers that are not well formed. The ease of use and commercial availability of a wide range of sensor surfaces also have made QCM-D a versatile tool for studying NP interactions with lipid bilayers. In this review, we summarize the state-of-the-art on QCM-D based techniques for probing the interactions of NPs with lipid bilayers. PMID:27995125

  6. A fast response miniature probe for wet steam flow field measurements

    NASA Astrophysics Data System (ADS)

    Bosdas, Ilias; Mansour, Michel; Kalfas, Anestis I.; Abhari, Reza S.

    2016-12-01

    Modern steam turbines require operational flexibility due to renewable energies’ increasing share of the electrical grid. Additionally, the continuous increase in energy demand necessitates efficient design of the steam turbines as well as power output augmentation. The long turbine rotor blades at the machines’ last stages are prone to mechanical vibrations and as a consequence time-resolved experimental data under wet steam conditions are essential for the development of large-scale low-pressure steam turbines. This paper presents a novel fast response miniature heated probe for unsteady wet steam flow field measurements. The probe has a tip diameter of 2.5 mm, and a miniature heater cartridge ensures uncontaminated pressure taps from condensed water. The probe is capable of providing the unsteady flow angles, total and static pressure as well as the flow Mach number. The operating principle and calibration procedure are described in the current work and a detailed uncertainty analysis demonstrates the capability of the new probe to perform accurate flow field measurements under wet steam conditions. In order to exclude any data possibly corrupted by droplets’ impact or evaporation from the heating process, a filtering algorithm was developed and implemented in the post-processing phase of the measured data. In the last part of this paper the probe is used in an experimental steam turbine test facility and measurements are conducted at the inlet and exit of the last stage with an average wetness mass fraction of 8.0%.
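
    The paper's filtering algorithm is not reproduced here, but the idea of rejecting droplet-corrupted samples can be sketched as robust median-based despiking; the synthetic signal, kernel size, and threshold below are illustrative assumptions.

      # Flag pressure samples that deviate strongly from a running median.
      import numpy as np
      from scipy.signal import medfilt

      rng = np.random.default_rng(3)
      p = np.sin(np.linspace(0, 6 * np.pi, 2000)) + 0.02 * rng.standard_normal(2000)
      hits = rng.choice(2000, size=15, replace=False)
      p[hits] += 3.0  # simulated droplet impacts on the pressure signal

      baseline = medfilt(p, kernel_size=31)
      resid = p - baseline
      mask = np.abs(resid) < 5 * np.median(np.abs(resid))  # robust threshold
      print(f"flagged {np.count_nonzero(~mask)} of {p.size} samples as spikes")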

  7. A Bayesian Estimate of the CMB-Large-scale Structure Cross-correlation

    NASA Astrophysics Data System (ADS)

    Moura-Santos, E.; Carvalho, F. C.; Penna-Lima, M.; Novaes, C. P.; Wuensche, C. A.

    2016-08-01

    Evidence for the late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs-Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB-LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of CMB temperature and galaxy contrast maps, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, thereby avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.

  8. Morphological spot counting from stacked images for automated analysis of gene copy numbers by fluorescence in situ hybridization.

    PubMed

    Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli

    2002-01-01

    Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescently labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid (DNA). Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
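
    The core top-hat step can be sketched briefly. The Python example below applies a white top-hat to a synthetic slice with an irregular background and counts connected components above a threshold; the 3-D grouping across slices and the legitimacy filters described above are omitted, and every parameter is invented.

      # Top-hat spot detection on one synthetic FISH slice.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(4)
      img = rng.normal(100, 2, size=(128, 128))        # irregular background
      yy, xx = np.mgrid[0:128, 0:128]
      for cy, cx in [(30, 40), (80, 90), (100, 20)]:   # three synthetic spots
          img += 40 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / 4.0)

      tophat = ndimage.white_tophat(img, size=9)       # keeps intensity spikes
      labels, n_spots = ndimage.label(tophat > 20)
      print("spots counted:", n_spots)                 # expected: 3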

  9. Comparisons of hybrid radiosity-diffusion model and diffusion equation for bioluminescence tomography in cavity cancer detection

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie

    2012-06-01

    Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because they neglect the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) to describe light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm is proposed for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but has been limited to simple and regular geometries because of the complexity of coupling the boundary between the scattering and void regions. In the proposed algorithm, HRDM was first applied to three-dimensional complicated and irregular geometries and then employed as the forward light transport model to describe bioluminescent light propagation in tissues. Combining HRDM with a sparse reconstruction strategy, cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion-equation-based reconstruction algorithm, the necessity and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.

  11. GST-PRIME: an algorithm for genome-wide primer design.

    PubMed

    Leister, Dario; Varotto, Claudio

    2007-01-01

    The profiling of mRNA expression based on DNA arrays has become a powerful tool to study genome-wide transcription of genes in a number of organisms. GST-PRIME is a software package created to facilitate large-scale primer design for the amplification of probes to be immobilized on arrays for transcriptome analyses, although it can also be applied in low-throughput approaches. GST-PRIME allows highly efficient, direct amplification of gene-sequence tags (GSTs) from genomic DNA (gDNA), starting from annotated genome or transcript sequences. GST-PRIME provides a user-friendly platform for automatic primer design, and despite the relative simplicity of the algorithm, experimental tests in the model plant species Arabidopsis thaliana confirmed the reliability of the software. This chapter describes the algorithm used for primer design, its input and output files, and the installation of the standalone package and its use.

  12. High-dimensional cluster analysis with the Masked EM Algorithm

    PubMed Central

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  13. Development of a Two-Wheel Contingency Mode for the MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; O'Donnell, James R., Jr.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on mission to the Cosmic Background Explorer (COBE), and is currently collecting data from its orbit near the second Sun-Earth libration point. Due to limited mass, power, and financial resources, a traditional reliability concept including fully redundant components was not feasible for MAP. Instead, the MAP design employs selective hardware redundancy in tandem with contingency software modes and algorithms to improve the odds of mission success. One direction for such improvement has been the development of a two-wheel backup control strategy. This strategy would allow MAP to position itself for maneuvers and collect science data should one of its three reaction wheels fail. Along with operational considerations, the strategy includes three new control algorithms. These algorithms would use the remaining attitude control actuators (thrusters and two reaction wheels) in ways that achieve control goals while minimizing adverse impacts on the functionality of other subsystems and software.

  14. GPU Accelerated Chemical Similarity Calculation for Compound Library Comparison

    PubMed Central

    Ma, Chao; Wang, Lirong; Xie, Xiang-Qun

    2012-01-01

    Chemical similarity calculation plays an important role in compound library design, virtual screening, and “lead” optimization. In this manuscript, we present a novel GPU-accelerated algorithm for all-vs-all Tanimoto matrix calculation and nearest-neighbor search. By taking advantage of the multi-core GPU architecture and CUDA parallel programming technology, the algorithm is up to 39 times faster than existing commercial software running on CPUs. Because of the utilization of intrinsic GPU instructions, this approach is nearly 10 times faster than an existing GPU-accelerated sparse-vector algorithm when Unity fingerprints are used for the Tanimoto calculation. The GPU program that implements this new method takes about 20 minutes to complete the calculation of Tanimoto coefficients between 32M PubChem compounds and 10K Active Probes compounds, i.e., 324G Tanimoto coefficients, on a 128-CUDA-core GPU. PMID:21692447
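
    The quantity being parallelized is simple: for binary fingerprints a and b, the Tanimoto coefficient is T = |a AND b| / (|a| + |b| - |a AND b|). The sketch below computes an all-vs-all Tanimoto matrix over bit-packed fingerprints with NumPy on the CPU; the paper's contribution is moving the inner popcount loop onto CUDA cores, which is not reproduced here.

```python
import numpy as np

# Popcount lookup for all byte values, so bit-packed fingerprints can be
# intersected with vectorized operations.
POPCNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

def tanimoto_matrix(A, B):
    """All-vs-all Tanimoto coefficients between two fingerprint sets.

    A : (m, n_bytes) uint8, bit-packed fingerprints (e.g. from np.packbits)
    B : (k, n_bytes) uint8
    Returns an (m, k) matrix of T = |a & b| / (|a| + |b| - |a & b|).
    """
    pop_a = POPCNT[A].sum(axis=1)                 # on-bits per row of A
    pop_b = POPCNT[B].sum(axis=1)
    inter = np.empty((A.shape[0], B.shape[0]))
    for i, a in enumerate(A):                     # the paper moves this loop
        inter[i] = POPCNT[np.bitwise_and(a, B)].sum(axis=1)   # onto the GPU
    return inter / (pop_a[:, None] + pop_b[None, :] - inter)

rng = np.random.default_rng(0)
A = np.packbits(rng.integers(0, 2, (4, 1024), dtype=np.uint8), axis=1)
B = np.packbits(rng.integers(0, 2, (3, 1024), dtype=np.uint8), axis=1)
print(tanimoto_matrix(A, B))
```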

  15. A systematic review of validated methods for identifying hypersensitivity reactions other than anaphylaxis (fever, rash, and lymphadenopathy), using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations relevant to hypersensitivity reactions as health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify hypersensitivity reactions and that included validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data-mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with performance decreasing as outcome definitions were expanded. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and to estimate their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Preliminary Analysis of Double Shell Tomography Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V

    2009-01-16

    In this project we collaborated with LLNL scientist Dr. Peer-Timo Bremer while performing our research work on algorithmic solutions for geometric processing, image segmentation and data streaming. The main deliverable has been a 3D viewer for high-resolution imaging data, with particular focus on the presentation of orthogonal slices of the double shell tomography dataset. Basic probing capabilities allow single voxels to be queried, so that the user can study the underlying values in detail and compensate for the intrinsic filtering and imprecision of colormap-based visualization. On the algorithmic front we have studied the possibility of using a non-local means filtering algorithm to achieve noise removal from tomography data. In particular, we have developed a prototype that implements an accelerated version of the algorithm that may be able to take advantage of the multi-resolution sub-sampling of the ViSUS format. We have achieved promising results. Future plans include the full integration of the non-local means algorithm in the ViSUS framework and testing whether the accelerated method will scale properly from 2D images to 3D tomography data.
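
    For reference, here is a naive 2-D non-local means filter: each pixel becomes a weighted average of pixels in a search window, with weights decaying in the squared distance between surrounding patches. Patch size, search radius, and filtering strength h are illustrative defaults; the accelerated, ViSUS-aware 3-D variant described in the report is not reproduced.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Naive O(N * search^2 * patch^2) non-local means for a 2-D float image."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = p[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            wsum, acc = 0.0, 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = p[ny - pr:ny + pr + 1, nx - pr:nx + pr + 1]
                    w = np.exp(-((ref - cand) ** 2).sum() / h**2)
                    wsum += w
                    acc += w * p[ny, nx]
            out[y, x] = acc / wsum                # normalized weighted average
    return out

# Tiny usage example: denoise a noisy horizontal ramp.
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))
noisy = clean + 0.05 * rng.standard_normal((32, 32))
print(np.abs(nlm_denoise(noisy) - clean).mean())
```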

  17. UniPROBE, update 2015: new tools and content for the online database of protein-binding microarray data on protein-DNA interactions.

    PubMed

    Hume, Maxwell A; Barrera, Luis A; Gisselbrecht, Stephen S; Bulyk, Martha L

    2015-01-01

    The Universal PBM Resource for Oligonucleotide Binding Evaluation (UniPROBE) serves as a convenient source of information on published data generated using universal protein-binding microarray (PBM) technology, which provides in vitro data about the relative DNA-binding preferences of transcription factors for all possible sequence variants of a given length k ('k-mers'). The database provides important information about the proteins and displays their DNA-binding specificity data in terms of k-mers, position weight matrices and graphical sequence logos. This update to the database documents the growth of UniPROBE since the last update 4 years ago, and introduces a variety of new features and tools, including a new streamlined pipeline that facilitates data deposition by universal PBM data generators in the research community, a tool that generates putative nonbinding (i.e. negative control) DNA sequences for one or more proteins, and novel motifs obtained by analyzing the PBM data using the BEEML-PBM algorithm for motif inference. The UniPROBE database is available at http://uniprobe.org. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
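
    The position weight matrices mentioned here follow a standard construction. As a generic textbook sketch (not UniPROBE's BEEML-PBM inference), the code below builds a log-odds PWM from aligned binding sites with a pseudocount and a uniform background, then scores a candidate site; the example sites and pseudocount are assumptions.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=0.5):
    """Log-odds PWM: per-column base frequencies (with a pseudocount)
    against a uniform 0.25 background, in log2 units."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)
    for s in sites:
        for j, base in enumerate(s.upper()):
            counts[BASES.index(base), j] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)

def score_site(pwm, site):
    """Sum of per-position log-odds scores for a candidate site."""
    return sum(pwm[BASES.index(b), j] for j, b in enumerate(site.upper()))

pwm = pwm_from_sites(["TGACTCA", "TGAGTCA", "TGACTCA", "TTAGTCA"])
print(score_site(pwm, "TGACTCA"))
```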

  18. Random walks of colloidal probes in viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2014-04-01

    To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that the random walk for the MVM is neither self-similar nor self-affine, even when the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
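
    The core trick, sampling in logarithmic rather than fixed time steps, is easy to illustrate for plain Brownian motion: draw sample times log-uniformly, sort them, and build positions from Gaussian increments with variance 2*D*dt per dimension. The sketch below shows only that idea; the authors' MVM viscoelastic dynamics are not reproduced, and the parameters are assumptions.

```python
import numpy as np

def log_time_brownian(n_steps, t_min, t_max, D=1.0, dim=3, rng=None):
    """Brownian trajectory sampled at times uniform in log time.

    Positions accumulate independent Gaussian increments with variance
    2*D*dt per dimension between consecutive sample times.
    """
    rng = rng or np.random.default_rng(1)
    t = np.sort(10 ** rng.uniform(np.log10(t_min), np.log10(t_max), n_steps))
    dt = np.diff(t, prepend=0.0)                 # particle starts at the origin
    steps = rng.standard_normal((n_steps, dim)) * np.sqrt(2 * D * dt)[:, None]
    return t, np.cumsum(steps, axis=0)

# Twelve decades of time in a single 10^4-point trajectory.
t, pos = log_time_brownian(10_000, 1e-6, 1e6)
# Crude single-trajectory check: E[r^2] = 6*D*t in 3-D, so this is noisy but
# should land in the vicinity of D = 1.
print((pos[-1] ** 2).sum() / (6 * t[-1]))
```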

  19. An integrated approach to piezoactuator positioning in high-speed atomic force microscope imaging

    NASA Astrophysics Data System (ADS)

    Yan, Yan; Wu, Ying; Zou, Qingze; Su, Chanmin

    2008-07-01

    In this paper, an integrated approach to achieving high-speed atomic force microscope (AFM) imaging of large-size samples is proposed, which combines an enhanced inversion-based iterative control technique for the piezotube actuator that performs lateral x-y positioning with a dual-stage piezoactuator for vertical z-axis positioning. High-speed, large-size AFM imaging is challenging because, in high-speed lateral scanning over a large sample area, a large positioning error of the AFM probe relative to the sample can be generated by the adverse effects of the nonlinear hysteresis and the vibrational dynamics of the piezotube actuator. In addition, vertical precision positioning of the AFM probe is even more challenging than the lateral scanning because the desired trajectory (i.e., the sample topography profile) is unknown in general, and the probe positioning is also affected by and sensitive to the probe-sample interaction. The main contribution of this article is the development of an integrated approach that combines an advanced control algorithm with an advanced hardware platform. The proposed approach is demonstrated in experiments by imaging a large (50 μm) calibration sample at a high speed (50 Hz scan rate).
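
    Inversion-based iterative control of the kind named here typically updates the input in the frequency domain as u_{k+1}(jw) = u_k(jw) + rho * (y_d(jw) - y_k(jw)) / G_hat(jw), where G_hat is the modeled actuator response. The sketch below runs that update against a toy second-order resonant plant with deliberate model error; the plant, gains, and the constant rho are assumptions (the paper's "enhanced" variant chooses rho frequency-by-frequency), so this is an illustration of the update law, not the authors' controller.

```python
import numpy as np

def iic_update(u_k, y_d, y_k, G_hat, rho=0.5):
    """One frequency-domain inversion-based iterative control update."""
    U = np.fft.rfft(u_k)
    E = np.fft.rfft(y_d) - np.fft.rfft(y_k)      # tracking error spectrum
    return np.fft.irfft(U + rho * E / G_hat, n=len(u_k))

# Toy plant: second-order resonance, simulated exactly in the frequency domain.
n, fs = 1024, 10_000.0
f = np.fft.rfftfreq(n, 1 / fs)
w, wn, zeta = 2j * np.pi * f, 2 * np.pi * 800.0, 0.05
G = wn**2 / (w**2 + 2 * zeta * wn * w + wn**2)   # true plant response
G_hat = G * 1.1                                   # 10% model error

def apply_plant(u):
    return np.fft.irfft(np.fft.rfft(u) * G, n=n)

y_d = np.sin(2 * np.pi * 50.0 * np.arange(n) / fs)   # desired 50 Hz scan
u = np.zeros(n)
for _ in range(20):       # converges as long as |1 - rho * G / G_hat| < 1
    u = iic_update(u, y_d, apply_plant(u), G_hat)
print(np.abs(apply_plant(u) - y_d).max())            # residual tracking error
```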

  20. Evaluation of software sensors for on-line estimation of culture conditions in an Escherichia coli cultivation expressing a recombinant protein.

    PubMed

    Warth, Benedikt; Rajkai, György; Mandenius, Carl-Fredrik

    2010-05-03

    Software sensors for monitoring and on-line estimation of critical bioprocess variables have mainly been used with standard bioreactor sensors, such as electrodes and gas analyzers, where algorithms in the software model generate the desired state variables. In this article we propose that other on-line instruments, such as NIR probes and on-line HPLC, should be used to make software sensors more reliable and flexible. Five software-sensor architectures were compared and evaluated: (1) biomass concentration from an on-line NIR probe, (2) biomass concentration from titrant addition, (3) specific growth rate from titrant addition, (4) specific growth rate from the NIR probe, and (5) specific substrate-uptake rate and by-product formation rate from on-line HPLC and NIR probe signals. The software sensors were demonstrated on an Escherichia coli cultivation expressing a recombinant protein, green fluorescent protein (GFP), but the results could be extrapolated to other production organisms and product proteins. We conclude that well-maintained on-line instrumentation (hardware sensors) can increase the potential of software sensors. This would also strongly support the aims of process analytical technology and quality-by-design concepts. 2010 Elsevier B.V. All rights reserved.
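
    As a minimal sketch of architecture (4) above, the specific growth rate mu(t) = d ln(X)/dt can be estimated from a noisy on-line biomass signal by smoothing ln(X) and differentiating numerically. The moving-average window and the simulated signal are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def specific_growth_rate(t, biomass, window=9):
    """Estimate mu(t) = d ln(X)/dt from a noisy on-line biomass signal.

    Smooths ln(X) with a centered moving average, then differentiates with
    np.gradient; the first/last window//2 points suffer edge effects.
    """
    ln_x = np.log(biomass)
    kernel = np.ones(window) / window
    ln_s = np.convolve(ln_x, kernel, mode="same")   # simple moving average
    return np.gradient(ln_s, t)

# Toy exponential growth at mu = 0.4 1/h with 2% multiplicative sensor noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 201)                     # hours
x = 0.1 * np.exp(0.4 * t) * (1 + 0.02 * rng.standard_normal(t.size))
mu = specific_growth_rate(t, x)
print(mu[50:150].mean())                            # should be close to 0.4
```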
