Science.gov

Sample records for algorithm enables accurate

  1. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908
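The normalized-cut step at the heart of iCut can be sketched in a few lines. This is a minimal illustration using a plain Gaussian spatial affinity as the weighting matrix; iCut's actual weights additionally encode intervening- and concave-contour cues, which are omitted here.

```python
import numpy as np

def normalized_cut_split(points, sigma=1.0):
    """Two-way normalized cut of a point cloud (spectral relaxation).

    Builds a Gaussian affinity matrix W, forms the symmetric normalized
    Laplacian, and thresholds the Fiedler vector at its median to split
    the points into two groups (e.g. two touching cells).
    """
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # affinity (weighting) matrix
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L_sym = np.eye(len(pts)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)            # eigenvalues ascending
    fiedler = d_inv_sqrt * vecs[:, 1]             # generalized 2nd eigenvector
    return fiedler > np.median(fiedler)

# two "touching" blobs standing in for two cells
rng = np.random.default_rng(0)
a = rng.normal([0.0, 0.0], 0.3, size=(20, 2))
b = rng.normal([2.0, 0.0], 0.3, size=(20, 2))
labels = normalized_cut_split(np.vstack([a, b]))
```

With well-separated blobs the median-thresholded Fiedler vector recovers the two groups exactly.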

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
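The flavor of high-order propagation can be demonstrated with a generic pairing of fourth-order central differences in space and classical RK4 in time for the advection equation u_t + c u_x = 0 on a periodic grid. Note this standard pairing is only an illustration; the paper's schemes are single-step methods with matched space-time order, which differ in construction.

```python
import numpy as np

def dudx4(u, dx):
    """Fourth-order central first derivative with periodic boundaries."""
    return (np.roll(u, 2) - 8 * np.roll(u, 1)
            + 8 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx)

def rk4_step(u, c, dx, dt):
    """One classical RK4 step for u_t = -c * u_x."""
    rhs = lambda v: -c * dudx4(v, dx)
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

n, c = 64, 1.0
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx                       # well inside the stability limit
u = np.sin(x)
steps = 500
for _ in range(steps):
    u = rk4_step(u, c, dx, dt)
# compare against the exact translated solution
err = np.abs(u - np.sin(x - c * steps * dt)).max()
```

After roughly 1.5 periods of propagation the error stays well below 10^-3, dominated by the fourth-order dispersion error of the spatial stencil.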

  3. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be possible. Sample calculations of a scalar advection equation and of the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than with the single-grid algorithm are possible.

  4. Genomic-enabled prediction with classification algorithms.

    PubMed

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-06-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait-environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RKHS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets
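Scoring a regression as a top-α selection decision with Cohen's kappa can be sketched as follows. This is a generic sketch: both vectors are dichotomised at their own (1-α) quantile, which mirrors the study's setup in spirit but may not be its exact thresholding rule.

```python
import numpy as np

def kappa_top_alpha(y_true, y_pred, alpha=0.15):
    """Cohen's kappa for selecting the top-alpha fraction.

    Dichotomises observed and predicted values at their (1-alpha)
    quantiles, then computes kappa = (p_o - p_e) / (1 - p_e), where
    p_o is observed agreement and p_e is chance agreement.
    """
    a = y_true > np.quantile(y_true, 1 - alpha)   # truly selected
    b = y_pred > np.quantile(y_pred, 1 - alpha)   # predicted selected
    po = np.mean(a == b)                          # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (po - pe) / (1 - pe)
```

A perfect predictor yields κ = 1, while a predictor that ranks individuals in reverse yields a negative κ.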

  5. Genomic-enabled prediction with classification algorithms

    PubMed Central

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-01-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait–environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RKHS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets

  6. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
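The grid-ID idea can be sketched as follows: the anonymizer sees only per-cell user counts, never raw coordinates, and grows a block of cells around the requester's cell until at least K users are covered. This is a minimal sketch of the nonexposure principle, not the paper's exact expansion order or ASR shaping.

```python
from collections import Counter

def cloak(grid_counts, cell, k, max_r=64):
    """Grow a square block of grid cells around `cell` until it covers
    at least k reported users; the block is returned as the anonymous
    spatial region (ASR). Operates purely on grid-cell IDs and counts.
    """
    cx, cy = cell
    for r in range(max_r + 1):
        block = [(x, y)
                 for x in range(cx - r, cx + r + 1)
                 for y in range(cy - r, cy + r + 1)]
        if sum(grid_counts.get(c, 0) for c in block) >= k:
            return block
    raise ValueError("fewer than k users within max_r cells")

# per-cell counts as reported by users (cell IDs only, no coordinates)
counts = Counter({(0, 0): 1, (1, 0): 2, (0, 1): 1, (3, 3): 5})
asr = cloak(counts, (0, 0), k=4)
```

For the requester in cell (0, 0), one ring of expansion already covers 4 users, so the ASR is the 3×3 block of cells around it.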

  7. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060

  8. Cas9-chromatin binding information enables more accurate CRISPR off-target prediction

    PubMed Central

    Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar

    2015-01-01

    The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remain significant goals. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that use DNA sequence information alone, CROP-IT integrates whole-genome-level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding and cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets that enables improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770
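The general idea of combining sequence similarity with chromatin state can be illustrated with a toy scorer. This is purely illustrative: CROP-IT's actual model is trained on genome-wide Cas9 binding/cleavage data and 125-cell-type chromatin states, and the weights and functional form below are invented for the sketch.

```python
def off_target_score(guide, site, chromatin_open, w_seq=1.0, w_chrom=0.5):
    """Toy off-target score: fewer guide/site mismatches and open
    chromatin both raise the score. Weights w_seq and w_chrom are
    arbitrary illustration values, not CROP-IT's trained parameters.
    """
    assert len(guide) == len(site)
    matches = sum(g == s for g, s in zip(guide, site))
    seq_score = matches / len(guide)              # fraction of matching bases
    return w_seq * seq_score + (w_chrom if chromatin_open else 0.0)
```

Under this scheme a perfectly matching site in open chromatin outranks both the same site in closed chromatin and a mismatched site in open chromatin, which is the qualitative behavior the abstract describes.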

  9. A spectrally accurate algorithm for electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Ganesh, M.; Hawkins, S.

    2006-09-01

    In this work we develop, implement and analyze a high-order spectrally accurate algorithm for computation of the echo area and the monostatic and bistatic radar cross-section (RCS) of a three-dimensional perfectly conducting obstacle, through simulation of the time-harmonic electromagnetic waves scattered by the conductor. Our scheme is based on a modified boundary integral formulation (of the Maxwell equations) that is tolerant of basis functions that are not tangential on the conductor surface. We test our algorithm with extensive computational experiments using a variety of three-dimensional perfect conductors described in spherical coordinates, including benchmark radar targets such as the metallic NASA almond and ogive. The monostatic RCS measurements for non-convex conductors require hundreds of incident waves (boundary conditions). We demonstrate that the monostatic RCS of small (to medium) sized conductors can be computed using over one thousand incident waves within a few minutes (to a few hours) of CPU time. We compare our results with those obtained using the industry-standard three-dimensional method-of-moments electromagnetic codes CARLOS, CICERO, FE-IE, FERM, and FISC. Finally, we prove the spectrally accurate convergence of our algorithm for computing the surface current, far-field, and RCS values of a class of conductors described globally in spherical coordinates.

  10. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris; it exploits the characteristics of the evaluation function. Experiments show the proposed algorithm improves SNR by 8.32 dB and enables estimation of orbital parameters accurately enough to allow re-tracking with a single radar.
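The SNR benefit of coherent integration is easy to demonstrate numerically: averaging N phase-aligned pulses leaves the signal amplitude unchanged while the noise power drops by a factor of N, giving a 10·log10(N) dB gain. The hard part, which the paper addresses, is estimating the unknown target motion so that the pulses actually stay phase-aligned; the sketch below simply assumes perfect alignment.

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 16, 1000
signal = 1.0                                        # unit-amplitude echo
pulses = signal + rng.normal(0.0, 2.0, (N, samples))  # input SNR = 1/4

integrated = pulses.mean(axis=0)                    # coherent average of N pulses
snr_in = signal ** 2 / 2.0 ** 2
snr_out = signal ** 2 / integrated.var()            # noise power after averaging
gain_db = 10 * np.log10(snr_out / snr_in)           # expect ~10*log10(16) ≈ 12 dB
```

With N = 16 pulses the measured gain comes out near the theoretical 12 dB; the paper's reported 8.32 dB reflects the realistic case where motion compensation is imperfect.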

  11. Depth-fused multi-focal plane displays enable accurate depth perception

    NASA Astrophysics Data System (ADS)

    Hua, Hong; Liu, Sheng

    2010-11-01

    Many different approaches to three-dimensional (3D) displays have been explored, most of which are stereoscopic. Stereoscopic displays create depth perception by presenting two perspective images of a 3D scene, one for each eye, from two slightly different viewing positions. They have been the dominant technology adopted for many applications, spanning the fields of flight simulation, scientific visualization, medicine, engineering design, education and training, and entertainment systems. Existing stereoscopic displays, however, lack the ability to produce accurate focus cues, which has been suggested to contribute to visual artifacts such as visual fatigue. This paper reviews recent work on vari-focal and multi-focal plane display technologies that are capable of rendering nearly correct focus cues for 3D objects; these technologies hold great promise for enabling more accurate depth perception in 3D tasks.
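The depth-fused rendering rule behind multi-focal plane displays can be sketched in a few lines: a point between two focal planes is drawn on both, with its luminance split between them. The linear split below is the standard depth-fused-3D textbook rule (typically applied in dioptric depth); the paper's displays may use a refined weighting.

```python
def depth_fused_weights(d, d_near, d_far):
    """Luminance split between two focal planes for a point at depth d.

    Linear interpolation between the planes, clamped outside the range.
    Returns (w_near, w_far); the two weights always sum to 1, so total
    luminance is preserved while the fused image appears at depth d.
    """
    w_far = (d - d_near) / (d_far - d_near)
    w_far = min(max(w_far, 0.0), 1.0)
    return 1.0 - w_far, w_far
```

A point midway between the planes is rendered at half luminance on each, while a point on a plane is rendered entirely on that plane.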

  12. Node Handprinting: A Scalable and Accurate Algorithm for Aligning Multiple Biological Networks.

    PubMed

    Radu, Alex; Charleston, Michael

    2015-07-01

    Due to recent advancements in high-throughput sequencing technologies, progressively more protein-protein interactions have been identified for a growing number of species, and the protein-protein interaction networks for these species have been further refined. The increase in the quality and availability of these networks has in turn brought a demand for efficient methods to analyze them. The pairwise alignment of these networks has been moderately investigated, with numerous algorithms available, but there has been very little progress in the field of multiple network alignment. Multiple alignment of networks from different organisms is well suited to finding abnormally conserved or disparate subnetworks. We present a fast and accurate algorithmic approach, Node Handprinting (NH), based on our previous work with Node Fingerprinting, which enables quick and accurate alignment of multiple networks. We also propose two new metrics for the analysis of multiple alignments, as the current metrics are not as sophisticated as their pairwise-alignment counterparts. To assess the performance of NH, we use previously aligned datasets as well as protein interaction networks generated from the public database BioGRID. Our results indicate that NH compares favorably with current methodologies and is the only algorithm capable of performing the more complex alignments. PMID:25695597

  13. An accurate product SVD (singular value decomposition) algorithm

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.; Ewerbring, M.; Van Dooren, P.

    1990-01-01

    In this paper, we propose a new algorithm for computing a singular value decomposition of a product of three matrices. We show that our algorithm is numerically desirable in that all relevant residual elements will be numerically small. 12 refs., 1 tab.
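For contrast, the naive baseline is to form the product explicitly and decompose it. The sketch below shows that baseline; the cited algorithm instead works implicitly on the three factors so that small singular values of a graded product are not lost to rounding when the product is formed (that implicit scheme is not reproduced here).

```python
import numpy as np

# Naive product SVD: form P = A @ B @ C, then decompose. Adequate for
# well-conditioned factors; the accurate algorithm avoids forming P.
rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
P = A @ B @ C
U, s, Vt = np.linalg.svd(P)      # singular values in descending order
reconstruction = U @ np.diag(s) @ Vt
```

For random well-conditioned 4×4 factors the reconstruction matches the product to machine precision, which is exactly the regime where the naive approach suffices.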

  14. Enabling accurate gate profile control with inline 3D-AFM

    NASA Astrophysics Data System (ADS)

    Bao, Tianming; Lopez, Andrew; Dawson, Dean

    2009-05-01

    Logic and memory semiconductor device technologies strive to follow the aggressive ITRS roadmap, which calls for increased 3D metrology to meet the demand for tighter process control at the 45nm and 32nm nodes. In particular, gate engineering has advanced to a level where conventional metrology by CD-SEM and optical scatterometry (OCD) faces fundamental limitations without the involvement of a 3D atomic force microscope (3D-AFM, or CD-AFM). This paper reports recent progress in 3D-AFM to address the metrology need to control gate dimensions in MOSFET transistor formation. 3D-AFM measures the gate electrode at post-etch with the lowest measurement uncertainty for critical gate geometry, including linewidth, sidewall profile, sidewall angle (SWA), line width roughness (LWR), and line edge roughness (LER). 3D-AFM enables accurate gate profile control in three types of metrology applications: reference metrology to validate CD-SEM and OCD, inline depth or 3D monitoring, and replacement of TEM for 3D characterization in engineering analysis.

  15. Consistent Multigroup Theory Enabling Accurate Coarse-Group Simulation of Gen IV Reactors

    SciTech Connect

    Rahnema, Farzad; Haghighat, Alireza; Ougouag, Abderrafi

    2013-11-29

    The objective of this proposal is the development of a consistent multi-group theory that accurately accounts for the energy-angle coupling associated with collapsed-group cross sections. This will allow for coarse-group transport and diffusion theory calculations that exhibit continuous energy accuracy and implicitly treat cross-section resonances. This is of particular importance when considering the highly heterogeneous and optically thin reactor designs within the Next Generation Nuclear Plant (NGNP) framework. In such reactors, ignoring the influence of anisotropy in the angular flux on the collapsed cross section, especially at the interface between core and reflector near which control rods are located, results in inaccurate estimates of the rod worth, a serious safety concern. The scope of this project will include the development and verification of a new multi-group theory enabling high-fidelity transport and diffusion calculations in coarse groups, as well as a methodology for the implementation of this method in existing codes. This will allow for a higher accuracy solution of reactor problems while using fewer groups and will reduce the computational expense. The proposed research represents a fundamental advancement in the understanding and improvement of multi-group theory for reactor analysis.

  16. Accelerating universal Kriging interpolation algorithm using CUDA-enabled GPU

    NASA Astrophysics Data System (ADS)

    Cheng, Tangpei

    2013-04-01

    Kriging algorithms are a group of important interpolation methods that are very useful in many geological applications. However, implementations based on traditional general-purpose processors can be computationally expensive, especially as the problem scale expands. Inspired by the current trend in graphics processing technology, we propose an efficient parallel scheme to accelerate the universal Kriging algorithm on the NVIDIA CUDA platform. High-performance mathematical functions have been introduced to calculate the compute-intensive steps in the Kriging algorithm, such as matrix-vector and matrix-matrix multiplication. To further optimize performance, we reduced the memory transfer overhead by restructuring the time-consuming loops specifically for execution on the GPU. In the numerical experiment, we compared the performance of different multi-core CPU and GPU implementations interpolating a geological site. The improved CUDA implementation shows a nearly 18× speedup with respect to the sequential program, is 6.32 times faster than the OpenMP-based version running on an Intel Xeon E5320 quad-core CPU, and scales well with the size of the system.
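The sequential kernel being accelerated can be sketched as an ordinary-kriging solve at a single query point; the dense linear algebra here (covariance matrix assembly and solve) is exactly the part that maps well to GPU BLAS. Note this sketch omits the polynomial drift terms that universal kriging adds to the system, and the Gaussian covariance model is an arbitrary illustration choice.

```python
import numpy as np

def ordinary_krige(xy, z, query, cov):
    """Ordinary kriging estimate at one query point.

    Solves the kriging system [K 1; 1' 0] [w; mu] = [k; 1], where K is
    the sample covariance matrix and the extra row enforces that the
    weights sum to 1 (unbiasedness). Returns w @ z.
    """
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, :n] = K[:n, n] = 1.0            # unbiasedness constraint
    K[n, n] = 0.0
    rhs = np.append(cov(np.linalg.norm(xy - query, axis=-1)), 1.0)
    w = np.linalg.solve(K, rhs)[:n]      # kriging weights
    return w @ z

# four corner samples of the plane z = x + y, estimated at the center
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])
cov = lambda h: np.exp(-(h / 2.0) ** 2)  # Gaussian covariance model
est = ordinary_krige(xy, z, np.array([0.5, 0.5]), cov)
```

By symmetry all four weights come out equal, so the center estimate is the sample mean, 1.0.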

  17. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and a high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
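The channel-mixing principle is simple to model in software: attenuating the blue channel by a small weight and adding it to red lets the 8-bit blue value subdivide each red step, so the combined output has far more than 256 distinct levels. The weight 1/127 below is an illustrative round number; in the real device the effective weight is fixed by the resistor network.

```python
def combined_luminance(red, blue, w=1.0 / 127.0):
    """Model of VideoSwitcher-style channel mixing: the blue channel
    (0-255) is scaled by w and added to the red channel (0-255), so
    each 1-step change in blue moves the output by only w.
    """
    return red + w * blue
```

With w = 1/127, a single blue step changes the output by 1/127 of a red step, giving roughly log2(256 · 127) ≈ 15 bits of monochrome addressability before accounting for collisions between (red, blue) pairs.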

  18. A cooperative positioning algorithm for DSRC enabled vehicular networks

    NASA Astrophysics Data System (ADS)

    Efatmaneshnik, M.; Kealy, A.; Alam, N.; Dempster, A. G.

    2011-12-01

    Many of the safety-related applications that can be facilitated by Dedicated Short Range Communications (DSRC), such as vehicle proximity warnings, automated braking (e.g. at level crossings), speed advisories, pedestrian alerts etc., rely on a robust vehicle positioning capability such as that provided by a Global Navigation Satellite System (GNSS). Vehicles in remote areas, entering tunnels, in high-rise areas or in any high-multipath/weak-signal environment will challenge the integrity of GNSS position solutions, and ultimately the safety applications they underpin. To address this challenge, this paper presents an innovative application of Cooperative Positioning (CP) techniques within vehicular networks. CP refers to any method of integrating measurements from different positioning systems and sensors in order to improve the overall quality (accuracy and reliability) of the final position solution. This paper investigates the potential of the DSRC infrastructure itself to provide an inter-vehicular ranging signal that can be used as a measurement within the CP algorithm. Time-based techniques of ranging are introduced, and bandwidth requirements are investigated and presented. The robustness of the CP algorithm to inter-vehicle connection failure as well as GNSS dropouts is also demonstrated using simulation studies. Finally, the performance of the constrained Kalman filter used to integrate GNSS measurements with DSRC-derived range estimates within a typical VANET is described and evaluated.
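The fusion step can be sketched as a single Kalman measurement update in which the measurement vector stacks a GNSS fix with a DSRC-derived inter-vehicle range. The 1-D two-vehicle setup and all noise values below are invented for illustration; the paper's filter is a constrained Kalman filter over full vehicle states.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One linear Kalman measurement update: x, P are the prior state
    and covariance; z = H x + noise with noise covariance R."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# two vehicles on a line, joint state = [p1, p2]
x = np.array([0.0, 10.0])
P = np.eye(2) * 25.0                      # poor prior (e.g. GNSS dropout)
H = np.array([[1.0, 0.0],                 # GNSS fix on vehicle 1
              [-1.0, 1.0]])               # DSRC range measures p2 - p1
R = np.diag([4.0, 1.0])                   # GNSS noisier than the range
z = np.array([1.0, 8.5])                  # observed fix and range
x, P = kf_update(x, P, z, H, R)
```

Even though vehicle 2 has no direct GNSS measurement here, its position uncertainty shrinks because the inter-vehicle range couples it to vehicle 1, which is the essence of cooperative positioning.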

  19. SAR data exploitation: computational technology enabling SAR ATR algorithm development

    NASA Astrophysics Data System (ADS)

    Majumder, Uttam K.; Casteel, Curtis H., Jr.; Buxa, Peter; Minardi, Michael J.; Zelnio, Edmund G.; Nehrbass, John W.

    2007-04-01

    A fundamental issue with synthetic aperture radar (SAR) application development is data processing and exploitation in real time or near real time. The power of high-performance computing (HPC) clusters, FPGAs, and the IBM Cell processor presents new algorithm development possibilities that have not been fully leveraged. In this paper, we illustrate SAR data exploitation capabilities that were impractical over the last decade due to computing limitations. We envision that SAR imagery encompassing city-size coverage at extremely high levels of fidelity could be processed in near real time using the above technologies to empower the warfighter with access to critical information for the war on terror, homeland defense, and urban warfare.

  20. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues.

    PubMed

    Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan

    2015-02-23

    We propose an algorithm based on fully computed holographic stereograms for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point-source algorithm and the holographic-stereogram-based algorithm to reconstruct three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in CGH generation to enhance image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that our proposed algorithm can produce quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429
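The point-source half of such a hybrid method can be sketched directly: superpose a spherical wave from each scene point on the hologram plane and keep the phase. The stereogram decomposition, occlusion handling, and rendering pipeline of the paper are omitted; grid size and wavelength below are arbitrary illustration values.

```python
import numpy as np

def point_source_hologram(points, amps, xx, yy, wavelength):
    """Phase-only CGH from point sources (sketch).

    Each scene point (px, py, pz) contributes a spherical wave
    a * exp(i k r) / r sampled on the hologram plane z = 0; the
    function returns the phase of the superposed field.
    """
    k = 2 * np.pi / wavelength
    field = np.zeros(xx.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((xx - px) ** 2 + (yy - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r
    return np.angle(field)

# 64x64 hologram, 2 mm aperture, one point source 10 cm behind it
x = np.linspace(-1e-3, 1e-3, 64)
xx, yy = np.meshgrid(x, x)
phase = point_source_hologram([(0.0, 0.0, 0.1)], [1.0], xx, yy, 633e-9)
```

For a single on-axis point this produces the familiar Fresnel zone pattern; real CGHs sum thousands of such points, which is why the paper's hybrid stereogram formulation matters for speed.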

  21. An accurate and efficient algorithm for peptide and PTM identification by tandem mass spectrometry.

    PubMed

    Ning, Kang; Ng, Hoong Kee; Leong, Hon Wai

    2007-01-01

    Peptide identification by tandem mass spectrometry (MS/MS) is one of the most important problems in proteomics. Recent advances in high-throughput MS/MS experiments produce huge amounts of spectra. Unfortunately, identification of these spectra is relatively slow, and current algorithms are not highly accurate in the presence of noise and post-translational modifications (PTMs). In this paper, we strive to achieve high accuracy and efficiency for the peptide identification problem, with particular attention to the identification of peptides with PTMs. This paper expands our previous work on PepSOM with the introduction of two accurate modified scoring functions: Sλ for peptide identification and Sλ* for identification of peptides with PTMs. Experiments showed that our algorithm is both fast and accurate for peptide identification. Experiments on spectra with simulated and real PTMs confirmed that our algorithm is accurate for identifying PTMs. PMID:18546510
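The baseline that scoring functions like these refine is the shared-peak count between an observed spectrum and the theoretical spectrum of a candidate peptide. The sketch below shows that baseline only; it is not the paper's Sλ or Sλ* formula, and the 0.5 Da tolerance is an arbitrary illustration value.

```python
def shared_peak_score(observed, theoretical, tol=0.5):
    """Fraction of theoretical fragment masses matched by some observed
    peak within a mass tolerance — the simplest MS/MS match score.
    """
    hits = sum(1 for m in theoretical
               if any(abs(m - p) <= tol for p in observed))
    return hits / len(theoretical)
```

A PTM shifts fragment masses by a fixed delta, which is why PTM-aware scores must consider shifted matches rather than only exact ones.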

  22. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT-optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  23. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments.

    PubMed

    Eter, Wael A; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT-optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  24. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  5. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto-acoustic tomography.

  6. Retention projection enables accurate calculation of liquid chromatographic retention times across labs and methods.

    PubMed

    Abate-Pella, Daniel; Freund, Dana M; Ma, Yan; Simón-Manso, Yamil; Hollender, Juliane; Broeckling, Corey D; Huhman, David V; Krokhin, Oleg V; Stoll, Dwight R; Hegeman, Adrian D; Kind, Tobias; Fiehn, Oliver; Schymanski, Emma L; Prenni, Jessica E; Sumner, Lloyd W; Boswell, Paul G

    2015-09-18

    Identification of small molecules by liquid chromatography-mass spectrometry (LC-MS) can be greatly improved if the chromatographic retention information is used along with mass spectral information to narrow down the lists of candidates. Linear retention indexing remains the standard for sharing retention data across labs, but it is unreliable because it cannot properly account for differences in the experimental conditions used by various labs, even when the differences are relatively small and unintentional. On the other hand, an approach called "retention projection" properly accounts for many intentional differences in experimental conditions, and when combined with a "back-calculation" methodology described recently, it also accounts for unintentional differences. In this study, the accuracy of this methodology is compared with linear retention indexing across eight different labs. When each lab ran a test mixture under a range of multi-segment gradients and flow rates they selected independently, retention projections were on average 22-fold more accurate for uncharged compounds because they properly accounted for these intentional differences, which were more pronounced in steep gradients. When each lab ran the test mixture under nominally the same conditions, which is the ideal situation to reproduce linear retention indices, retention projections were still on average 2-fold more accurate because they properly accounted for many unintentional differences between the LC systems. To the best of our knowledge, this is the most successful study to date aiming to calculate (or even just to reproduce) LC gradient retention across labs, and it is the only study in which retention was reliably calculated under various multi-segment gradients and flow rates chosen independently by labs. PMID:26292625
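
    Linear retention indexing, the baseline the study compares against, assigns a compound an index by linear interpolation between the two standards whose retention times bracket it. A sketch of that calculation; the standard indices and retention times below are hypothetical, not from the paper:

```python
def linear_retention_index(t_x, standards):
    """Linear retention index of a compound with retention time t_x,
    interpolated between the two bracketing index standards.
    standards: list of (index_value, retention_time) pairs sorted by time."""
    for (i_lo, t_lo), (i_hi, t_hi) in zip(standards, standards[1:]):
        if t_lo <= t_x <= t_hi:
            return i_lo + (i_hi - i_lo) * (t_x - t_lo) / (t_hi - t_lo)
    raise ValueError("t_x lies outside the range of the standards")

# hypothetical standards at indices 100/200/300; compound elutes at 5.0 min
ri = linear_retention_index(5.0, [(100, 1.0), (200, 4.0), (300, 6.0)])
```

The study's point is that this interpolation breaks down when gradient shape or flow rate differs between labs, which retention projection models explicitly.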

  7. Multiplex preamplification PCR and microsatellite validation enables accurate single nucleotide polymorphism genotyping of historical fish scales.

    PubMed

    Smith, Matt J; Pascal, Carita E; Grauvogel, Zac; Habicht, Christopher; Seeb, James E; Seeb, Lisa W

    2011-03-01

    Incorporating historical tissues into the study of ecological, conservation and management questions can broaden the scope of population genetic research by enhancing our understanding of evolutionary processes and anthropogenic influences on natural populations. Genotyping historical and low-quality samples has been plagued by challenges associated with low amounts of template DNA and the potential for pre-existing DNA contamination among samples. We describe a two-step process designed to (i) accurately genotype large numbers of historical low-quality scale samples in a high-throughput format and (ii) screen samples for pre-existing DNA contamination. First, we describe how an efficient multiplex preamplification PCR of 45 single nucleotide polymorphisms (SNPs) can generate highly accurate genotypes with low failure and error rates in subsequent SNP genotyping reactions of individual historical scales from sockeye salmon (Oncorhynchus nerka). Second, we demonstrate how the method can be modified for the amplification of microsatellite loci to detect pre-existing DNA contamination. A total of 760 individual historical scale and 182 contemporary fin clip samples were genotyped and screened for contamination. Genotyping failure and error rates were exceedingly low and similar for both historical and contemporary samples. Pre-existing contamination in 21% of the historical samples was successfully identified by screening the amplified microsatellite loci. The advantages of automation, low failure and error rates, and ability to multiplex both the preamplification and subsequent genotyping reactions combine to make the protocol ideally suited for efficiently genotyping large numbers of potentially contaminated low-quality sources of DNA. PMID:21429180

  8. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, the performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.
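
    The multi-time-step idea, a fine time step where accuracy matters and a coarse one elsewhere with interpolation at the subdomain interface, can be illustrated on a toy pair of coupled ODEs rather than the peridynamic equations themselves. All names and parameters below are invented for illustration:

```python
def subcycle(u, v, k, dt, m, steps):
    """Advance u with m fine steps per coarse step of v (forward Euler
    everywhere); the interface value of v seen by u is linearly
    interpolated across each coarse step."""
    for _ in range(steps):
        v_old, v_new = v, v + dt * k * (u - v)   # coarse subdomain
        for j in range(m):                       # fine subdomain, step dt/m
            frac = (j + 0.5) / m
            v_mid = (1.0 - frac) * v_old + frac * v_new
            u += (dt / m) * k * (v_mid - u)
        v = v_new
    return u, v

# two coupled states relaxing toward each other
u, v = subcycle(1.0, 0.0, k=1.0, dt=0.1, m=4, steps=50)
```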

  9. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
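
    The claimed 16-fold reduction in grid points is what second-order accuracy buys on a 2D contact patch: halving the spacing cuts the error by roughly four rather than two. A generic illustration of that convergence rate using quadrature rules (not FASTSIM itself):

```python
import math

def integrate(f, a, b, n, rule):
    """Composite quadrature on [a, b] with n cells: 'left' is first-order
    accurate in the cell width h, 'midpoint' is second-order accurate."""
    h = (b - a) / n
    offset = {"left": 0.0, "midpoint": 0.5}[rule]
    return h * sum(f(a + (i + offset) * h) for i in range(n))

exact = math.e - 1.0
e_coarse = abs(integrate(math.exp, 0.0, 1.0, 20, "midpoint") - exact)
e_fine = abs(integrate(math.exp, 0.0, 1.0, 40, "midpoint") - exact)
ratio = e_coarse / e_fine   # ~4 for a second-order rule when h is halved
```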

  10. Optical coherence tomography enables accurate measurement of equine cartilage thickness for determination of speed of sound.

    PubMed

    Puhakka, Pia H; Te Moller, Nikae C R; Tanska, Petri; Saarakkala, Simo; Tiitu, Virpi; Korhonen, Rami K; Brommer, Harold; Virén, Tuomas; Jurvelin, Jukka S; Töyräs, Juha

    2016-08-01

    Background and purpose - Arthroscopic estimation of articular cartilage thickness is important for scoring of lesion severity, and measurement of cartilage speed of sound (SOS)-a sensitive index of changes in cartilage composition. We investigated the accuracy of optical coherence tomography (OCT) in measurements of cartilage thickness and determined SOS by combining OCT thickness and ultrasound (US) time-of-flight (TOF) measurements. Material and methods - Cartilage thickness measurements from OCT and microscopy images of 94 equine osteochondral samples were compared. Then, SOS in cartilage was determined using simultaneous OCT thickness and US TOF measurements. SOS was then compared with the compositional, structural, and mechanical properties of cartilage. Results - Measurements of non-calcified cartilage thickness using OCT and microscopy were significantly correlated (ρ = 0.92; p < 0.001). With calcified cartilage included, the correlation was ρ = 0.85 (p < 0.001). The mean cartilage SOS (1,636 m/s) was in agreement with the literature. However, SOS and the other properties of cartilage lacked any statistically significant correlation. Interpretation - OCT can give an accurate measurement of articular cartilage thickness. Although SOS measurements lacked accuracy in thin equine cartilage, the concept of SOS measurement using OCT appears promising. PMID:27164159
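
    The SOS determination combines the OCT-measured thickness with the ultrasound time-of-flight; in a pulse-echo geometry the pulse crosses the cartilage layer twice. A sketch with hypothetical inputs (the 1.22 µs TOF is invented; only the ~1636 m/s mean SOS comes from the abstract):

```python
def speed_of_sound(thickness_m, time_of_flight_s):
    """Pulse-echo speed of sound: the ultrasound pulse traverses the
    cartilage layer twice, so SOS = 2 * thickness / time-of-flight."""
    return 2.0 * thickness_m / time_of_flight_s

# hypothetical values: 1.0 mm OCT thickness, 1.22 us round-trip US TOF
sos = speed_of_sound(1.0e-3, 1.22e-6)   # ~1639 m/s, near the study's mean
```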

  11. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks.

    PubMed

    Zhang, Xiujun; Zhao, Juan; Hao, Jin-Kao; Zhao, Xing-Ming; Chen, Luonan

    2015-03-11

    Mutual information (MI), a quantity describing the nonlinear dependence between two random variables, has been widely used to construct gene regulatory networks (GRNs). Despite its good performance, MI cannot separate the direct regulations from indirect ones among genes. Although the conditional mutual information (CMI) is able to identify the direct regulations, it generally underestimates the regulation strength, i.e. it may result in false negatives when inferring gene regulations. In this work, to overcome the problems, we propose a novel concept, namely conditional mutual inclusive information (CMI2), to describe the regulations between genes. Furthermore, with CMI2, we develop a new approach, namely CMI2NI (CMI2-based network inference), for reverse-engineering GRNs. In CMI2NI, CMI2 is used to quantify the mutual information between two genes given a third one through calculating the Kullback-Leibler divergence between the postulated distributions of including and excluding the edge between the two genes. The benchmark results on the GRNs from DREAM challenge as well as the SOS DNA repair network in Escherichia coli demonstrate the superior performance of CMI2NI. Specifically, even for gene expression data with small sample size, CMI2NI can not only infer the correct topology of the regulation networks but also accurately quantify the regulation strength between genes. As a case study, CMI2NI was also used to reconstruct cancer-specific GRNs using gene expression data from The Cancer Genome Atlas (TCGA). CMI2NI is freely accessible at http://www.comp-sysbio.org/cmi2ni. PMID:25539927
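
    CMI2 itself is defined through a Kullback-Leibler divergence between postulated distributions, but it builds on the standard conditional mutual information. A sketch of plain discrete CMI estimated from counts (the baseline quantity the paper improves upon, not CMI2):

```python
from collections import Counter
from math import log2

def conditional_mutual_information(xs, ys, zs):
    """Discrete CMI estimated from counts:
    I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log2[ p(z)p(x,y,z) / (p(x,z)p(y,z)) ]."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    return sum(
        (c / n) * log2(pz[z] * c / (pxz[(x, z)] * pyz[(y, z)]))
        for (x, y, z), c in pxyz.items()
    )
```

For X and Y conditionally independent given Z the estimate is 0; for X = Y independent of Z it recovers the full 1 bit of mutual information.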

  12. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation

    PubMed Central

    Ralph, Duncan K.; Matsen, Frederick A.

    2016-01-01

    VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM “factorization” strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373
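
    The paper's argument is that parameter-rich categorical tables for HMM transitions and emissions beat simple parametric forms; the decoding machinery itself is standard. A toy Viterbi over table-driven probabilities, with invented states and numbers (not partis's actual V/D/J model):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state path through an HMM whose transition and
    emission probabilities are plain per-state categorical tables."""
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

# toy 2-state model: 'V' (gene) may hand off to 'N' (non-templated insertion)
states = ("V", "N")
start = {"V": 1.0, "N": 0.0}
trans = {"V": {"V": 0.9, "N": 0.1}, "N": {"V": 0.0, "N": 1.0}}
emit = {"V": {"A": 0.95, "C": 0.05}, "N": {"A": 0.1, "C": 0.9}}
path = viterbi("AAC", states, start, trans, emit)
```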

  13. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks

    PubMed Central

    Zhang, Xiujun; Zhao, Juan; Hao, Jin-Kao; Zhao, Xing-Ming; Chen, Luonan

    2015-01-01

    Mutual information (MI), a quantity describing the nonlinear dependence between two random variables, has been widely used to construct gene regulatory networks (GRNs). Despite its good performance, MI cannot separate the direct regulations from indirect ones among genes. Although the conditional mutual information (CMI) is able to identify the direct regulations, it generally underestimates the regulation strength, i.e. it may result in false negatives when inferring gene regulations. In this work, to overcome the problems, we propose a novel concept, namely conditional mutual inclusive information (CMI2), to describe the regulations between genes. Furthermore, with CMI2, we develop a new approach, namely CMI2NI (CMI2-based network inference), for reverse-engineering GRNs. In CMI2NI, CMI2 is used to quantify the mutual information between two genes given a third one through calculating the Kullback–Leibler divergence between the postulated distributions of including and excluding the edge between the two genes. The benchmark results on the GRNs from DREAM challenge as well as the SOS DNA repair network in Escherichia coli demonstrate the superior performance of CMI2NI. Specifically, even for gene expression data with small sample size, CMI2NI can not only infer the correct topology of the regulation networks but also accurately quantify the regulation strength between genes. As a case study, CMI2NI was also used to reconstruct cancer-specific GRNs using gene expression data from The Cancer Genome Atlas (TCGA). CMI2NI is freely accessible at http://www.comp-sysbio.org/cmi2ni. PMID:25539927

  14. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    PubMed

    Ralph, Duncan K; Matsen, Frederick A

    2016-01-01

    VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373

  15. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, and thus there are no discrepancies between the analysis and visualization results.
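
    Because trilinear hexahedral interpolation is linear along each element edge, the point where a contour level crosses an edge follows from inverse linear interpolation of the nodal values, which is why the plotted contour matches the element interpolation exactly. A minimal sketch (function name assumed):

```python
def iso_crossing(p0, p1, f0, f1, level):
    """Point where a contour level crosses the element edge p0-p1, given
    nodal field values f0 and f1 (the field varies linearly along an edge
    under trilinear hexahedral interpolation)."""
    t = (level - f0) / (f1 - f0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

pt = iso_crossing((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), f0=0.0, f1=4.0, level=1.0)
```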

  17. A block-wise approximate parallel implementation for ART algorithm on CUDA-enabled GPU.

    PubMed

    Fan, Zhongyin; Xie, Yaoqin

    2015-01-01

    Computed tomography (CT) has been widely used to acquire volumetric anatomical information in the diagnosis and treatment of illnesses in many clinics. However, the algebraic reconstruction technique (ART) is still time-consuming when reconstructing from under-sampled and noisy projections. It is the goal of our work to improve a block-wise approximate parallel implementation of the ART algorithm on a CUDA-enabled GPU so as to make the ART algorithm applicable to the clinical environment. The resulting method has several compelling features: (1) the rays are allotted into blocks, making the rays in the same block parallel; (2) the GPU implementation caters to the actual industrial and medical application demand. We test the algorithm on a digital Shepp-Logan phantom, and the results indicate that our method is more efficient than the existing CPU implementation. The high computation efficiency achieved in our algorithm makes it possible for clinicians to obtain real-time 3D images. PMID:26405857
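
    ART is the Kaczmarz method: each ray equation defines a hyperplane, and the image estimate is projected onto the hyperplanes in turn (the paper parallelizes blocks of such rays on the GPU). A plain sequential sketch on a toy two-pixel system:

```python
def art(rows, b, n_sweeps=50, relax=1.0):
    """ART (Kaczmarz): sweep over the ray equations <a_i, x> = b_i,
    projecting the current image estimate onto each hyperplane in turn."""
    x = [0.0] * len(rows[0])
    for _ in range(n_sweeps):
        for a, bi in zip(rows, b):
            resid = bi - sum(ai * xi for ai, xi in zip(a, x))
            scale = relax * resid / sum(ai * ai for ai in a)
            x = [xi + scale * ai for ai, xi in zip(a, x)]
    return x

# tiny 2-pixel "image" probed by two rays; exact solution is (2, 1)
x = art([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0])
```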

  18. Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.

    1994-01-01

    Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the Runge-Kutta second order, the lower Runge-Kutta method of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli and Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). However, in general, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy worked more (or comparably) efficiently and accurately than the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
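
    A common way to make an explicit integrator self-adaptive is step doubling: compare one full step with two half steps and shrink or grow the step size based on the difference. A generic RK2 (Heun) sketch of that strategy, not the paper's exact algorithm:

```python
import math

def heun_step(f, t, y, h):
    """One explicit second-order (Heun / RK2) step."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    """Step-doubling adaptivity: compare one full step against two half
    steps; shrink h on failure, grow it when the estimate is very good."""
    while t < t_end:
        h = min(h, t_end - t)
        y_full = heun_step(f, t, y, h)
        y_half = heun_step(f, t + h / 2, heun_step(f, t, y, h / 2), h / 2)
        if abs(y_half - y_full) > tol:
            h *= 0.5               # reject the step and retry
            continue
        t, y = t + h, y_half       # accept the more accurate estimate
        if abs(y_half - y_full) < tol / 4:
            h *= 2.0               # cheap step: grow
    return y

y_end = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)   # y' = -y
```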

  19. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  20. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions, that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  1. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
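
    Under constant conditions, the analytic modal solution that PolyPole-1 starts from is the classical series for diffusion out of a sphere with zero surface concentration. A sketch in dimensionless time tau = D*t/a^2 (the truncation length is an assumption):

```python
from math import exp, pi

def fractional_release(tau, n_terms=200):
    """Fractional release from a spherical grain under constant conditions
    (zero surface concentration), as a function of tau = D*t/a^2:
        f(tau) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 tau) / n^2
    """
    s = sum(exp(-n * n * pi * pi * tau) / (n * n)
            for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / (pi * pi)) * s
```

PolyPole-1's contribution is the polynomial correction that extends this constant-conditions solution to time-varying conditions.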

  2. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    NASA Astrophysics Data System (ADS)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

    A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm has been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently as compared to the other interface reconstruction methods available in literature. Quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat, the rate of convergence becomes linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
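
    The C0/C1 matching between neighbouring parabolic segments indeed admits simple analytical expressions: given the value and slope of one parabola at a cell boundary, the neighbouring parabola's lower-order coefficients are fully determined. A sketch (the coefficient convention is an assumption, not the paper's notation):

```python
def join_c1(p, x0, a_q):
    """Given a quadratic p = (a, b, c), meaning a*x^2 + b*x + c, construct
    the quadratic with prescribed leading coefficient a_q that meets p at
    x0 with matching value (C0) and matching slope (C1)."""
    value = p[0] * x0 * x0 + p[1] * x0 + p[2]
    slope = 2.0 * p[0] * x0 + p[1]
    b_q = slope - 2.0 * a_q * x0
    c_q = value - a_q * x0 * x0 - b_q * x0
    return (a_q, b_q, c_q)

q = join_c1((1.0, 0.0, 0.0), x0=1.0, a_q=2.0)   # join y = x^2 at x = 1
```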

  3. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, the graphical method, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex process, visible errors, and so on, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with multi-spiral surfaces and the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of coupling point cluster by removing singular points, and the "spatially parallel coupling" principle based on the non-uniform B-spline for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and the surfaces with multi-coupling point clusters are coalesced under the Pro/E environment. The digitally accurate modeling of a spatially parallel coupling body with multi-spiral surfaces is thus realized. Smoothing and fairing are applied to the three-blade end-milling cutter's end section area using the principle of spatially parallel coupling with multi-spiral surfaces, and the alternative entity model is processed in the four-axis machining center after the end mill is disposed. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and

  4. Combinatorial Algorithms to Enable Computational Science and Engineering: The CSCAPES Institute

    SciTech Connect

    Pothen, Alex

    2015-01-16

    This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley) as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.

  5. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
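The real-time performance described above rests on explicit time integration, which avoids solving a global system of equations at every step. A minimal single-degree-of-freedom sketch of central-difference explicit stepping (the function names and the linear-spring example are illustrative, not taken from the paper):

```python
import math

def explicit_central_difference(mass, internal_force, f_ext, u0, v0, dt, steps):
    """Explicit central-difference time stepping, the class of integrator
    paired with Total Lagrangian formulations for real-time FE simulation.
    internal_force(u) returns the (possibly nonlinear) restoring force."""
    a0 = (f_ext - internal_force(u0)) / mass
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0  # second-order start-up value
    u = u0
    history = [u0]
    for _ in range(steps):
        a = (f_ext - internal_force(u)) / mass   # acceleration from force balance
        u_next = 2.0 * u - u_prev + dt * dt * a  # central-difference update
        u_prev, u = u, u_next
        history.append(u)
    return history

# Example: linear spring (k = 4), no external load, released from u = 1.
traj = explicit_central_difference(1.0, lambda u: 4.0 * u, 0.0, 1.0, 0.0, 0.01, 1000)
```

Each step costs only one force evaluation and a few multiplications, which is why (together with low-order elements) such schemes reach haptic update rates; the price is a conditional stability limit on the time step.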

  6. An accurate 3D inspection system using heterodyne multiple frequency phase-shifting algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Zhenzhong; Chee, Oichoo; Asundi, Anand

    This paper presents an accurate 3D inspection system for industrial applications, which uses digital fringe projection technology. The system consists of two CCD cameras and a DLP projector. The mathematical model of the 3D inspection system with 10 distortion parameters for each camera is proposed. A heterodyne multiple frequency phase-shifting algorithm is employed for overcoming the unwrapping problem of phase functions and for a reliable unwrapping procedure. The redundant phase information is used to increase the accuracy of the 3D reconstruction. To demonstrate the effectiveness of our system, a standard sphere was used for testing. The verification tests for the 3D inspection system are based on the VDI 2634 standard. The results show that the proposed system can be used for industrial quality inspection with high measurement precision.

  7. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  8. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    NASA Astrophysics Data System (ADS)

    Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.

    2015-06-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a 'black box' algorithm, where no user background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data can be restored in accordance with its hydrodynamical basis. Its use does not depend on the type of flow or on the types of gaps or noise in the measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, would contaminate the measurements.

  9. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence. PMID:20210600
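The core split-on-demand operation can be sketched in a few lines. This is a simplified illustration under stated assumptions: the function names are ours, the threshold is held fixed (the actual SoD decreases it every EDA iteration), and a degenerate-width guard is added for safety; points are assumed to lie in [lo, hi).

```python
import random

def split_on_demand(points, lo, hi, threshold):
    """One SoD pass (sketch): recursively split [lo, hi) at random positions
    while more than `threshold` search points fall inside an interval, then
    assign integer codes to the resulting non-empty intervals."""
    def split(lo, hi, pts):
        if not pts:
            return []
        if len(pts) <= threshold or hi - lo < 1e-9:  # width guard for safety
            return [(lo, hi)]
        cut = random.uniform(lo, hi)                 # SoD splits at a random position
        left = [p for p in pts if p < cut]
        right = [p for p in pts if p >= cut]
        return split(lo, cut, left) + split(cut, hi, right)

    intervals = split(lo, hi, sorted(points))
    # Discretize: each point is encoded by the integer code of its interval.
    codes = []
    for p in points:
        for code, (a, b) in enumerate(intervals):
            if a <= p < b:
                codes.append(code)
                break
    return intervals, codes

random.seed(0)
pts = [0.1, 0.11, 0.12, 0.8, 0.85]
intervals, codes = split_on_demand(pts, 0.0, 1.0, threshold=2)
```

Note how densely populated regions end up with more, narrower intervals: the discretization adapts its resolution to where the search points are.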

  10. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  11. Final Progress Report: Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes Feasibility Study

    SciTech Connect

    Rawool-Sullivan, Mohini; Bounds, John Alan; Brumby, Steven P.; Prasad, Lakshman; Sullivan, John P.

    2012-04-30

    This is the final report of the project titled 'Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes,' PMIS project number LA10-HUMANID-PD03, summarizing work performed over the FY10 time period. The goal of the work was to demonstrate principles of emulating a human analysis approach towards the data collected using radiation isotope identification devices (RIIDs). Human analysts begin analyzing a spectrum based on features in the spectrum: lines and shapes that are present in a given spectrum. The proposed work was to carry out a feasibility study that will pick out all gamma-ray peaks and other features such as Compton edges, bremsstrahlung, the presence or absence of shielding, the presence of neutrons, and escape peaks. Ultimately, the success of this feasibility study will allow us to collectively explain identified features and form a realistic scenario that produced a given spectrum in the future. We wanted to develop and demonstrate machine learning algorithms that will qualitatively enhance the automated identification capabilities of portable radiological sensors that are currently being used in the field.

  12. MOSAIK: a hash-based algorithm for accurate next-generation sequencing short-read mapping.

    PubMed

    Lee, Wan-Ping; Stromberg, Michael P; Ward, Alistair; Stewart, Chip; Garrison, Erik P; Marth, Gabor T

    2014-01-01

    MOSAIK is a stable, sensitive and open-source program for mapping second and third-generation sequencing reads to a reference genome. Uniquely among current mapping tools, MOSAIK can align reads generated by all the major sequencing technologies, including Illumina, Applied Biosystems SOLiD, Roche 454, Ion Torrent and Pacific BioSciences SMRT. Indeed, MOSAIK was the only aligner to provide consistent mappings for all the generated data (sequencing technologies, low-coverage and exome) in the 1000 Genomes Project. To provide highly accurate alignments, MOSAIK employs a hash clustering strategy coupled with the Smith-Waterman algorithm. This method is well-suited to capture mismatches as well as short insertions and deletions. To support the growing interest in larger structural variant (SV) discovery, MOSAIK provides explicit support for handling known-sequence SVs, e.g. mobile element insertions (MEIs) as well as generating outputs tailored to aid in SV discovery. All variant discovery benefits from an accurate description of the read placement confidence. To this end, MOSAIK uses a neural-network based training scheme to provide well-calibrated mapping quality scores, demonstrated by a correlation coefficient between MOSAIK assigned and actual mapping qualities greater than 0.98. In order to ensure that studies of any genome are supported, a training pipeline is provided to ensure optimal mapping quality scores for the genome under investigation. MOSAIK is multi-threaded, open source, and incorporated into our command and pipeline launcher system GKNO (http://gkno.me). PMID:24599324
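The Smith-Waterman dynamic-programming kernel that MOSAIK couples with its hash clustering can be sketched as follows; the scoring parameters here are generic textbook defaults, not MOSAIK's tuned values.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment scoring (Smith-Waterman). Returns the best local
    alignment score between strings a and b. The max(0, ...) clamp is what
    makes the alignment local rather than global."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag,            # restart alignment, or extend match/mismatch
                          H[i-1][j] + gap,    # gap in b (deletion)
                          H[i][j-1] + gap)    # gap in a (insertion)
            best = max(best, H[i][j])
    return best
```

In a read mapper, this quadratic-time kernel is run only on the small reference windows selected by the hash index, which is what keeps overall mapping fast while still capturing mismatches and short indels.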

  13. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a widespread misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation on intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computation techniques result in estimates which are much more accurate. We overview such advanced interval computation techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computation techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
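The over-wide estimates the authors attribute to "straightforward" interval computation come from the dependency problem: replacing each arithmetic operation with its interval counterpart ignores that the two occurrences of x in an expression like x - x refer to the same value. A two-operation sketch (the function names are ours):

```python
def interval_add(x, y):
    """Naive interval addition: [a,b] + [c,d] = [a+c, b+d]."""
    return (x[0] + y[0], x[1] + y[1])

def interval_sub(x, y):
    """Naive interval subtraction: [a,b] - [c,d] = [a-d, b-c].
    Treats the operands as independent, even when they are the same variable."""
    return (x[0] - y[1], x[1] - y[0])

# Evaluate f(x) = x - x for x in [0, 1] the "straightforward" way:
naive = interval_sub((0.0, 1.0), (0.0, 1.0))   # over-wide: spans [-1, 1]
exact = (0.0, 0.0)                             # the true range of x - x is {0}
```

Advanced techniques (e.g. centered forms or tracking variable dependencies) shrink this gap, which is the survey's point: the naive replacement, not interval computation itself, is what "did not work".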

  14. Dual-Energy Head CT Enables Accurate Distinction of Intraparenchymal Hemorrhage from Calcification in Emergency Department Patients.

    PubMed

    Hu, Ranliang; Daftari Besheli, Laleh; Young, Joseph; Wu, Markus; Pomerantz, Stuart; Lev, Michael H; Gupta, Rajiv

    2016-07-01

    Purpose To evaluate the ability of dual-energy (DE) computed tomography (CT) to differentiate calcification from acute hemorrhage in the emergency department setting. Materials and Methods In this institutional review board-approved study, all unenhanced DE head CT examinations that were performed in the emergency department in November and December 2014 were retrospectively reviewed. Simulated 120-kVp single-energy CT images were derived from the DE CT acquisition via postprocessing. Patients with at least one focus of intraparenchymal hyperattenuation on single-energy CT images were included, and DE material decomposition postprocessing was performed. Each focal hyperattenuation was analyzed on the basis of the virtual noncalcium and calcium overlay images and classified as calcification or hemorrhage. Sensitivity, specificity, and accuracy were calculated for single-energy and DE CT by using a common reference standard established by relevant prior and follow-up imaging and clinical information. Results Sixty-two cases with 68 distinct intraparenchymal hyperattenuating lesions in which the reference standards were available were included in the study, of which 41 (60%) were confirmed as calcification and 27 (40%) were confirmed as hemorrhage. Sensitivity, specificity, and accuracy of DE CT for the detection of hemorrhage were 96% (95% confidence interval [CI]: 81%, 100%), 100% (95% CI: 91%, 100%), and 99% (95% CI: 92%, 100%) and those of single-energy CT were 74% (95% CI: 54%, 89%), 95% (95% CI: 83%, 99%), and 87% (95% CI: 76%, 94%), respectively. Six of 68 (9%) lesions were classified as indeterminate and three (4%) were misinterpreted with single-energy CT alone and were correctly classified with DE CT. Conclusion DE CT by using material decomposition enables accurate differentiation between calcification and hemorrhage in patients presenting for emergency head imaging and can be especially useful in problem-solving complex cases that are difficult to

  15. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft-input soft-output Multiple Input Multiple Output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system since the receiver chooses whether to accept or to reject the transmission, according to the success rate of detecting thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique, Dynamic Voltage and Frequency Scaling, helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when it is implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware codesign simulation.
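The accept/reject idea above can be illustrated with a hypothetical mutual-information gate. This sketch is not the paper's algorithm: the function name, the per-subchannel Gaussian capacity formula, and the fixed rate target are all our assumptions, used only to show how an instantaneous channel estimate can gate detection to save power.

```python
import math

def accept_transmission(channel_gains, snr, rate_bits):
    """Hypothetical gating rule: estimate the instantaneous mutual information
    of the MIMO sub-channels (Gaussian-channel formula, per sub-channel) and
    skip the costly iterative detection when the channel cannot support the
    target rate."""
    info = sum(math.log2(1.0 + snr * abs(h) ** 2) for h in channel_gains)
    return info >= rate_bits

# Strong channel: detection proceeds. Deeply faded channel: detector idles.
go = accept_transmission([1.0, 1.0], snr=3.0, rate_bits=4.0)
idle = accept_transmission([0.1], snr=3.0, rate_bits=4.0)
```

The energy saving comes from the rejected case: when the threshold test fails, the iterative detector never runs, so its dynamic power is not spent on a transmission that would fail anyway.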

  16. Improved nucleosome-positioning algorithm iNPS for accurate nucleosome positioning from sequencing data.

    PubMed

    Chen, Weizhong; Liu, Yi; Zhu, Shanshan; Green, Christopher D; Wei, Gang; Han, Jing-Dong Jackie

    2014-01-01

    Accurate determination of genome-wide nucleosome positioning can provide important insights into global gene regulation. Here, we describe the development of an improved nucleosome-positioning algorithm-iNPS-which achieves significantly better performance than the widely used NPS package. By determining nucleosome boundaries more precisely and merging or separating shoulder peaks based on local MNase-seq signals, iNPS can unambiguously detect 60% more nucleosomes. The detected nucleosomes display better nucleosome 'widths' and neighbouring centre-centre distance distributions, giving rise to sharper patterns and better phasing of average nucleosome profiles and higher consistency between independent data subsets. In addition to its unique advantage in classifying nucleosomes by shape to reveal their different biological properties, iNPS also achieves higher significance and lower false positive rates than previously published methods. The application of iNPS to T-cell activation data demonstrates a greater ability to facilitate detection of nucleosome repositioning, uncovering additional biological features underlying the activation process. PMID:25233085

  17. Low noise frequency synthesizer with self-calibrated voltage controlled oscillator and accurate AFC algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Qin; Jinbo, Li; Jian, Kang; Xiaoyong, Li; Jianjun, Zhou

    2014-09-01

    A low noise phase locked loop (PLL) frequency synthesizer implemented in 65 nm CMOS technology is introduced. A VCO noise reduction method suited for short channel design is proposed to minimize PLL output phase noise. A self-calibrated voltage controlled oscillator is proposed in cooperation with the automatic frequency calibration circuit, whose accurate binary search algorithm helps reduce the VCO tuning curve coverage, which reduces the VCO noise contribution at the PLL output. A low noise charge pump is also introduced to extend the tuning voltage range of the proposed VCO, which further reduces its phase noise contribution. The frequency synthesizer generates 9.75-11.5 GHz high frequency wideband local oscillator (LO) carriers. The tested 11.5 GHz LO exhibits a phase noise of -104 dBc/Hz at 1 MHz frequency offset. The total power dissipation of the proposed frequency synthesizer is 48 mW, and its area is 0.3 mm², including bias circuits and buffers.

  18. Obtaining More Accurate Signals: Spatiotemporal Imaging of Cancer Sites Enabled by a Photoactivatable Aptamer-Based Strategy.

    PubMed

    Xiao, Heng; Chen, Yuqi; Yuan, Erfeng; Li, Wei; Jiang, Zhuoran; Wei, Lai; Su, Haomiao; Zeng, Weiwu; Gan, Yunjiu; Wang, Zijing; Yuan, Bifeng; Qin, Shanshan; Leng, Xiaohua; Zhou, Xin; Liu, Songmei; Zhou, Xiang

    2016-09-14

    Early cancer diagnosis is of great significance to cancer prevention and clinical therapy, and it is crucial to efficiently recognize cancerous tumor sites at the molecular level. Herein, we proposed a versatile and efficient strategy based on aptamer recognition and photoactivation imaging for cancer diagnosis. This is the first time that a visible light-controlled photoactivatable aptamer-based platform has been applied for cancer diagnosis. The photoactivatable aptamer-based strategy can accurately detect nucleolin-overexpressed tumor cells and can be used for highly selective cancer cell screening and tissue imaging. This strategy is available for both formalin-fixed paraffin-embedded tissue specimens and frozen sections. Moreover, compared with the use of traditional fluorophores, the photoactivation technique achieves more accurate and persistent imaging. Significantly, the application of this strategy can produce the same accurate results in tissue specimen analysis as with classical hematoxylin-eosin staining and immunohistochemical technology. PMID:27550088

  19. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond order potential, allows for reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and a time-step one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU enabled MD programs compared with CPU implementations but also an opportunity to meet the computing power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU enabled ReaxFF MD program with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times faster than Duin et al.'s FORTRAN codes in Lammps on 8 CPU cores and 6 times faster than the Lammps' C codes based on PuReMD in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploiting very complex molecular reactions via ReaxFF MD simulation on desktop workstations. PMID:23454611

  20. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
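The "simple functions" of four waveband intensities can be illustrated with a hypothetical per-pixel rule. The abstract does not give the actual function or threshold used in the study, so the band combination, the threshold value, and the function name below are assumptions made purely for illustration.

```python
def detect_contamination(i680, i684, i720, i780, threshold=1.2):
    """Hypothetical per-pixel multispectral rule in the spirit of the paper:
    a simple function of the fluorescence intensities at 680, 684, 720 and
    780 nm, thresholded to flag contamination. The specific ratio and
    threshold here are illustrative, not the study's calibrated values."""
    ratio = (i680 + i684) / (i720 + i780 + 1e-9)  # red-peak vs. far-red bands
    return ratio > threshold

# A pixel with elevated red-peak fluorescence is flagged; a dim one is not.
flagged = detect_contamination(100.0, 100.0, 50.0, 50.0)
clean = detect_contamination(10.0, 10.0, 50.0, 50.0)
```

The appeal of such per-pixel arithmetic is speed: a handful of adds, one divide and one compare per pixel is cheap enough for in-line inspection on fast-moving processing lines.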

  1. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of 'consensus scoring', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  2. LGH: A Fast and Accurate Algorithm for Single Individual Haplotyping Based on a Two-Locus Linkage Graph.

    PubMed

    Xie, Minzhu; Wang, Jianxin; Chen, Xin

    2015-01-01

    Phased haplotype information is crucial in our complete understanding of differences between individuals at the genetic level. Given a collection of DNA fragments sequenced from a homologous pair of chromosomes, the problem of single individual haplotyping (SIH) aims to reconstruct a pair of haplotypes using a computer algorithm. In this paper, we encode the information of aligned DNA fragments into a two-locus linkage graph and approach the SIH problem by vertex labeling of the graph. In order to find a vertex labeling with the minimum sum of weights of incompatible edges, we develop a fast and accurate heuristic algorithm. It starts with detecting error-tolerant components by an adapted breadth-first search. A proper labeling of vertices is then identified for each component, with which sequencing errors are further corrected and edge weights are adjusted accordingly. After contracting each error-tolerant component into a single vertex, the above procedure is iterated on the resulting condensed linkage graph until error-tolerant components are no longer detected. The algorithm finally outputs a haplotype pair based on the vertex labeling. Extensive experiments on simulated and real data show that our algorithm is more accurate and faster than five existing algorithms for single individual haplotyping. PMID:26671798

  3. Spectrally-accurate algorithm for the analysis of flows in two-dimensional vibrating channels

    NASA Astrophysics Data System (ADS)

    Zandi, S.; Mohammadi, A.; Floryan, J. M.

    2015-11-01

    A spectral algorithm based on the immersed boundary conditions (IBC) concept has been developed for the analysis of flows in channels bounded by vibrating walls. The vibrations take the form of travelling waves of arbitrary profile. The algorithm uses a fixed computational domain with the flow domain immersed in its interior. Boundary conditions enter the algorithm in the form of constraints. The spatial discretization uses a Fourier expansion in the stream-wise direction and a Chebyshev expansion in the wall-normal direction. Use of the Galileo transformation converts the unsteady problem into a steady one. An efficient solver which takes advantage of the structure of the coefficient matrix has been used. It is demonstrated that the method can be extended to more extreme geometries using the overdetermined formulation. Various tests confirm the spectral accuracy of the algorithm.

  4. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.

    PubMed

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-14

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used. PMID:27421393
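The point that the *order* of splitting the Langevin phase-space propagator distinguishes thermostatting algorithms of this family can be illustrated with a generic single-particle "BAOAB" splitting. This is a textbook scheme used for illustration only, not the paper's staging-mode PIMD algorithm; all names are ours.

```python
import math, random

def baoab_step(x, p, dt, mass, force, gamma, kT, rng):
    """One BAOAB step: B = half momentum kick, A = half position drift,
    O = exact Ornstein-Uhlenbeck update of the momentum (friction + noise).
    Other splitting orders (e.g. OBABO) give the same continuum limit but
    different discretization errors, which is the point being illustrated."""
    p += 0.5 * dt * force(x)                   # B: half kick
    x += 0.5 * dt * p / mass                   # A: half drift
    c1 = math.exp(-gamma * dt)                 # O: exact OU solution
    c2 = math.sqrt((1.0 - c1 * c1) * mass * kT)
    p = c1 * p + c2 * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * p / mass                   # A: half drift
    p += 0.5 * dt * force(x)                   # B: half kick
    return x, p

# Harmonic oscillator (k = m = kT = 1): the long-run average of x^2
# should approach the exact configurational value kT/k = 1.
rng = random.Random(42)
k, mass, kT, dt, gamma = 1.0, 1.0, 1.0, 0.2, 1.0
force = lambda q: -k * q
x, p, acc, n = 0.0, 0.0, 0.0, 200000
for _ in range(n):
    x, p = baoab_step(x, p, dt, mass, force, gamma, kT, rng)
    acc += x * x
var = acc / n
```

Measuring how closely `var` tracks kT/k as `dt` grows is exactly the kind of comparison used to argue that one splitting permits a several-times-larger time step than another at the same accuracy.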

  5. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-01

    We introduce a simple new algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of the path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free-particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different splitting order for the phase-space propagator associated with the Langevin equation. Error analysis is carried out for both algorithms, and both are employed in PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the usable time interval of PIMD by a factor of 4-6 or more for the same accuracy. In addition, the supplementary material presents the error analysis for the algorithms when the normal-mode transformation of the path integral beads is used.

  6. A modified hyperplane clustering algorithm allows for efficient and accurate clustering of extremely large datasets

    PubMed Central

    Sharma, Ashok; Podolsky, Robert; Zhao, Jieping; McIndoe, Richard A.

    2009-01-01

    Motivation: As the number of publicly available microarray experiments increases, the ability to analyze extremely large datasets across multiple experiments becomes critical. There is a need for algorithms that are fast and can cluster extremely large datasets without degrading cluster quality. Clustering is an unsupervised exploratory technique applied to microarray data to find similar data structures or expression patterns. Because of the high input/output costs involved and the large distance matrices calculated, most agglomerative clustering algorithms fail on large datasets (30 000+ genes/200+ arrays). In this article, we propose a new two-stage algorithm which partitions the high-dimensional space associated with microarray data using hyperplanes. The first stage is based on the Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm, with the second stage being a conventional k-means clustering technique. This algorithm has been implemented in a software tool (HPCluster) designed to cluster gene expression data. We compared the clustering results using the two-stage hyperplane algorithm with the conventional k-means algorithm from other available programs. Because the first stage traverses the data in a single scan, the performance and speed increase substantially. The data reduction accomplished in the first stage reduces the memory requirements, allowing us to cluster 44 460 genes without failure, and significantly decreases the time to completion when compared with popular k-means programs. The software was written in C# (.NET 1.1). Availability: The program is freely available and can be downloaded from http://www.amdcc.org/bioinformatics/bioinformatics.aspx. Contact: rmcindoe@mail.mcg.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19261720
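
    The two-stage idea — a cheap single-scan pre-clustering followed by k-means on the reduced summaries — can be sketched in a few lines. This is a deliberately tiny 1-D simplification with an assumed merge threshold, not the HPCluster implementation: stage 1 keeps only (sum, count) per micro-cluster, which is why memory stays small no matter how many points stream past.

```python
import random

random.seed(0)

# Synthetic 1-D expression values: two well-separated groups of 500 points.
data = [random.gauss(0.0, 0.5) for _ in range(500)] + \
       [random.gauss(10.0, 0.5) for _ in range(500)]

# Stage 1: single-scan micro-clustering (a BIRCH-like simplification).
threshold = 1.0
micro = []                       # list of (sum, count) summaries
for x in data:
    for i, (s, c) in enumerate(micro):
        if abs(x - s / c) < threshold:   # merge into nearby micro-cluster
            micro[i] = (s + x, c + 1)
            break
    else:
        micro.append((x, 1))             # start a new micro-cluster

# Stage 2: weighted k-means (k = 2) on the micro-cluster centroids only.
centroids = [s / c for s, c in micro]
weights = [c for _, c in micro]
centers = [min(centroids), max(centroids)]
for _ in range(20):
    sums, cnts = [0.0, 0.0], [0.0, 0.0]
    for x, w in zip(centroids, weights):
        j = 0 if abs(x - centers[0]) < abs(x - centers[1]) else 1
        sums[j] += w * x
        cnts[j] += w
    centers = [sums[j] / cnts[j] for j in range(2)]
centers.sort()
```

Stage 2 operates on a handful of summaries rather than 1000 raw points, which is the source of the speed and memory gains the abstract reports at scale.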

  7. Experimental study on the application of a compressed-sensing (CS) algorithm to dental cone-beam CT (CBCT) for accurate, low-dose image reconstruction

    NASA Astrophysics Data System (ADS)

    Oh, Jieun; Cho, Hyosung; Je, Uikyu; Lee, Minsik; Kim, Hyojeong; Hong, Daeki; Park, Yeonok; Lee, Seonhwa; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2013-03-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction method based on a compressed-sensing (CS) algorithm, which exploits the sparseness of the gradient image, for accurate, low-dose dental cone-beam CT (CBCT) reconstruction. We applied the algorithm to a commercially available dental CBCT system (Expert7™, Vatech Co., Korea) and performed experimental work to demonstrate the algorithm for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images from several undersampled datasets and evaluated the reconstruction quality in terms of the universal quality index (UQI). The experimental demonstrations indicate that the CS-based reconstruction algorithm can be applied to current dental CBCT systems to reduce imaging doses and improve image quality.
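
    The workhorse of gradient-sparsity (total-variation-style) CS reconstructions is a shrinkage (soft-thresholding) step that suppresses small gradient coefficients while keeping large ones. The sketch below applies that step to the finite differences of a 1-D signal; it is a generic illustration with an assumed threshold, not the reconstruction pipeline of the paper.

```python
def soft_threshold(x, lam):
    """Proximal operator of lam*|x|: shrink toward zero, kill small values."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# A noisy piecewise-constant signal: its true gradient is sparse (one jump).
signal = [0.0, 0.02, -0.01, 5.0, 5.03, 4.98, 5.01]
diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

# Shrinking the differences zeroes the noise and keeps the jump.
shrunk = [soft_threshold(d, 0.1) for d in diffs]

# Rebuild the signal from its sparsified gradient.
rebuilt = [signal[0]]
for d in shrunk:
    rebuilt.append(rebuilt[-1] + d)
```

After shrinkage only the genuine edge survives in the gradient, so the rebuilt signal is piecewise constant; iterating such a step against a data-fidelity term is what lets CS solvers recover images from few projection views.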

  8. Hybrid algorithm for extracting accurate tracer position distribution in evanescent wave nano-velocimetry

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Huang, Peter

    2016-02-01

    Evanescent wave nano-velocimetry offers a unique three-dimensional measurement capability that allows for inferring tracer position distribution through the imaged particle intensities. Our previous study suggested that tracer polydispersity and failure to account for a near-wall tracer depletion layer would lead to compromised measurement accuracy. In this work, we report on a hybrid algorithm that converts the measured tracer intensities as a whole into their overall position distribution. The algorithm achieves a superior accuracy by using tracer size variation as a statistical analysis parameter.
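
    In evanescent wave velocimetry the illumination decays exponentially with wall distance, I(z) = I0*exp(-z/d), so a monodisperse tracer's position can in principle be inverted as z = d*ln(I0/I). The sketch below shows that naive single-particle inversion (the decay length and intensities are illustrative values); it is exactly this inversion that tracer polydispersity corrupts — a dim image may be a small tracer near the wall or a large tracer far from it — motivating the paper's ensemble-level, distribution-based treatment.

```python
import math

d = 100.0                       # evanescent decay length (illustrative, nm)
I0 = 1.0                        # intensity a tracer would emit at the wall
intensities = [1.0, 0.5, 0.25]  # measured particle intensities

# Invert I(z) = I0 * exp(-z / d)  =>  z = d * ln(I0 / I)
positions = [d * math.log(I0 / I) for I in intensities]
```

Each halving of intensity corresponds to a fixed step of d*ln(2) away from the wall, which is why intensity histograms carry the position distribution the hybrid algorithm extracts.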

  9. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) because it is well adapted to handling deviations from periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculations of the AMS and CMS methods is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors, and glancing angles.

  10. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes

    PubMed Central

    French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-01-01

    Background Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (e.g., Snellen vs. Jäger units vs. font print size), and may be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step in making electronic ophthalmology clinical notes useful for research. Objective Currently, there is limited methodology for capturing BDVA efficiently and accurately from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified, and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy in detecting BDVA from these records, with 1% observed error. Conclusions Our algorithm successfully captures the best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen-equivalent list. Our work, to the best of our knowledge, represents the first attempt at accurately capturing visual acuity from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All codes used for this study are currently
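
    The core of such an algorithm is a lookup that normalizes free-text acuity entries to Snellen equivalents and then keeps the best (lowest-denominator) value per eye. The sketch below is a minimal illustration with a small hypothetical mapping table — the function name, entries, and the "count fingers" equivalent are all made up for illustration, not the authors' 5668-entry mapping.

```python
# Hypothetical mapping of normalized free-text EHR entries to Snellen
# denominators (the "cf" = count-fingers equivalent is illustrative only).
SNELLEN_MAP = {
    "20/20": 20,
    "20/20-2": 20,
    "20/25": 25,
    "20/40": 40,
    "cf": 800,
}

def best_documented_va(entries):
    """Return the best (lowest-denominator) Snellen acuity among the
    recognized entries for one eye, or None if nothing is recognized."""
    denoms = [SNELLEN_MAP[e.strip().lower()]
              for e in entries
              if e.strip().lower() in SNELLEN_MAP]
    return ("20/%d" % min(denoms)) if denoms else None

right_eye = best_documented_va(["20/40", "20/25 ", "unknown"])   # -> "20/25"
```

The real difficulty, per the abstract, is the normalization step: collapsing thousands of idiosyncratic entries to the keys of such a table, which is what the chart review validates.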

  11. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    PubMed Central

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-01-01

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14 × speedup over the conventional algorithm and a speedup of 3.3 × over RI-K. PMID:26178096

  12. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    SciTech Connect

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-07-14

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C{sub 68}H{sub 22} hydrogen-terminated graphene fragment, our algorithm yields a 14 × speedup over the conventional algorithm and a speedup of 3.3 × over RI-K.

  13. Novel Algorithms Enabling Rapid, Real-Time Earthquake Monitoring and Tsunami Early Warning Worldwide

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Michelini, A.

    2012-12-01

    We have recently introduced new methods to determine rapidly the tsunami potential and magnitude of large earthquakes (e.g., Lomax and Michelini, 2009ab, 2011, 2012). To validate these methods we have implemented them along with other new algorithms within the Early-est earthquake monitor at INGV-Rome (http://early-est.rm.ingv.it, http://early-est.alomax.net). Early-est is a lightweight software package for real-time earthquake monitoring (including phase picking, phase association and event detection, location, magnitude determination, first-motion mechanism determination, ...), and for tsunami early warning based on discriminants for earthquake tsunami potential. In a simulation using archived broadband seismograms for the devastating M9, 2011 Tohoku earthquake and tsunami, Early-est determines: the epicenter within 3 min after the event origin time, discriminants showing very high tsunami potential within 5-7 min, and magnitude Mwpd(RT) 9.0-9.2 and a correct shallow-thrusting mechanism within 8 min. Real-time monitoring with Early-est gives similar results for most large earthquakes using currently available, real-time seismogram data. Here we summarize some of the key algorithms within Early-est that enable rapid, real-time earthquake monitoring and tsunami early warning worldwide: >>> FilterPicker - a general purpose, broad-band, phase detector and picker (http://alomax.net/FilterPicker); >>> Robust, simultaneous association and location using a probabilistic, global-search; >>> Period-duration discriminants TdT0 and TdT50Ex for tsunami potential available within 5 min; >>> Mwpd(RT) magnitude for very large earthquakes available within 10 min; >>> Waveform P polarities determined on broad-band displacement traces, focal mechanisms obtained with the HASH program (Hardebeck and Shearer, 2002); >>> SeisGramWeb - a portable-device ready seismogram viewer using web-services in a browser (http://alomax.net/webtools/sgweb/info.html). References (see also: http
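
    At their core, the period-duration discriminants reduce to threshold tests: a long apparent rupture duration Td, combined with a measure of sustained energy (T50Ex), flags a large, shallow, potentially tsunamigenic source. The sketch below is a schematic of that decision logic only — the function, parameter names, and threshold values are placeholders for illustration, not the published TdT0/TdT50Ex calibration.

```python
def tsunami_potential(td, t50ex, td_threshold=55.0, product_threshold=550.0):
    """Schematic period-duration discriminant: flag events whose rupture
    duration td (seconds) and duration-energy product td*t50ex both exceed
    thresholds. Threshold values here are illustrative placeholders."""
    return td >= td_threshold and td * t50ex >= product_threshold

# A Tohoku-like long-duration source trips the discriminant...
flag = tsunami_potential(td=160.0, t50ex=8.0)
# ...while a small, short event does not.
small = tsunami_potential(td=10.0, t50ex=1.0)
```

Because both quantities are measurable from the first minutes of P-wave data, such tests can run well before a full moment-tensor solution is available, which is what makes the 5-min warning latency possible.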

  14. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    PubMed Central

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B.; Jia, Xun

    2014-01-01

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphic-processing-unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, 0.204 mm average differences and 0.484 mm maximum difference are found for the phantom case, and the maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-noise-ratio values are improved by 12.74 and 5.12 times compared to results from FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase

  15. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphic-processing-unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, 0.204 mm average differences and 0.484 mm maximum difference are found for the phantom case, and the maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-noise-ratio values are improved by 12.74 and 5.12 times compared to results from FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase

  16. A novel approach for accurate identification of splice junctions based on hybrid algorithms.

    PubMed

    Mandal, Indrajit

    2015-01-01

    The precise prediction of splice junctions as 'exon-intron' or 'intron-exon' boundaries in a given DNA sequence is an important task in Bioinformatics. The main challenge is to determine the splice sites in the coding region. Due to the intrinsic complexity and the uncertainty in gene sequence, the adoption of data mining methods is increasingly becoming popular. There are various methods developed on different strategies in this direction. This article focuses on the construction of new hybrid machine learning ensembles that solve the splice junction task more effectively. A novel supervised feature reduction technique is developed using entropy-based fuzzy rough set theory optimized by greedy hill-climbing algorithm. The average prediction accuracy achieved is above 98% with 95% confidence interval. The performance of the proposed methods is evaluated using various metrics to establish the statistical significance of the results. The experiments are conducted using various schemes with human DNA sequence data. The obtained results are highly promising as compared with the state-of-the-art approaches in literature. PMID:25203504
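
    Greedy hill-climbing feature reduction of the kind described can be sketched generically: repeatedly add the feature that most improves a subset-quality score, and stop when no addition helps. In the sketch below the score is a toy stand-in for the entropy-based fuzzy rough set dependency measure used in the paper; the feature names and weights are illustrative.

```python
def greedy_select(features, score):
    """Greedy hill-climbing feature selection: at each pass add the single
    feature that most improves the subset score; stop when nothing helps."""
    selected, best = frozenset(), score(frozenset())
    while True:
        candidates = [(score(selected | {f}), f) for f in features - selected]
        if not candidates:
            break
        s, f = max(candidates)          # best single-feature extension
        if s <= best:
            break                       # local optimum reached
        best, selected = s, selected | {f}
    return selected

# Toy score standing in for the fuzzy-rough dependency measure: only
# features 'a' and 'c' carry information; every other feature adds noise.
useful = {"a": 0.6, "c": 0.3}
score = lambda subset: sum(useful.get(f, -0.05) for f in subset)
chosen = greedy_select({"a", "b", "c", "d"}, score)   # -> {'a', 'c'}
```

The appeal of this wrapper is that it evaluates only O(n) candidate subsets per pass instead of all 2^n, at the cost of possibly stopping at a local optimum.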

  17. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can cause measuring buoys to break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines, and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
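
    A GA wrapper for feature-subset selection encodes each candidate subset as a bit mask and evolves a population toward masks that minimize reconstruction error. The sketch below replaces the trained extreme learning machine with a toy fitness function, and all sizes, rates, and the "true" informative-feature mask are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(42)

N_FEATURES = 8
TRUE_MASK = 0b00010011   # toy ground truth: features 0, 1 and 4 informative

def fitness(mask):
    """Toy stand-in for the negated Hs reconstruction error of a regressor
    trained on the selected subset: reward informative features, penalize
    useless ones."""
    hits = bin(mask & TRUE_MASK).count("1")
    extras = bin(mask & ~TRUE_MASK & 0xFF).count("1")
    return hits - 0.3 * extras

def evolve(pop_size=30, gens=40):
    pop = [random.getrandbits(N_FEATURES) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = (a & ((1 << cut) - 1)) | (b & ~((1 << cut) - 1) & 0xFF)
            if random.random() < 0.2:               # bit-flip mutation
                child ^= 1 << random.randrange(N_FEATURES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_mask = evolve()
```

In the hybrid described by the abstract, evaluating `fitness` means training and validating an ELM on the candidate subset, so the GA's few hundred evaluations replace an infeasible exhaustive search over all parameter subsets.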

  18. An accurate and efficient algorithm for Faraday rotation corrections for spaceborne microwave radiometers

    NASA Astrophysics Data System (ADS)

    Singh, Malkiat; Bettenhausen, Michael H.

    2011-08-01

    Faraday rotation changes the polarization plane of linearly polarized microwaves which propagate through the ionosphere. To correct for ionospheric polarization error, it is necessary to have electron density profiles on a global scale that represent the ionosphere in real time. We use raytrace through the combined models of ionospheric conductivity and electron density (ICED), Bent, and Gallagher models (RIBG model) to specify the ionospheric conditions by ingesting the GPS data from observing stations that are as close as possible to the observation time and location of the space system for which the corrections are required. To accurately calculate Faraday rotation corrections, we also utilize the raytrace utility of the RIBG model instead of the normal shell model assumption for the ionosphere. We use WindSat data, which exhibits a wide range of orientations of the raypath and a high data rate of observations, to provide a realistic data set for analysis. The standard single-shell models at 350 and 400 km are studied along with a new three-shell model and compared with the raytrace method for computation time and accuracy. We have compared the Faraday results obtained with climatological (International Reference Ionosphere and RIBG) and physics-based (Global Assimilation of Ionospheric Measurements) ionospheric models. We also study the impact of limitations in the availability of GPS data on the accuracy of the Faraday rotation calculations.

  19. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    PubMed Central

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.

    2016-01-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course. PMID:27010238
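
    One simple way to pick a small number of atlases 'uniformly' distributed over a low-dimensional data space is farthest-point sampling. The sketch below uses it as an illustrative stand-in for ALFA's sparsity-based selection strategy (which the paper defines differently); the 2-D embedding coordinates are made up.

```python
def farthest_point_selection(points, k):
    """Pick k points spread roughly uniformly over the data: start from the
    first point, then repeatedly add the point whose minimum distance to the
    current selection is largest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    chosen = [points[0]]
    while len(chosen) < k:
        farthest = max((p for p in points if p not in chosen),
                       key=lambda p: min(dist2(p, c) for c in chosen))
        chosen.append(farthest)
    return chosen

# Toy 2-D embedding of candidate atlases: two tight pairs plus one outlier.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (5.1, 0.1), (0.0, 5.0)]
atlases = farthest_point_selection(pts, 3)
```

Near-duplicate subjects contribute only one atlas, so a handful of selections can cover the whole data space — the property that lets ALFA work with very few atlases.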

  20. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    PubMed

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-01-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course. PMID:27010238

  1. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.

    2016-03-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  2. Petascale Orbital-Free Density Functional Theory Enabled by Small-Box Algorithms.

    PubMed

    Chen, Mohan; Jiang, Xiang-Wei; Zhuang, Houlong; Wang, Lin-Wang; Carter, Emily A

    2016-06-14

    Orbital-free density functional theory (OFDFT) is a quantum-mechanics-based method that utilizes electron density as its sole variable. The main computational cost in OFDFT is the ubiquitous use of the fast Fourier transform (FFT), which is mainly adopted to evaluate the kinetic energy density functional (KEDF) and electron-electron Coulomb interaction terms. We design and implement a small-box FFT (SBFFT) algorithm to overcome the parallelization limitations of conventional FFT algorithms. We also propose real-space truncation of the nonlocal Wang-Teter KEDF kernel. The scalability of the SBFFT is demonstrated by efficiently simulating one full optimization step (electron density, energies, forces, and stresses) of 1,024,000 lithium (Li) atoms on up to 65,536 cores. We perform other tests using Li as a test material, including calculations of physical properties of different phases of bulk Li, geometry optimizations of nanocrystalline Li, and molecular dynamics simulations of liquid Li. All of the tests yield excellent agreement with the original OFDFT results, suggesting that the OFDFT-SBFFT algorithm opens the door to efficient first-principles simulations of materials containing millions of atoms. PMID:27145175

  3. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  4. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    PubMed Central

    Huang, Peng; Andersson, Sean B.

    2014-01-01

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides. PMID:24985865

  5. Fast and accurate analysis of large-scale composite structures with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür; Gürel, Levent

    2013-03-01

    Accurate electromagnetic modeling of complicated optical structures poses several challenges. Optical metamaterial and plasmonic structures are composed of multiple coexisting dielectric and/or conducting parts. Such composite structures may possess diverse values of conductivities and dielectric constants, including negative permittivity and permeability. Further challenges are the large sizes of the structures with respect to wavelength and the complexities of the geometries. In order to overcome these challenges and to achieve rigorous and efficient electromagnetic modeling of three-dimensional optical composite structures, we have developed a parallel implementation of the multilevel fast multipole algorithm (MLFMA). Precise formulation of composite structures is achieved with the so-called "electric and magnetic current combined-field integral equation." Surface integral equations are carefully discretized with piecewise linear basis functions, and the ensuing dense matrix equations are solved iteratively with parallel MLFMA. The hierarchical strategy is used for the efficient parallelization of MLFMA on distributed-memory architectures. In this paper, fast and accurate solutions of large-scale canonical and complicated real-life problems, such as optical metamaterials, discretized with tens of millions of unknowns are presented in order to demonstrate the capabilities of the proposed electromagnetic solver. PMID:23456127

  6. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with massive throughput. However, the reads produced are significantly shorter and more error-prone than those of the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data, so that error-free reads are available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency, we take advantage of CUDA texture memory, using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net. PMID:20426693
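
    The spectrum membership test at the heart of such spectral-alignment error correction can be sketched with a Bloom filter in a few lines. This is a minimal, single-threaded Python sketch, not the CUDA implementation; the filter size and hash count are illustrative.

```python
import hashlib

class BloomFilter:
    """Space-efficient set membership: false positives possible, no false negatives."""
    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        # Derive several bit positions from one cryptographic digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i:4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def kmer_spectrum(reads, k):
    """Collect every k-mer of the reads into a Bloom filter (the 'spectrum')."""
    spectrum = BloomFilter()
    for read in reads:
        for i in range(len(read) - k + 1):
            spectrum.add(read[i:i + k])
    return spectrum

reads = ["ACGTACGTGG", "CGTACGTGGA"]
spectrum = kmer_spectrum(reads, k=4)
```

    A k-mer absent from the spectrum is suspected to contain a sequencing error; correction then searches for a close k-mer that is present.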

  7. A genetic algorithm enabled ensemble for unsupervised medical term extraction from clinical letters.

    PubMed

    Liu, Wei; Chung, Bo Chuen; Wang, Rui; Ng, Jonathon; Morlet, Nigel

    2015-01-01

    Despite the rapid global movement towards electronic health records, clinical letters written in unstructured natural language are still the preferred form of inter-practitioner communication about patients. These letters, when archived over a long period of time, provide invaluable longitudinal clinical details on individual patients and populations. In this paper we present three unsupervised approaches for automatically extracting single- and multi-word medical terms without domain-specific knowledge: sequential pattern mining (PrefixSpan), the frequency- and linguistics-based C-Value measure, and keyphrase extraction from co-occurrence graphs (TextRank). Because each of the three approaches focuses on different aspects of the language feature space, we propose a genetic algorithm to learn the best parameters for linearly integrating the three extractors for optimal performance against domain expert annotations. Around 30,000 clinical letters sent over the past decade from ophthalmology specialists to general practitioners at an eye clinic were anonymised as the corpus to evaluate the effectiveness of the ensemble against the individual extractors. With minimal annotation, the ensemble achieves an average F-measure of 65.65% when considering only complex medical terms, and an F-measure of 72.47% if single-word terms (i.e. unigrams) are taken into consideration, markedly better than the three term extraction techniques used alone. PMID:26664724
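
    The linear integration step can be illustrated with a toy genetic algorithm that evolves the mixing weights of three extractors against gold labels. The scores and annotations below are synthetic, and the selection/crossover/mutation scheme is a generic sketch rather than the paper's exact configuration.

```python
import random

random.seed(0)

# Synthetic scores from three hypothetical extractors for 8 candidate terms,
# and gold labels standing in for expert annotations.
scores = [
    (0.9, 0.2, 0.8), (0.1, 0.3, 0.2), (0.8, 0.7, 0.9), (0.2, 0.1, 0.1),
    (0.7, 0.9, 0.6), (0.3, 0.2, 0.4), (0.9, 0.8, 0.7), (0.1, 0.2, 0.3),
]
gold = [1, 0, 1, 0, 1, 0, 1, 0]

def f_measure(weights, threshold=0.5):
    """F-measure of thresholding the weighted sum of extractor scores."""
    predicted = [sum(w * s for w, s in zip(weights, row)) >= threshold
                 for row in scores]
    tp = sum(p and g for p, g in zip(predicted, gold))
    fp = sum(p and not g for p, g in zip(predicted, gold))
    fn = sum((not p) and g for p, g in zip(predicted, gold))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def evolve(pop_size=20, generations=30, mutation=0.1):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f_measure, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)         # one-point crossover
            child = [max(0.0, w + random.gauss(0, mutation))
                     for w in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=f_measure)

best = evolve()
```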

  8. Fast MS/MS acquisition without dynamic exclusion enables precise and accurate quantification of proteome by MS/MS fragment intensity

    PubMed Central

    Zhang, Shen; Wu, Qi; Shan, Yichu; Zhao, Qun; Zhao, Baofeng; Weng, Yejing; Sui, Zhigang; Zhang, Lihua; Zhang, Yukui

    2016-01-01

    Most current proteomic studies use data-dependent acquisition with dynamic exclusion to identify and quantify the peptides generated by the digestion of biological samples. Although dynamic exclusion permits more identifications and a higher chance of finding low-abundance proteins, the stochastic and irreproducible precursor ion selection it causes limits quantification capability, especially for MS/MS-based quantification, because a peptide is usually triggered for fragmentation only once; the fragment ions used for quantification therefore reflect the peptide abundance only at that given time point. Here, we propose a strategy of fast MS/MS acquisition without dynamic exclusion to enable precise and accurate quantification of the proteome by MS/MS fragment intensity. The results showed proteome identification efficiency comparable to traditional data-dependent acquisition with dynamic exclusion, and better quantitative accuracy and reproducibility for both label-free and isobaric-labeling-based quantification. This provides new insights for fully exploring the potential of modern mass spectrometers. The strategy was applied to the relative quantification of two human disease cell lines, showing great promise for quantitative proteomic applications. PMID:27198003

  9. SU-E-J-23: An Accurate Algorithm to Match Imperfectly Matched Images for Lung Tumor Detection Without Markers

    SciTech Connect

    Rozario, T; Bereg, S; Chiu, T; Liu, H; Kearney, V; Jiang, L; Mao, W

    2014-06-01

    Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with projection images. Since lung tumors move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on a modified anatomy from which the lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image origins, scattering, and noise, which adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: The method divides global images into a matrix of small tiles and evaluates similarity by calculating the normalized cross correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection image level and subtracted from it; the resulting subtracted image contains only the tumor. A DRR of the tumor is then registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) system for dynamic tumor tracking in phantom studies. Radio-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately located tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm, while the overall average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
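
    The per-tile similarity measure is plain normalized cross correlation, which can be sketched directly; NCC is invariant to linear intensity changes, which is what makes it tolerant of the global discrepancies between DRRs and projections. The tiles below are synthetic pixel lists.

```python
import math

def ncc(tile_a, tile_b):
    """Normalized cross correlation between two equally sized tiles
    (flattened pixel lists); returns a value in [-1, 1]."""
    n = len(tile_a)
    mean_a = sum(tile_a) / n
    mean_b = sum(tile_b) / n
    da = [p - mean_a for p in tile_a]
    db = [p - mean_b for p in tile_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# A tile matches a brightness-scaled and offset copy of itself perfectly
# (NCC = 1), so a uniform intensity discrepancy does not hurt the match.
tile = [10, 20, 30, 40, 50, 60, 70, 80, 90]
scaled = [2 * p + 5 for p in tile]
```

    Poorly matching tiles (NCC well below 1) flag the region containing the moving tumor.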

  10. EASY-GOING deconvolution: Combining accurate simulation and evolutionary algorithms for fast deconvolution of solid-state quadrupolar NMR spectra

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.

    2011-08-01

    A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, with the use of second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting and offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporation of the (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
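
    The overall fit strategy (precompute a library of component line shapes once, then let an evolutionary algorithm search the mixing parameters) can be sketched as follows. The "library spectra" are toy Gaussian line shapes, and the (mu + lambda) evolution strategy is a generic stand-in for the program's evolutionary algorithms.

```python
import math
import random

random.seed(1)

# Toy library of two precomputed line shapes on a common frequency grid.
GRID = range(32)
def line_shape(center, width):
    return [math.exp(-((x - center) / width) ** 2) for x in GRID]

library = [line_shape(10, 3.0), line_shape(22, 2.0)]

# "Experimental" spectrum: a known mixture of the library shapes.
true_weights = [0.7, 0.3]
target = [sum(w * s[i] for w, s in zip(true_weights, library)) for i in GRID]

def rmse(weights):
    """Fit quality of a candidate set of relative site abundances."""
    synth = [sum(w * s[i] for w, s in zip(weights, library)) for i in GRID]
    return (sum((a - b) ** 2 for a, b in zip(synth, target)) / len(target)) ** 0.5

def evolve(pop_size=30, generations=60):
    """(mu + lambda) evolution strategy over the mixing weights."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for gen in range(generations):
        sigma = 0.3 * 0.95 ** gen              # shrinking mutation step
        pop.sort(key=rmse)
        parents = pop[:pop_size // 3]          # elitist survival
        pop = parents + [[max(0.0, w + random.gauss(0, sigma))
                          for w in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=rmse)

best = evolve()
```

    Because the library is fixed, each fitness evaluation is a cheap linear combination, which is what makes the evolutionary search affordable.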

  11. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service

    PubMed Central

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-01-01

    The location and contextual status (indoor or outdoor) of an individual is fundamental and critical information for upper-layer applications such as activity recognition and location-based services (LBS). Optimizations of building management systems (BMS), such as pre-cooling or pre-heating the air-conditioning system according to the human traffic entering or exiting a building, can utilize this information as well. Emerging mobile devices, equipped with various sensors, have become a feasible and flexible platform for performing indoor-outdoor (IO) detection. However, power-hungry sensors such as GPS and WiFi should be used with caution due to the constrained battery capacity of mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on mobile devices, based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn power-hungry on-board sensors on and off smartly and automatically, optimizing their performance and reducing the power consumption of mobile devices simultaneously. Moreover, BlueDetect enables seamless positioning and navigation services, especially in semi-outdoor environments, which neither GPS nor an indoor positioning system (IPS) can easily achieve. We prototyped BlueDetect on Android mobile devices and evaluated its performance comprehensively. The experimental results validate the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption. PMID:26907295
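
    The core decision, inferring the indoor/outdoor state from iBeacon signal strength before waking power-hungry sensors, might look roughly like this. The RSSI thresholds, the three-state output and the sensor policy are invented for illustration and are not the BlueDetect algorithm itself.

```python
def classify_environment(beacon_rssi, strong=-75, weak=-90):
    """Classify indoor / semi-outdoor / outdoor from the RSSI values (dBm)
    of currently audible iBeacons (assumed deployed indoors near entrances)."""
    if not beacon_rssi:
        return "outdoor"                 # no beacon audible at all
    best = max(beacon_rssi)
    if best >= strong:
        return "indoor"
    if best >= weak:
        return "semi-outdoor"            # fringe coverage near an exit
    return "outdoor"

def sensors_to_enable(environment):
    """Wake only the sensors that are useful in the detected context."""
    return {"indoor": ["wifi", "bluetooth"],
            "semi-outdoor": ["bluetooth", "gps"],
            "outdoor": ["gps"]}[environment]
```

    The energy saving comes from the gating itself: scanning for low-power beacon advertisements is far cheaper than keeping GPS or WiFi awake continuously.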

  13. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    NASA Astrophysics Data System (ADS)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature and a description provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems.
    Program summary:
    Program title: FILMPAR
    Catalogue identifier: AEEL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 530 421
    No. of bytes in distributed program, including test data, etc.: 1 960 313
    Distribution format: tar.gz
    Programming language: C++ and MPI
    Computer: Desktop, server
    Operating system: Unix/Linux, Mac OS X
    Has the code been vectorised or parallelised?: Yes. Tested with up to 128 processors
    RAM: 512 MBytes
    Classification: 12
    External routines: GNU C/C++, MPI
    Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering
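
    The multigrid idea underlying FILMPAR can be illustrated on the simplest possible case: a 1D Poisson problem u'' = f solved with a recursive V-cycle (damped-Jacobi smoothing, full-weighting restriction, linear interpolation). The real code solves the discretised lubrication equations in parallel; this serial sketch only shows the cycle structure.

```python
import math

def residual(u, f, h):
    """r = f - A u for the second-difference operator A u = u''."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
    return r

def jacobi(u, f, h, sweeps, omega=0.8):
    """Damped Jacobi smoothing sweeps for A u = f."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = ((1 - omega) * u[i]
                      + omega * 0.5 * (u[i - 1] + u[i + 1] - h ** 2 * f[i]))
        u = new
    return u

def v_cycle(u, f, h, pre=3, post=3):
    n = len(u)
    if n <= 3:                                 # coarsest grid: solve directly
        if n == 3:
            u[1] = -f[1] * h * h / 2
        return u
    u = jacobi(u, f, h, pre)                   # pre-smooth high frequencies
    r = residual(u, f, h)
    nc = (n - 1) // 2 + 1                      # coarse grid, spacing 2h
    rc = [0.0] * nc
    for j in range(1, nc - 1):                 # full-weighting restriction
        rc[j] = 0.25 * r[2 * j - 1] + 0.5 * r[2 * j] + 0.25 * r[2 * j + 1]
    ec = v_cycle([0.0] * nc, rc, 2 * h)        # coarse-grid error equation
    e = [0.0] * n
    for j in range(nc):                        # linear interpolation to fine
        e[2 * j] = ec[j]
    for i in range(1, n - 1, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    u = [ui + ei for ui, ei in zip(u, e)]      # coarse-grid correction
    return jacobi(u, f, h, post)               # post-smooth

n = 65
h = 1.0 / (n - 1)
f = [math.sin(math.pi * i * h) for i in range(n)]  # rhs; u(0) = u(1) = 0
u = [0.0] * n
for _ in range(8):
    u = v_cycle(u, f, h)
```

    Each V-cycle reduces the error by a mesh-independent factor, which is the property that makes multigrid (and its parallel variants) scale so well.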

  14. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection.

    PubMed

    Bechet, P; Mitran, R; Munteanu, M

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate in simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured with a classic contact-based measurement system. PMID:24007088
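
    The MUSIC pseudospectrum peak search can be sketched in pure Python on a simulated 72 bpm signal. Here the signal subspace of the sample covariance matrix is found by orthogonal (subspace) iteration; the sampling rate, subspace dimension and search grid are illustrative choices, not those of the paper.

```python
import math
import random

random.seed(42)

FS = 4.0                     # sampling rate of the (demodulated) signal, Hz
F_TRUE = 1.2                 # simulated heart rate: 1.2 Hz = 72 bpm
N, M = 400, 8                # samples; covariance / steering-vector dimension

x = [math.sin(2 * math.pi * F_TRUE * n / FS) + random.gauss(0, 0.1)
     for n in range(N)]

# Sample covariance matrix from length-M snapshots of the signal.
snapshots = [x[i:i + M] for i in range(N - M + 1)]
R = [[sum(w[i] * w[j] for w in snapshots) / len(snapshots)
      for j in range(M)] for i in range(M)]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

# Orthogonal (subspace) iteration for the 2-dimensional signal subspace:
# a real sinusoid occupies two dimensions of the covariance matrix.
V = [[random.gauss(0, 1) for _ in range(M)] for _ in range(2)]
for _ in range(200):
    V = [matvec(R, v) for v in V]
    for k in range(2):                         # Gram-Schmidt re-orthonormalise
        for p in range(k):
            dot = sum(a * b for a, b in zip(V[k], V[p]))
            V[k] = [a - dot * b for a, b in zip(V[k], V[p])]
        norm = math.sqrt(sum(a * a for a in V[k]))
        V[k] = [a / norm for a in V[k]]

def pseudospectrum(f):
    """MUSIC: 1 / ||P_noise a(f)||^2 via the signal-subspace projector,
    using ||P_noise a||^2 = ||a||^2 - ||V a||^2 since P_noise = I - V V^T."""
    a = [complex(math.cos(-2 * math.pi * f * k / FS),
                 math.sin(-2 * math.pi * f * k / FS)) for k in range(M)]
    proj = sum(abs(sum(v[k] * a[k] for k in range(M))) ** 2 for v in V)
    return 1.0 / max(M - proj, 1e-12)

freqs = [0.5 + 0.01 * i for i in range(151)]   # 0.5 .. 2.0 Hz (30-120 bpm)
f_est = max(freqs, key=pseudospectrum)
```

    Because MUSIC exploits the low-rank structure of the covariance matrix rather than waiting for Fourier resolution, far fewer samples are needed, which is exactly the observation-time argument of the paper.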

  16. Photometric selection of quasars in large astronomical data sets with a fast and accurate machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2014-03-01

    Future astronomical surveys will produce data on ~10^8 objects per night. In order to characterize and classify these sources, we will require algorithms that scale linearly with the size of the data, that can be easily parallelized and where the speedup of the parallel algorithm will be linear in the number of processing cores. In this paper, we present such an algorithm and apply it to the question of colour selection of quasars. We use non-parametric Bayesian classification and a binning algorithm implemented with hash tables (BASH tables). We show that this algorithm's run time scales linearly with the number of test set objects and is independent of the number of training set objects. We also show that it has the same classification accuracy as other algorithms. For current data set sizes, it is up to three orders of magnitude faster than commonly used naive kernel-density-estimation techniques and it is estimated to be about eight times faster than the current fastest algorithm using dual kd-trees for kernel density estimation. The BASH table algorithm scales linearly with the size of the test set data only, and so for future larger data sets, it will be even faster compared to other algorithms which all depend on the size of the test set and the size of the training set. Since it uses linear data structures, it is easier to parallelize compared to tree-based algorithms and its speedup is linear in the number of cores unlike tree-based algorithms whose speedup plateaus after a certain number of cores. Moreover, due to the use of hash tables to implement the binning, the memory usage is very small. While our analysis is for the specific problem of selection of quasars, the ideas are general and the BASH table algorithm can be applied to any density-estimation problem involving sparse high-dimensional data sets. Since sparse high-dimensional data sets are a common type of scientific data set, this method has the potential to be useful in a broad range of
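
    The binning idea can be sketched with Python dictionaries playing the role of the hash tables: training objects are counted per (class, bin), and a test object gets the class with the highest count, i.e. the highest estimated density, in its bin. The two "colour-colour" clusters below are synthetic, not survey photometry.

```python
import random

random.seed(7)
BIN = 0.25                                   # bin width in each colour axis

def bin_key(point):
    """Hash-table key: the multidimensional bin containing the point."""
    return tuple(int(c // BIN) for c in point)

def train(points_by_class):
    """Count training objects of each class per bin (one dict per class)."""
    tables = {}
    for label, points in points_by_class.items():
        table = {}
        for p in points:
            k = bin_key(p)
            table[k] = table.get(k, 0) + 1
        tables[label] = table
    return tables

def classify(tables, point):
    """Non-parametric Bayes: pick the class with the highest count
    (highest estimated density) in the point's bin."""
    k = bin_key(point)
    return max(tables, key=lambda label: tables[label].get(k, 0))

# Two synthetic 'colour-colour' clusters standing in for stars and quasars.
stars = [(random.gauss(1.0, 0.1), random.gauss(1.0, 0.1)) for _ in range(500)]
quasars = [(random.gauss(0.0, 0.1), random.gauss(0.3, 0.1)) for _ in range(500)]
tables = train({"star": stars, "quasar": quasars})
```

    Classification cost per test object is a constant number of hash lookups, independent of training-set size, which is the scaling property the paper exploits; only occupied bins are stored, so memory stays small for sparse data.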

  17. An accurate radiative heating and cooling algorithm for use in a dynamical model of the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1982-01-01

    The circulation of the middle atmosphere of the earth (15-90 km) is driven by the unequal distribution of net radiative heating. Calculations have shown that local radiative heating is nearly balanced by radiative cooling throughout parts of the stratosphere and mesosphere. The 15 micrometer band of CO2 is the dominant component of the infrared cooling. The present investigation is concerned with an algorithm for this cooling process. The algorithm was designed for the semispectral primitive equation model of the stratosphere and mesosphere described by Holton and Wehrbein (1980). The model consists of 16 layers, each nominally 5 km thick, between the base of the stratosphere at 100 mb (approximately 16 km) and the base of the thermosphere (approximately 96 km). The algorithm provides a convenient means of incorporating cooling due to CO2 into dynamical models of the middle atmosphere.

  18. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    PubMed

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-01

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. 
Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass
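
    The eABF coupling can be caricatured in one dimension: a coordinate in a double-well potential is tethered to a fictitious particle by a stiff spring, and the running mean of the spring force in each bin of the fictitious coordinate is applied as a canceling bias. Everything here (potential, parameters, overdamped integrator) is a deliberately simplified sketch, not the NAMD implementation.

```python
import math
import random

random.seed(3)

# Overdamped Langevin dynamics of a coordinate x in a double well,
# coupled to a fictitious particle lam by a stiff spring (the eABF idea).
KT, DT, GAMMA = 1.0, 1e-3, 1.0
K_SPRING = 50.0                               # stiff coupling constant
BINS, LO, HI = 24, -2.0, 2.0

def du_dx(x):                                 # force from U(x) = (x^2 - 1)^2
    return 4 * x * (x * x - 1)

def bin_of(lam):
    return min(BINS - 1, max(0, int((lam - LO) / (HI - LO) * BINS)))

force_sum = [0.0] * BINS                      # running sums for the adaptive
count = [0] * BINS                            # mean force in each lam-bin

x = lam = -1.0
noise = math.sqrt(2 * KT * DT / GAMMA)
gap = 0.0
steps = 20000
for step in range(steps):
    spring = K_SPRING * (x - lam)             # instantaneous force on lam
    b = bin_of(lam)
    force_sum[b] += spring
    count[b] += 1
    bias = -force_sum[b] / count[b]           # ABF: cancel the mean force
    x += DT / GAMMA * (-du_dx(x) - spring) + noise * random.gauss(0, 1)
    lam += DT / GAMMA * (spring + bias) + noise * random.gauss(0, 1)
    gap += abs(x - lam) / steps
```

    The per-bin mean of the spring force is also the on-the-fly estimate of the free-energy gradient along lam, which is what an on-the-fly implementation accumulates instead of post-processing a trajectory.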

  19. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10^-4 Ha/Bohr.

  1. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    PubMed

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of the algorithms' performance on the neuronal signal properties at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191
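
    A minimal spike sorting chain (feature extraction followed by clustering) can be sketched on synthetic waveforms from two units. The two hand-picked features and the deterministic k-means initialisation are illustrative stand-ins for the framework's interchangeable building blocks.

```python
import math
import random

random.seed(5)

def make_spike(amplitude, width, n=32, noise=0.05):
    """Synthetic spike waveform: a noisy negative deflection."""
    return [-amplitude * math.exp(-((i - n // 2) / width) ** 2)
            + random.gauss(0, noise) for i in range(n)]

# Two simulated units with different spike shapes, 40 spikes each.
spikes = ([make_spike(1.0, 2.0) for _ in range(40)] +
          [make_spike(0.4, 4.0) for _ in range(40)])

def features(w):
    """2-D feature vector: trough depth and a width proxy
    (number of samples below half the trough depth)."""
    trough = min(w)
    width = float(sum(1 for v in w if v < trough / 2))
    return (trough, width)

def kmeans2(points, iters=25):
    """k-means with k = 2; deterministic init from the two extreme points."""
    centers = [min(points), max(points)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans2([features(w) for w in spikes])
```

    Swapping the feature extractor or the clustering step while keeping the interfaces fixed is exactly the kind of comparison the framework is built to support.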

  3. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when
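
    The structure of a diffusion-approximation update can be shown with an Euler-Maruyama step for a single, uncoupled Hodgkin-Huxley gating fraction. Note that the paper's contribution is precisely the generalisation to coupled particles and arbitrary kinetic schemes, which this deliberately simple sketch omits; the voltage is held fixed for clarity.

```python
import math
import random

random.seed(11)

# Standard Hodgkin-Huxley rate constants (ms^-1) for the potassium
# gating particle n, with the membrane voltage v in mV.
def alpha_n(v):
    return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))

def beta_n(v):
    return 0.125 * math.exp(-(v + 65) / 80)

def simulate(v=-40.0, n_channels=1000, dt=0.01, t_end=50.0):
    """Euler-Maruyama integration of the diffusion approximation for the
    open fraction n: drift from the kinetics, diffusion ~ 1/sqrt(N)."""
    a, b = alpha_n(v), beta_n(v)
    n = a / (a + b)                       # start at the steady state
    trace = []
    for _ in range(int(t_end / dt)):
        drift = a * (1 - n) - b * n
        diffusion = math.sqrt(max(0.0, (a * (1 - n) + b * n) / n_channels))
        n += drift * dt + diffusion * math.sqrt(dt) * random.gauss(0, 1)
        n = min(1.0, max(0.0, n))         # keep the fraction in [0, 1]
        trace.append(n)
    return trace

trace = simulate()
```

    The diffusion term shrinks as the channel count grows, so the SDE interpolates between noisy small-patch behaviour and the deterministic rate equation.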

  4. The double-helix point spread function enables precise and accurate measurement of 3D single-molecule localization and orientation

    PubMed Central

    Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.

    2014-01-01

    Single-molecule-based super-resolution fluorescence microscopy has recently been developed to surpass the diffraction limit by roughly an order of magnitude. These methods depend on the ability to precisely and accurately measure the position of a single-molecule emitter, typically by fitting its emission pattern to a symmetric estimator (e.g. centroid or 2D Gaussian). However, single-molecule emission patterns are not isotropic, and depend highly on the orientation of the molecule’s transition dipole moment, as well as its z-position. Failure to account for this fact can result in localization errors on the order of tens of nm for in-focus images, and ~50–200 nm for molecules at modest defocus. The latter range becomes especially important for three-dimensional (3D) single-molecule super-resolution techniques, which typically employ depths-of-field of up to ~2 μm. To address this issue we report the simultaneous measurement of precise and accurate 3D single-molecule position and 3D dipole orientation using the Double-Helix Point Spread Function (DH-PSF) microscope. We are thus able to significantly improve dipole-induced position errors, reducing standard deviations in lateral localization from ~2x worse than photon-limited precision (48 nm vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation we are able to improve from a lateral standard deviation of 116 nm (~4x worse than the precision, 28 nm) to 34 nm (within 6 nm). PMID:24817798

  5. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    PubMed

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10⁶ steps. Consequently the critical exponent nu for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is nu=0.587 597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
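The pivot move itself is simple: pick a random site of the walk, apply a random lattice symmetry to everything beyond it, and accept the move only if the result is still self-avoiding. A naive O(N)-per-move sketch on the 2D square lattice (the paper's contribution is a far faster implementation of this same move, on the cubic lattice):

```python
import random

# the eight symmetries of the square lattice, as 2x2 integer matrices (a,b,c,d)
SYMMETRIES = [(1, 0, 0, 1), (0, -1, 1, 0), (-1, 0, 0, -1), (0, 1, -1, 0),
              (1, 0, 0, -1), (-1, 0, 0, 1), (0, 1, 1, 0), (0, -1, -1, 0)]

def pivot_step(walk, rng):
    """One pivot move on a square-lattice self-avoiding walk (list of (x,y)).
    Returns the pivoted walk if it is still self-avoiding, else the old walk."""
    n = len(walk)
    k = rng.randrange(1, n - 1)              # pivot site
    a, b, c, d = rng.choice(SYMMETRIES[1:])  # random non-identity symmetry
    px, py = walk[k]
    new_tail = [(px + a * (x - px) + b * (y - py),
                 py + c * (x - px) + d * (y - py)) for (x, y) in walk[k + 1:]]
    new_walk = walk[:k + 1] + new_tail
    # accept only if all sites are distinct (self-avoidance)
    return new_walk if len(set(new_walk)) == n else walk

rng = random.Random(42)
walk = [(i, 0) for i in range(20)]  # straight rod as the initial SAW
for _ in range(200):
    walk = pivot_step(walk, rng)
```

Because each accepted pivot changes a macroscopic fraction of the walk, the chain decorrelates in very few accepted moves, which is why speeding up the self-avoidance check dominates the overall cost.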

  6. Accurate Prediction of Advanced Liver Fibrosis Using the Decision Tree Learning Algorithm in Chronic Hepatitis C Egyptian Patients

    PubMed Central

    Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud

    2016-01-01

    Background/Aim. With the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods as an alternative to biopsy for staging chronic liver diseases is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via METAVIR score; patients were categorized as mild to moderate (F0–F2) or advanced (F3–F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except that alpha-fetoprotein is used instead of alanine aminotransferase. Sensitivity and receiver operating characteristic curve analyses were performed to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and an area under the ROC curve of 0.78, with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C could be predicted with high accuracy using a decision tree learning algorithm, which could be used to reduce the need for liver biopsy. PMID:26880886
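For context, the FIB-4 index that model 2 is compared against combines age, AST, ALT, and platelet count. A direct transcription of the standard formula (this is the established FIB-4 index, not the paper's decision-tree model):

```python
import math

def fib4(age_years, ast, platelets, alt):
    """Standard FIB-4 index: (age * AST) / (platelets * sqrt(ALT)).
    Platelets in 10^9/L, AST and ALT in U/L. Values above the commonly
    used cutoff of ~3.25 suggest advanced fibrosis."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

# hypothetical patient values for illustration
score = fib4(age_years=55, ast=80, platelets=120, alt=64)  # = 55*80/(120*8)
```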

  8. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter ``nearsightedness'' principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Stable, high-order SBP-SAT finite difference operators to enable accurate simulation of compressible turbulent flows on curvilinear grids, with application to predicting turbulent jet noise

    NASA Astrophysics Data System (ADS)

    Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos

    2014-11-01

    Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Terms (SAT) boundary condition treatment for first and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations, and we discuss the qualities of the simulation.

  10. Robust Algorithm for Alignment of Liquid Chromatography-Mass Spectrometry Analyses in an Accurate Mass and Time Tag Data Analysis Pipeline

    SciTech Connect

    Jaitly, Navdeep; Monroe, Matthew E.; Petyuk, Vladislav A.; Clauss, Therese RW; Adkins, Joshua N.; Smith, Richard D.

    2006-11-01

    Liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) has become a standard technique for analyzing complex peptide mixtures to determine composition and relative quantity. Several high-throughput proteomics techniques attempt to combine complementary results from multiple LC-MS and LC-MS/MS analyses to provide more comprehensive and accurate results. To effectively collate results from these techniques, variations in mass and elution time measurements between related analyses are corrected by using algorithms designed to align the various types of results: LC-MS/MS vs. LC-MS/MS, LC-MS vs. LC-MS/MS, and LC-MS vs. LC-MS. Described herein are new algorithms referred to collectively as Liquid Chromatography based Mass Spectrometric Warping and Alignment of Retention times of Peptides (LCMSWARP) which use a dynamic elution time warping approach similar to traditional algorithms that correct variation in elution time using piecewise linear functions. LCMSWARP is compared to a linear alignment algorithm that assumes a linear transformation of elution time between analyses. LCMSWARP also corrects for drift in mass measurement accuracies that are often seen in an LC-MS analysis due to factors such as analyzer drift. We also describe the alignment of LC-MS results and provide examples of alignment of analyses from different chromatographic systems to demonstrate more complex transformation functions.
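The linear baseline that LCMSWARP is compared against assumes elution times in one analysis are a single linear function of those in the other; such a transform can be fit by ordinary least squares on matched features. A sketch under that assumption (the linear baseline, not the LCMSWARP piecewise warping itself):

```python
def fit_linear_alignment(times_a, times_b):
    """Least-squares fit of times_b ~ slope * times_a + intercept,
    from elution times of features matched between two LC-MS analyses."""
    n = len(times_a)
    mean_a = sum(times_a) / n
    mean_b = sum(times_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(times_a, times_b))
    var = sum((a - mean_a) ** 2 for a in times_a)
    slope = cov / var
    intercept = mean_b - slope * mean_a
    return slope, intercept

# synthetic matched features: run B elutes 1.02x slower with a 0.5 min offset
ta = [5.0, 10.0, 20.0, 40.0, 60.0]
tb = [1.02 * t + 0.5 for t in ta]
slope, intercept = fit_linear_alignment(ta, tb)
```

Real chromatographic drift is rarely this uniform across the gradient, which is the motivation for LCMSWARP's piecewise (dynamic warping) approach.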

  11. Toward accurate volumetry of brain aneurysms: combination of an algorithm for automatic thresholding with a 3D eraser tool.

    PubMed

    Costalat, Vincent; Maldonado, Igor Lima; Strauss, Olivier; Bonafé, Alain

    2011-06-15

    The present study describes a new approach for aneurysm volume quantification on three-dimensional angiograms, which focuses on solving three common technical problems: the variability associated with the use of manual thresholds, the irregular morphology of some aneurysms, and the imprecision of the limits between the parent artery and the aneurysm sac. The method consists of combining an algorithm for automatic threshold determination with a spherical eraser tool that allows the user to separate the image of the aneurysm from the parent artery. The accuracy of volumetry after automatic thresholding was verified with an in vitro experiment in which 57 measurements were performed using four artificial aneurysms of known volume. The reliability of the method was compared with that obtained with the technique of ellipsoid approximation in a clinical setting of 15 real angiograms and 150 measurements performed by five different users. The mean error in the measurement of the artificial aneurysms was 7.23%. The reliability of the new approach was significantly higher than that of the ellipsoid approximation. Limits of agreement between two measurements were determined with Bland-Altman plots and ranged from -14 to 13% for complex-shaped sacs and from -10.8 to 11.03% for simple-shaped sacs. The reproducibility was lower (>20% variation) for small aneurysms (<70 mm³) and for those presenting a very wide neck (dome-to-neck ratio <1). The method is potentially useful in clinical practice, since it provides relatively precise, reproducible volume quantification. A safety coiling volume can be established in order to perform sufficient but not excessive filling of the aneurysm pouch. PMID:21540054
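The abstract does not specify which automatic thresholding algorithm is used; Otsu's method, which picks the threshold maximizing between-class variance of the intensity histogram, is a common stand-in for this step and illustrates the idea:

```python
def otsu_threshold(values, n_bins=256):
    """Otsu's method on integer intensities in [0, n_bins): choose the
    threshold t (class 0 = intensities <= t) maximizing between-class variance."""
    hist = [0] * n_bins
    for v in values:
        hist[int(v)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # class-0 (background) pixel count
    sum0 = 0.0  # class-0 intensity sum
    for t in range(n_bins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0 = sum0 / w0
        mean1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mean0 - mean1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# bimodal toy data: dark background around 30-35, bright vessel around 200
pixels = [30] * 500 + [35] * 300 + [198] * 100 + [205] * 100
t = otsu_threshold(pixels)
```

For a well-separated bimodal histogram like this one, any threshold between the two modes maximizes the criterion, and the implementation returns the first such value.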

  12. Predicting outbreak detection in public health surveillance: quantitative analysis to enable evidence-based method selection.

    PubMed

    Buckeridge, David L; Okhmatovskaia, Anna; Tu, Samson; O'Connor, Martin; Nyulas, Csongor; Musen, Mark A

    2008-01-01

    Public health surveillance is critical for accurate and timely outbreak detection and effective epidemic control. A wide range of statistical algorithms is used for surveillance, and important differences have been noted in the ability of these algorithms to detect outbreaks. The evidence about the relative performance of these algorithms, however, remains limited and mainly qualitative. Using simulated outbreak data, we developed and validated quantitative models for predicting the ability of commonly used surveillance algorithms to detect different types of outbreaks. The developed models accurately predict the ability of different algorithms to detect different types of outbreaks. These models enable evidence-based algorithm selection and can guide research into algorithm development. PMID:18999264
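A representative member of the family of statistical surveillance algorithms evaluated in studies like this is the one-sided CUSUM chart, sketched here on a synthetic daily count series with hypothetical parameters:

```python
import math

def cusum_alarms(counts, baseline, k=0.5, h=4.0):
    """One-sided CUSUM on a daily count series: accumulate standardized
    exceedances above the baseline mean; alarm when the sum crosses h."""
    s, alarms = 0.0, []
    sd = math.sqrt(baseline) if baseline > 0 else 1.0  # Poisson-like scale
    for day, c in enumerate(counts):
        z = (c - baseline) / sd
        s = max(0.0, s + z - k)   # k is the reference (allowance) value
        if s > h:
            alarms.append(day)
            s = 0.0               # reset after signaling
    return alarms

# 15 quiet days around a baseline of 9 cases, then a simulated outbreak
series = [9, 8, 10, 9, 9, 11, 8, 9, 10, 9, 9, 8, 10, 9, 9, 20, 24, 30, 28, 27]
alarm_days = cusum_alarms(series, baseline=9.0)
```

The kind of quantitative model the paper develops would predict, for an algorithm like this, how detection probability and timeliness vary with outbreak shape and magnitude.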

  14. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome.

    PubMed

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that the mean oocyte number was 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rate exceeded 50% for women <35 years and was 33.2% for the group aged 35-39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  16. Algorithm-enabled exploration of image-quality potential of cone-beam CT in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan

    2015-06-01

    A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring the constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real phantom and patient data collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.

  17. Scaling to 150K cores: recent algorithm and performance engineering developments enabling XGC1 to run at scale

    SciTech Connect

    Mark F. Adams; Seung-Hoe Ku; Patrick Worley; Ed D'Azevedo; Julian C. Cummings; C.S. Chang

    2009-10-01

    Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to development of the first fully toroidal edge PIC code - XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field-line following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.

  18. Helicopter Based Magnetic Detection Of Wells At The Teapot Dome (Naval Petroleum Reserve No. 3 Oilfield: Rapid And Accurate Geophysical Algorithms For Locating Wells

    NASA Astrophysics Data System (ADS)

    Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.

    2011-12-01

    In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, the leveled total magnetic field reduced to the pole) with its first and second horizontal spatial derivatives, which were then analyzed using focal statistics and finally merged using a fuzzy combination operation. The analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. An adjustable parameter controlled detection sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.

  19. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy making purposes. Therefore, there is a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We have experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period for our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis performed posed certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets towards the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r2=89.80%), dissolved oxygen (r2=88.53%), conductivity (r2=88.18%), ammonium (r2=87.2%) and pH (r2=86.35%), while total phosphorus (r2=70.55%) and nitrates (r2=55.50%) resulted in lower correlation rates.
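The quoted rates are coefficients of determination (r²) from single linear regressions; for a simple least-squares fit, r² equals the squared Pearson correlation and can be computed directly. The reflectance and chl-a numbers below are hypothetical illustration values, not data from the study:

```python
def r_squared(x, y):
    """Coefficient of determination for the least-squares line y ~ a*x + b.
    For simple linear regression this equals the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy * sxy) / (sxx * syy)

# hypothetical band-ratio reflectances vs. measured chl-a concentrations
reflectance = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55]
chl_a       = [2.1,  3.0,  4.4,  5.9,  8.3, 11.0]
r2 = r_squared(reflectance, chl_a)
```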

  20. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
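The core of the ELM approach is that hidden-layer weights are random and fixed, so training reduces to a linear least-squares solve for the output weights. A minimal dependency-free sketch on a toy XOR task (the paper's receptive-field sampling and backpropagation refinements are omitted):

```python
import math, random

def train_elm(X, y, n_hidden=20, ridge=1e-6, seed=0):
    """'Extreme Learning Machine': hidden-layer weights are random and fixed;
    only the linear output weights are learned, by solving the ridge-regularized
    normal equations (H^T H + ridge*I) beta = H^T y."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def hidden(x):  # tanh activations of the fixed random projection
        return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
                for w, bi in zip(W, b)]

    H = [hidden(x) for x in X]
    n = len(X)
    # build A = H^T H + ridge*I and rhs = H^T y
    A = [[sum(H[k][i] * H[k][j] for k in range(n)) + (ridge if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    rhs = [sum(H[k][i] * y[k] for k in range(n)) for i in range(n_hidden)]
    # solve A beta = rhs by Gaussian elimination with partial pivoting
    for col in range(n_hidden):
        piv = max(range(col, n_hidden), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n_hidden):
            f = A[r][col] / A[col][col]
            for c in range(col, n_hidden):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n_hidden
    for i in reversed(range(n_hidden)):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j]
                                for j in range(i + 1, n_hidden))) / A[i][i]
    return lambda x: sum(bj * hj for bj, hj in zip(beta, hidden(x)))

# XOR: not linearly separable, but easy for a random nonlinear hidden layer
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
predict = train_elm(X, y)
preds = [round(predict(x)) for x in X]
```

The absence of iterative hidden-layer training is what makes ELM training so fast relative to backpropagation.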

  1. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-07-01

    Hot compression tests of Ti-6Al-4V alloy in a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic and computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely, the GA-SVR. A prominent characteristic of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across repeated runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models ranked as follows in ascending order: the mathematical regression model < ANN < GA-SVR. The stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.

  2. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain a robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are used to resolve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433
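The rigid initialization step builds on the classic ICP loop: alternate nearest-neighbour matching with a closed-form rigid fit. A minimal 2D version is sketched below on synthetic contour points (the paper's CICP additionally imposes conditional constraints and label configurations, which are not reproduced here):

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2D rigid transform (rotation theta + translation t)
    minimizing sum ||R*src + t - dst||^2 over matched point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - csx, y - csy, u - cdx, v - cdy
        sdot += ax * bx + ay * by      # dot accumulator
        scross += ax * by - ay * bx    # cross accumulator
    theta = math.atan2(scross, sdot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def icp_2d(src, dst, iters=20):
    """Plain ICP: nearest-neighbour matching + closed-form rigid update."""
    theta, tx, ty = 0.0, 0.0, 0.0
    cur = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in cur]
        theta, tx, ty = best_rigid_2d(src, matched)
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in src]
    return theta, tx, ty

# target contour = elliptical source arc rotated 0.03 rad, shifted (0.01, -0.01)
src = [(math.cos(0.2 * i), 0.5 * math.sin(0.2 * i)) for i in range(11)]
c0, s0 = math.cos(0.03), math.sin(0.03)
dst = [(c0 * x - s0 * y + 0.01, s0 * x + c0 * y - 0.01) for x, y in src]
theta, tx, ty = icp_2d(src, dst)
```

Note that plain ICP needs a reasonable initial alignment to converge, which is one motivation for a coarse-to-fine, two-step strategy like the paper's.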

  3. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
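The basic concepts mentioned — a population of candidate solutions evolved by selection, crossover, and mutation — can be sketched in a few lines; the "OneMax" fitness below (count of ones in a bitstring) is a standard toy objective:

```python
import random

def genetic_algorithm(fitness, n_bits=32, pop_size=40, generations=60,
                      p_mut=0.02, seed=7):
    """Basic genetic algorithm: tournament selection, one-point crossover,
    and bit-flip mutation over a population of bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "OneMax" toy problem: fitness = number of ones in the bitstring
best = genetic_algorithm(fitness=sum)
```

Swapping in a different `fitness` function is all that is needed to apply the same loop to other encodable search problems, which is the generality the abstract alludes to.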

  4. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus introduces a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluated on OCT images of the same gauge-block step-height reference. The method may be suitable for industrial application to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  5. The CUPIC algorithm: an accurate model for the prediction of sustained viral response under telaprevir or boceprevir triple therapy in cirrhotic patients.

    PubMed

    Boursier, J; Ducancelle, A; Vergniol, J; Veillon, P; Moal, V; Dufour, C; Bronowicki, J-P; Larrey, D; Hézode, C; Zoulim, F; Fontaine, H; Canva, V; Poynard, T; Allam, S; De Lédinghen, V

    2015-12-01

    Triple therapy using boceprevir or telaprevir remains the reference treatment for genotype 1 chronic hepatitis C in countries where new interferon-free regimens have not yet become available. Antiviral treatment is strongly indicated in cirrhotic patients, but they represent a difficult-to-treat population. We aimed to develop a simple algorithm for the prediction of sustained viral response (SVR) in cirrhotic patients treated with triple therapy. A total of 484 cirrhotic patients from the ANRS CO20 CUPIC cohort treated with triple therapy were randomly distributed into derivation and validation sets. A total of 52.1% of patients achieved SVR. In the derivation set, a D0 score for the prediction of SVR before treatment initiation included the following independent predictors collected at day 0: prior treatment response, gamma-GT, platelets, telaprevir treatment, viral load. To refine the prediction at the early phase of the treatment, a W4 score included the viral load collected at week 4 as an additional parameter. The D0 and W4 scores were combined in the CUPIC algorithm, defining three subgroups: 'no treatment initiation or early stop at week 4', 'undetermined' and 'SVR highly probable'. In the validation set, the rates of SVR in these three subgroups were, respectively, 11.1%, 50.0% and 82.2% (P < 0.001). By replacing the variable 'prior treatment response' with 'IL28B genotype', another algorithm was derived for treatment-naïve patients with similar results. The CUPIC algorithm is an easy-to-use tool that helps physicians weigh their decision between immediately treating cirrhotic patients using boceprevir/telaprevir triple therapy or waiting for new drugs to become available in their country. PMID:26216230

  6. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
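    The baseline the abstract improves upon is the standard second-order monotone cubic Hermite scheme, in which nodal derivatives are limited so that each cubic piece stays monotone. A sketch of that classical Fritsch-Carlson-style slope limiting (the paper's higher-order relaxation near extrema is not reproduced here; function name and NumPy use are ours):

```python
import numpy as np

def monotone_slopes(x, y):
    """Derivatives d_i for a monotonicity-preserving piecewise cubic Hermite
    interpolant: zero at local extrema, weighted-harmonic-mean otherwise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    delta = np.diff(y) / h                   # secant slopes on each interval
    d = np.empty_like(y)
    d[0], d[-1] = delta[0], delta[-1]        # simple one-sided endpoints
    for i in range(1, len(y) - 1):
        if delta[i - 1] * delta[i] <= 0:     # sign change: local extremum
            d[i] = 0.0
        else:                                # harmonic mean keeps monotonicity
            w1 = 2 * h[i] + h[i - 1]
            w2 = h[i] + 2 * h[i - 1]
            d[i] = (w1 + w2) / (w1 / delta[i - 1] + w2 / delta[i])
    return d
```

    It is precisely the forced flattening at extrema (d_i = 0) that costs an order of accuracy there, which motivates the relaxed geometric constraint in the paper.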

  7. Discovery of new genes and deletion editing in Physarum mitochondria enabled by a novel algorithm for finding edited mRNAs

    PubMed Central

    Gott, Jonatha M.; Parimi, Neeta; Bundschuh, Ralf

    2005-01-01

    Gene finding is complicated in organisms that exhibit insertional RNA editing. Here, we demonstrate how our new algorithm Predictor of Insertional Editing (PIE) can be used to locate genes whose mRNAs are subjected to multiple frameshifting events, and extend the algorithm to include probabilistic predictions for sites of nucleotide insertion; this feature is particularly useful when designing primers for sequencing edited RNAs. Applying this algorithm, we successfully identified the nad2, nad4L, nad6 and atp8 genes within the mitochondrial genome of Physarum polycephalum, which had gone undetected by existing programs. Characterization of their mRNA products led to the unanticipated discovery of nucleotide deletion editing in Physarum. The deletion event, which results in the removal of three adjacent A residues, was confirmed by primer extension sequencing of total RNA. This finding is remarkable in that it comprises the first known instance of nucleotide deletion in this organelle, to be contrasted with nearly 500 sites of single and dinucleotide addition in characterized mitochondrial RNAs. Statistical analysis of this larger pool of editing sites indicates that there are significant biases in the 2 nt immediately upstream of editing sites, including a reduced incidence of nucleotide repeats, in addition to the previously identified purine-U bias. PMID:16147990

  8. A caGRID-ENABLED, LEARNING BASED IMAGE SEGMENTATION METHOD FOR HISTOPATHOLOGY SPECIMENS

    PubMed Central

    Foran, David J.; Yang, Lin; Tuzel, Oncel; Chen, Wenjin; Hu, Jun; Kurc, Tahsin M.; Ferreira, Renato; Saltz, Joel H.

    2009-01-01

    Accurate segmentation of tissue microarrays is a challenging problem because of the similarities between normal tissue and tumor regions. Processing speed is another consideration when dealing with imaged tissue microarrays, as each microscopic slide may contain hundreds of digitized tissue discs. In this paper, a fast and accurate image segmentation algorithm is presented. Both a whole-disc delineation algorithm and a learning-based tumor region segmentation approach that utilizes multi-scale texton histograms are introduced. The algorithm is completely automatic and computationally efficient. The mean pixel-wise segmentation accuracy is about 90%. It requires about 1 second for whole disc (1024×1024 pixels) segmentation and less than 5 seconds for segmenting tumor regions. In order to enable remote access to the algorithm and collaborative studies, an analytical service is implemented using the caGrid infrastructure. This service wraps the algorithm and provides interfaces for remote clients to submit images for analysis and retrieve analysis results. PMID:19936299

  9. Apparatus enables accurate determination of alkali oxides in alkali metals

    NASA Technical Reports Server (NTRS)

    Dupraw, W. A.; Gahn, R. F.; Graab, J. W.; Maple, W. E.; Rosenblum, L.

    1966-01-01

    Evacuated apparatus determines the alkali oxide content of an alkali metal by separating the metal from the oxide by amalgamation with mercury. The apparatus prevents oxygen and moisture from inadvertently entering the system during the sampling and analytical procedure.

  10. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  11. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data

    PubMed Central

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  12. Tools for Accurate and Efficient Analysis of Complex Evolutionary Mechanisms in Microbial Genomes. Final Report

    SciTech Connect

    Nakhleh, Luay

    2014-03-12

    I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.

  13. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    SciTech Connect

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N{sup 2}) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N{sup 2}) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN{sup 2}) work and O(MN{sup 2}) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
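    The single-determinant baseline the abstract compares against is the classical Sherman-Morrison formula: when one row (orbital) of the Slater matrix is replaced, the determinant ratio costs O(N) and the inverse update O(N^2), versus O(N^3) for recomputation from scratch. A sketch of that textbook single-row case (not the paper's generalized low-rank, multideterminant algorithm; names and NumPy use are ours):

```python
import numpy as np

def replace_row(Ainv, new_row, k):
    """Sherman-Morrison update when row k of the Slater matrix A is replaced
    by new_row r, i.e. A_new = A + e_k (r - a_k)^T."""
    rA = new_row @ Ainv            # r^T A^{-1}: O(N^2)
    ratio = rA[k]                  # det(A_new)/det(A_old): O(N) given A^{-1}
    v = rA.copy()
    v[k] -= 1.0                    # (r - a_k)^T A^{-1}, since a_k^T A^{-1} = e_k^T
    Ainv_new = Ainv - np.outer(Ainv[:, k], v) / ratio  # rank-1 correction
    return Ainv_new, ratio
```

    In QMC the ratio alone decides the Metropolis acceptance of a single-electron move, so the O(N^2) inverse update is only performed on accepted moves.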

  16. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-01

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N2) Sherman-Morrison algorithm for up to O(N ) updates. For multideterminant configuration-interaction-type trial wave functions of M +1 determinants, the new algorithm is significantly more efficient, saving both O(MN2) work and O(MN2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.

  17. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    PubMed

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N(2)) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N(2)) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN(2)) work and O(MN(2)) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions. PMID:19485435

  18. A Fast and efficient Algorithm for Slater Determinant Updates in Quantum Monte Carlo Simulations

    SciTech Connect

    Nukala, Phani K; Kent, Paul R

    2009-01-01

    We present an efficient low-rank updating algorithm for updating the trial wavefunctions used in Quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared with traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wavefunctions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction type trial wavefunctions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration interaction type wavefunctions.

  19. DeconMSn: A Software Tool for accurate parent ion monoisotopic mass determination for tandem mass spectra

    SciTech Connect

    Mayampurath, Anoop M.; Jaitly, Navdeep; Purvine, Samuel O.; Monroe, Matthew E.; Auberry, Kenneth J.; Adkins, Joshua N.; Smith, Richard D.

    2008-04-01

    We present a new software tool for tandem MS analyses that:
    • accurately calculates the monoisotopic mass and charge of high-resolution parent ions
    • accurately operates regardless of the mass selected for fragmentation
    • performs independently of instrument settings
    • enables optimal selection of search mass tolerance for high-mass-accuracy experiments
    • is open source and thus can be tailored to individual needs
    • incorporates an SVM-based charge detection algorithm for analyzing low-resolution tandem MS spectra
    • creates multiple output data formats (.dta, .MGF)
    • handles .RAW files and .mzXML formats
    • is compatible with SEQUEST, MASCOT, X!Tandem

  20. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean  ±  confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and  -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences  ±  confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enables the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases. PMID:27510446
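    The LF/HF ratio compared in the study is a standard heart-rate-variability quantity: spectral power of beat-interval fluctuations in the low-frequency band (0.04-0.15 Hz) divided by that in the high-frequency band (0.15-0.4 Hz). A hedged sketch using even-grid resampling and a plain periodogram (the abstract does not specify its spectral estimator; names and NumPy use are ours):

```python
import numpy as np

def lf_hf_ratio(beat_times, fs=4.0):
    """LF/HF ratio from a series of beat times (seconds)."""
    beat_times = np.asarray(beat_times, float)
    rr = np.diff(beat_times)                 # beat-to-beat intervals (s)
    t = beat_times[1:]
    grid = np.arange(t[0], t[-1], 1.0 / fs)  # resample to an even grid
    rr_even = np.interp(grid, t, rr)
    rr_even -= rr_even.mean()                # remove DC before the spectrum
    psd = np.abs(np.fft.rfft(rr_even)) ** 2  # simple periodogram
    f = np.fft.rfftfreq(len(rr_even), 1.0 / fs)
    lf = psd[(f >= 0.04) & (f < 0.15)].sum()
    hf = psd[(f >= 0.15) & (f < 0.40)].sum()
    return lf / hf
```

    The same computation applies whether the beat times come from ECG R peaks or from SCG isovolumic moments, which is how the two annotation sources are compared.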

  1. Clinical application of a novel automatic algorithm for actigraphy-based activity and rest period identification to accurately determine awake and asleep ambulatory blood pressure parameters and cardiovascular risk.

    PubMed

    Crespo, Cristina; Fernández, José R; Aboy, Mateo; Mojón, Artemio

    2013-03-01

    This paper reports the results of a study designed to determine whether there are statistically significant differences between the values of ambulatory blood pressure monitoring (ABPM) parameters obtained using different methods (fixed schedule, diary, and an automatic algorithm based on actigraphy) of defining the main activity and rest periods, and to determine the clinical relevance of such differences. We studied 233 patients (98 men/135 women), 61.29 ± 0.83 yrs of age (mean ± SD). Statistical methods were used to measure agreement in the diagnosis and classification of subjects within the context of ABPM and cardiovascular disease risk assessment. The results show that there are statistically significant differences both at the group and individual levels. Those at the individual level have clinically significant implications, as they can result in a different classification, and, therefore, different diagnosis and treatment for individual subjects. The use of an automatic algorithm based on actigraphy can lead to better individual treatment by correcting the accuracy problems associated with the fixed schedule on patients whose actual activity/rest routine differs from the fixed schedule assumed, and it also overcomes the limitations and reliability issues associated with the use of diaries. PMID:23130607

  2. Noncontact accurate measurement of cardiopulmonary activity using a compact quadrature Doppler radar sensor.

    PubMed

    Hu, Wei; Zhao, Zhangyan; Wang, Yunfeng; Zhang, Haiying; Lin, Fujiang

    2014-03-01

    The designed sensor enables accurate reconstruction of chest-wall movement caused by cardiopulmonary activities, and the algorithm enables estimation of respiration, heartbeat rate, and some indicators of heart rate variability (HRV). In particular, quadrature receiver and arctangent demodulation with calibration are introduced for high-linearity representation of chest displacement; 24-bit ADCs with oversampling are adopted for radar baseband acquisition to achieve a high signal resolution; continuous-wavelet filter and ensemble empirical mode decomposition (EEMD) based algorithms are applied for cardio/pulmonary signal recovery and separation so that accurate beat-to-beat intervals can be acquired in the time domain for HRV analysis. In addition, the wireless sensor is realized and compactly integrated on a printed circuit board. The developed sensor system is successfully tested on both simulated targets and human subjects. In simulated target experiments, the baseband signal-to-noise ratio (SNR) is 73.27 dB, high enough for heartbeat detection. The demodulated signal has 0.35% mean squared error, indicating high demodulation linearity. In human subject experiments, the relative error of extracted beat-to-beat intervals ranges from 2.53% to 4.83% compared with electrocardiography (ECG) R-R peak intervals. The sensor provides an accurate analysis for heart rate with the accuracy of 100% for p = 2% and higher than 97% for p = 1%. PMID:24235293
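    The arctangent demodulation step mentioned above has a standard closed form: the unwrapped phase of the quadrature baseband pair is proportional to chest displacement through λ/(4π). A minimal sketch, assuming already-calibrated (DC-offset-free) I/Q channels and NumPy (function name is ours, not the paper's):

```python
import numpy as np

def arctangent_demodulate(I, Q, wavelength):
    """Recover displacement x(t) from quadrature Doppler baseband channels:
    x = unwrap(arctan2(Q, I)) * lambda / (4*pi)."""
    phase = np.unwrap(np.arctan2(Q, I))   # unwrap removes the +/- pi jumps
    return phase * wavelength / (4 * np.pi)
```

    Unwrapping is what gives the high-linearity displacement trace even when chest motion exceeds an eighth of a wavelength, which a small-angle (linear) demodulator could not handle.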

  3. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  4. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  5. Final Report for DOE Grant DE-FG02-03ER25579; Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting

    SciTech Connect

    Puckett, Elbridge Gerry; Miller, Gregory Hale

    2012-10-14

    Much of the work conducted under the auspices of DE-FG02-03ER25579 was characterized by an exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). For example, Andy Nonaka, one of Professor Miller's graduate students in the Department of Applied Science at U. C. Davis (UCD), wrote his PhD thesis in an area of interest to researchers in the Applied Numerical Algorithms Group (ANAG), which is a part of the National Energy Research Supercomputer Center (NERSC) at LBNL. Dr. Nonaka collaborated closely with these researchers and subsequently published the results of this collaboration jointly with them: one article in a peer-reviewed journal and one paper in the proceedings of a conference. Dr. Nonaka is now a research scientist in the Center for Computational Sciences and Engineering (CCSE), which is also part of NERSC at LBNL. This collaboration with researchers at LBNL also included having one of Professor Puckett's graduate students in the Graduate Group in Applied Mathematics (GGAM) at UCD, Sarah Williams, spend the summer working with Dr. Ann Almgren, a staff scientist in CCSE. As a result of this visit, Sarah decided to work on a problem suggested by the head of CCSE, Dr. John Bell, for her PhD thesis. Having finished all of the coursework and examinations required for a PhD, Sarah stayed at LBNL to work on her thesis under the guidance of Dr. Bell. Sarah finished her PhD thesis in June of 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at UC, and Professor Puckett has always encouraged his students to consider doing this.
Another one of Professor Puckett's graduate students in the GGAM at UCD, Christopher Algieri, was partially supported with funds from DE-FG02-03ER25579 while he wrote his MS thesis in which he analyzed and extended work originally published by Dr

  6. Technology Enabled Learning. Symposium.

    ERIC Educational Resources Information Center

    2002

    This document contains three papers on technology-enabled learning and human resource development. Among results found in "Current State of Technology-enabled Learning Programs in Select Federal Government Organizations: a Case Study of Ten Organizations" (Letitia A. Combs) are the following: the dominant delivery method is traditional…

  7. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  8. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on earth using GPS.

  9. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  10. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing bigger mapping datasets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and its verification using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high quality, ultra-dense consensus maps, with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in both mapping accuracy and computation time. PMID:25867943
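The mutate-and-select loop of an evolution strategy over marker orders can be sketched on a toy ordering problem. This is a minimal (1+1)-ES with a simple inversion-count cost; the function names, the swap mutation, and the cost are illustrative assumptions, not the authors' global optimization criterion or their "potentially good orders" heuristics.

```python
import random

def count_inversions(order):
    """Toy cost: marker pairs that appear in the wrong relative order."""
    return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
               if order[i] > order[j])

def es_order_search(order, iters=5000, seed=0):
    """Minimal (1+1)-ES on permutations: mutate by swapping two positions,
    keep the mutant unless it scores worse than the incumbent."""
    rng = random.Random(seed)
    best = list(order)
    best_cost = count_inversions(best)
    for _ in range(iters):
        trial = list(best)
        i, j = rng.sample(range(len(trial)), 2)   # mutation: one random swap
        trial[i], trial[j] = trial[j], trial[i]
        cost = count_inversions(trial)
        if cost <= best_cost:                      # (1+1) selection
            best, best_cost = trial, cost
    return best, best_cost

start = list(range(10))
random.Random(1).shuffle(start)
final, final_cost = es_order_search(start)
```

Real consensus mapping replaces the inversion count with a recombination-based criterion over multiple populations, but the loop structure is the same.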

  11. Dust control for Enabler

    NASA Technical Reports Server (NTRS)

    Hilton, Kevin; Karl, Chad; Litherland, Mark; Ritchie, David; Sun, Nancy

    1992-01-01

    The dust control group designed a system to restrict dust that is disturbed by the Enabler during its operation from interfering with astronaut or camera visibility. This design also considers the many different wheel positions made possible through the use of articulation joints that provide the steering and wheel pitching for the Enabler. The system uses a combination of brushes and fenders to restrict the dust when the vehicle is moving in either direction and in a turn. This design also allows for ease of maintenance as well as accessibility of the remainder of the vehicle.

  12. Dust control for Enabler

    NASA Technical Reports Server (NTRS)

    Hilton, Kevin; Karl, Chad; Litherland, Mark; Ritchie, David; Sun, Nancy

    1992-01-01

    The dust control group designed a system to restrict dust that is disturbed by the Enabler during its operation from interfering with astronaut or camera visibility. This design also considers the many different wheel positions made possible through the use of articulation joints that provide the steering and wheel pitching for the Enabler. The system uses a combination of brushes and fenders to restrict the dust when the vehicle is moving in either direction and in a turn. This design also allows for ease of maintenance as well as accessibility of the remainder of the vehicle.

  13. Microsystems Enabled Photovoltaics

    ScienceCinema

    Gupta, Vipin; Nielson, Greg; Okandan, Murat; Granata, Jennifer; Nelson, Jeff; Haney, Mike; Cruz-Campa, Jose Luiz

    2014-06-23

    Sandia's microsystems enabled photovoltaic advances combine mature technology and tools currently used in microsystem production with groundbreaking advances in photovoltaic cell design, decreasing production and system costs while improving energy conversion efficiency. The technology has potential applications in buildings, houses, clothing, portable electronics, vehicles, and other contoured structures.

  14. Microsystems Enabled Photovoltaics

    SciTech Connect

    Gupta, Vipin; Nielson, Greg; Okandan, Murat; Granata, Jennifer; Nelson, Jeff; Haney, Mike; Cruz-Campa, Jose Luiz

    2012-07-02

    Sandia's microsystems enabled photovoltaic advances combine mature technology and tools currently used in microsystem production with groundbreaking advances in photovoltaic cell design, decreasing production and system costs while improving energy conversion efficiency. The technology has potential applications in buildings, houses, clothing, portable electronics, vehicles, and other contoured structures.

  15. An Accurate Scalable Template-based Alignment Algorithm.

    PubMed

    Gardner, David P; Xu, Weijia; Miranker, Daniel P; Ozer, Stuart; Cannone, Jamie J; Gutell, Robin R

    2012-12-31

    The rapid determination of nucleic acid sequences is increasing the number of sequences that are available. Inherent in a template or seed alignment is the culmination of structural and functional constraints that select those mutations that are viable during the evolution of the RNA. While we might not understand these structural and functional constraints, template-based alignment programs utilize the patterns of sequence conservation to encapsulate the characteristics of viable RNA sequences that are aligned properly. We have developed a program that utilizes the different dimensions of information in rCAD, a large RNA informatics resource, to establish a profile for each position in an alignment. The most significant dimensions include sequence identity and column composition in different phylogenetic taxa. We have compared our methods with a maximum of eight alternative alignment methods on different sets of 16S and 23S rRNA sequences with sequence percent identities ranging from 50% to 100%. The results showed that CRWAlign outperformed the other alignment methods in both speed and accuracy. A web-based alignment server is available at http://www.rna.ccbb.utexas.edu/SAE/2F/CRWAlign. PMID:24772376

  16. A Complete and Accurate Ab Initio Repeat Finding Algorithm.

    PubMed

    Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua

    2016-03-01

    It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changes in gene expression and facilitating generation of novel genes. However, identification of repetitive elements can be difficult in the ab initio manner. Several classical ab initio repeat-finding tools have been presented and compared, but their completeness and accuracy in detecting repeats remain poor. To this end, we proposed a new ab initio repeat finding tool, named HashRepeatFinder, which is based on hash indexing and word counting. Furthermore, we assessed the performance of HashRepeatFinder against two other widely used tools, RepeatScout and Repeatfinder, on human genome data hg19. The results indicated the following three conclusions: (1) the completeness of HashRepeatFinder is the best among the three compared tools in almost all chromosomes, especially in chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in terms of detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially in chr3 (24 times that of RepeatScout and 250 times that of Repeatfinder) and chr19 (12 times that of RepeatScout and 60 times that of Repeatfinder); (3) in terms of accuracy, HashRepeatFinder can merge the abundant repeats with high accuracy. PMID:26272474
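The hash-index-and-word-counting idea can be sketched in a few lines: hash every k-mer to the list of its start positions, then report any k-mer seen at least twice. The function `find_repeats` below is a hypothetical minimal illustration, not HashRepeatFinder itself (which additionally merges seeds into larger repeats).

```python
from collections import defaultdict

def find_repeats(genome, k=8, min_copies=2):
    """Word-counting repeat finder: index every k-mer by its start
    positions, then keep the k-mers occurring at least `min_copies` times."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)        # hash index: k-mer -> positions
    return {kmer: pos for kmer, pos in index.items() if len(pos) >= min_copies}

seq = "ACGTACGTTTGGACGTACGTCCAACGTACGT"
repeats = find_repeats(seq, k=8)
```

On this toy sequence the 8-mer "ACGTACGT" is reported at positions 0, 12 and 23; a production tool would then extend and merge such seeds.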

  17. Liquid metal enabled pump

    PubMed Central

    Tang, Shi-Yang; Khoshmanesh, Khashayar; Sivan, Vijay; Petersen, Phred; O’Mullane, Anthony P.; Abbott, Derek; Mitchell, Arnan; Kalantar-zadeh, Kourosh

    2014-01-01

    Small-scale pumps will be the heartbeat of many future micro/nanoscale platforms. However, the integration of small-scale pumps is presently hampered by limited flow rate with respect to the input power, and their rather complicated fabrication processes. These issues arise as many conventional pumping effects require intricate moving elements. Here, we demonstrate a system that we call the liquid metal enabled pump, for driving a range of liquids without mechanical moving parts, upon the application of modest electric field. This pump incorporates a droplet of liquid metal, which induces liquid flow at high flow rates, yet with exceptionally low power consumption by electrowetting/deelectrowetting at the metal surface. We present theory explaining this pumping mechanism and show that the operation is fundamentally different from other existing pumps. The presented liquid metal enabled pump is both efficient and simple, and thus has the potential to fundamentally advance the field of microfluidics. PMID:24550485

  18. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
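For the small-separation regime the abstract mentions, where the Rayleigh-Sommerfeld integral can be evaluated accurately in the Fourier domain, a minimal angular-spectrum sketch looks like this. All parameter values are assumptions, and this is the baseline Fourier-domain evaluation only, not the authors' Gaussian-sum/unequally-spaced-FFT algorithm for the intermediate regime.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled scalar field between two parallel planes by
    multiplying its angular spectrum with the exact transfer function.
    Accurate when the plane separation z is small (Fourier-domain regime)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2j * np.pi * np.sqrt(arg.astype(complex))  # evanescent waves decay
    H = np.exp(kz * z)                              # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# usage: propagate a Gaussian beam by 50 wavelengths (assumed sampling)
wl, dx = 0.5e-6, 0.25e-6
x = (np.arange(256) - 128) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-(X**2 + Y**2) / (5e-6)**2)
u1 = angular_spectrum_propagate(u0, wl, dx, 50 * wl)
```

Because the transfer function has unit modulus for propagating frequencies, the field energy is conserved up to the (here negligible) evanescent content.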

  19. Enabling Wind Power Nationwide

    SciTech Connect

    Jose, Zayas; Michael, Derby; Patrick, Gilman; Ananthan, Shreyas; Lantz, Eric; Cotrell, Jason; Beck, Fredic; Tusing, Richard

    2015-05-01

    Leveraging this experience, the U.S. Department of Energy’s (DOE’s) Wind and Water Power Technologies Office has evaluated the potential for wind power to generate electricity in all 50 states. This report analyzes and quantifies the geographic expansion that could be enabled by accessing higher above ground heights for wind turbines and considers the means by which this new potential could be responsibly developed.

  20. Bedrails: restraints or enablers?

    PubMed

    Mullette, Betty; Zulkowski, Karen

    2004-08-01

    Bedrails presently are used as both mobility restraints and enablers in long-term care facilities. As enablers, bedrails facilitate movement and may reduce the risk of pressure ulcer development. As restraints, they impede movement and may increase risk of ulcer development. Omnibus Budget Reconciliation Act regulations on restraint use have led to confusion for state Medicare surveyors and facilities regarding the definition of appropriate bedrail use and need for supportive documentation. Consequently, some facilities receive deficiency citations for inappropriate use or documentation while others do not. The purpose of this survey was to compare responses of Directors of Nursing in long-term care facilities and Medicare state surveyors to determine how each interprets the Omnibus Budget Reconciliation Act bedrail language for use and documentation. Questionnaires on bedrail use and documentation were sent to state surveyors and Directors of Nursing. One hundred, three (103) Directors of Nursing in 45 states and 65 surveyors from 39 states participated in the survey (response rate 61%). Study results demonstrated general acceptance of bedrail use as an enabler but not as a restraint by both Directors of Nursing and state surveyors. Four percent (4%) of Directors of Nursing reported receiving a citation for bedrail use and 59% of surveyors reported issuing citations for bedrail use. Significant differences were noted between the two groups regarding appropriate bedrail use and necessary documentation. The intent of Medicare guidelines and the Centers for Medicare and Medicaid Services is to standardize care for nursing home residents in the United States; yet, current regulations are open to individual interpretation by state surveyors and confusion exists between the intent of the Omnibus Budget Reconciliation Act and the daily operations of nursing homes. Educating clinicians about the risks and benefits of bedrail use, either as restraint or enabler, and

  1. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  2. Accurate, Automated Detection of Atrial Fibrillation in Ambulatory Recordings.

    PubMed

    Linker, David T

    2016-06-01

    A highly accurate, automated algorithm would facilitate cost-effective screening for asymptomatic atrial fibrillation. This study analyzed a new algorithm and compared it to existing techniques. The incremental benefit of each step in refinement of the algorithm was measured, and the algorithm was compared to other methods using the Physionet atrial fibrillation and normal sinus rhythm databases. When analyzing segments of 21 RR intervals or less, the algorithm had a significantly higher area under the receiver operating characteristic curve (AUC) than the other algorithms tested. At analysis segment sizes of up to 101 RR intervals, the algorithm continued to have a higher AUC than any of the other methods tested, although the difference from the second best other algorithm was no longer significant, with an AUC of 0.9992 with a 95% confidence interval (CI) of 0.9986-0.9998, vs. 0.9986 (CI 0.9978-0.9994). With identical per-subject sensitivity, per-subject specificity of the current algorithm was superior to the other tested algorithms even at 101 RR intervals, with no false positives (CI 0.0-0.8%) vs. 5.3% false positives for the second best algorithm (CI 3.4-7.9%). The described algorithm shows great promise for automated screening for atrial fibrillation by reducing false positives requiring manual review, while maintaining high sensitivity. PMID:26850411
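As an illustration of RR-interval-based screening, a segment classifier can threshold a beat-to-beat irregularity statistic. The statistic (normalized RMSSD) and the threshold below are generic assumptions for illustration, not the paper's algorithm or its tuned parameters.

```python
import statistics

def rr_irregularity(rr_intervals):
    """Normalized RMSSD: RMS of successive RR differences over the mean RR."""
    diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return rmssd / statistics.mean(rr_intervals)

def classify_segment(rr_intervals, threshold=0.1):
    """Flag a segment as AF-like when beat-to-beat variability is high
    (hypothetical threshold, for illustration only)."""
    return rr_irregularity(rr_intervals) > threshold

regular = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80, 0.79, 0.81]    # sinus-like (s)
irregular = [0.62, 1.05, 0.48, 0.91, 0.55, 1.20, 0.70, 0.95]  # AF-like (s)
```

A screening pipeline would slide this over segments of 21-101 RR intervals, as in the study, and send flagged segments for manual review.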

  3. Algorithms and Sensors for Small Robot Path Following

    NASA Technical Reports Server (NTRS)

    Hogg, Robert W.; Rankin, Arturo L.; Roumeliotis, Stergios I.; McHenry, Michael C.; Helmick, Daniel M.; Bergh, Charles F.; Matthies, Larry

    2002-01-01

    Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute path following behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the result of performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
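A common follower behavior of the kind described, steering toward a lookahead point on the leader's recorded path, is pure pursuit. The sketch below is a hypothetical minimal step function, not the paper's tracking algorithm or sensor fusion.

```python
import math

def pure_pursuit_step(pose, path, lookahead=1.0):
    """One pure-pursuit step: pick the first path point at least `lookahead`
    away and return the curvature that steers the follower toward it.
    pose = (x, y, heading); path = (x, y) breadcrumbs left by the leader."""
    x, y, theta = pose
    target = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    # transform the target point into the robot frame
    dx, dy = target[0] - x, target[1] - y
    lx = math.cos(-theta) * dx - math.sin(-theta) * dy
    ly = math.sin(-theta) * dx + math.cos(-theta) * dy
    d2 = lx * lx + ly * ly
    return 0.0 if d2 == 0 else 2.0 * ly / d2   # curvature = 2*y / L^2

# robot at the origin heading along +x; the leader's path bends left
path = [(0.5, 0.0), (1.0, 0.1), (1.5, 0.3), (2.0, 0.6)]
kappa = pure_pursuit_step((0.0, 0.0, 0.0), path)
```

A positive curvature here commands a left turn; the controller re-runs this step at each cycle as new leader breadcrumbs arrive.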

  4. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.

  5. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
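For reference, the direct O(S)-per-pixel bilateral filter that the proposed algorithm accelerates can be sketched as follows. This is the baseline definition only, not the O(1) Gaussian-range approximation; parameter values are assumptions.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: each output pixel is a normalized sum over
    the window, weighted by a spatial Gaussian times a range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    norm = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s**2))   # spatial kernel
            wr = np.exp(-(shifted - img) ** 2 / (2 * sigma_r**2))  # range kernel
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm

# noisy step edge: smoothing flattens the flats but keeps the edge sharp
rng = np.random.default_rng(0)
step = np.hstack([np.zeros((16, 16)), np.ones((16, 16))])
step += 0.02 * rng.standard_normal((16, 32))
smoothed = bilateral_filter(step)
```

The range kernel suppresses contributions from across the intensity step (a difference of 1.0 against sigma_r = 0.1), which is what preserves the edge while the noise is averaged away.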

  6. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci to be performed in practice, at least if permutation testing is used to determine significance. Some more elaborate global optimization approaches, for example DIRECT, have been applied earlier to QTL search problems. Dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process. Heuristic termination criteria can thus be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results that show that for simultaneous mapping of three QTLs using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTLs. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
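The pruning idea, permanently excluding regions whose Lipschitz lower bound cannot beat the incumbent, can be sketched on a 1-D toy problem. This is a generic Lipschitz branch-and-bound for illustration, not PruneDIRECT or its QTL objective.

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-3):
    """Branch-and-bound with a Lipschitz bound: an interval whose lower
    bound cannot improve on the incumbent is pruned permanently."""
    m = 0.5 * (a + b)
    best_x, best_f = m, f(m)
    heap = [(best_f - L * (b - a) / 2, a, b)]          # (lower bound, lo, hi)
    while heap:
        bound, lo, hi = heapq.heappop(heap)
        if bound > best_f - tol:                        # prune: cannot improve
            continue
        mid = 0.5 * (lo + hi)
        for sub_lo, sub_hi in ((lo, mid), (mid, hi)):   # split and re-bound
            c = 0.5 * (sub_lo + sub_hi)
            fc = f(c)
            if fc < best_f:
                best_x, best_f = c, fc
            heapq.heappush(heap,
                           (fc - L * (sub_hi - sub_lo) / 2, sub_lo, sub_hi))
    return best_x, best_f

# toy objective with minimum at x = 0.7; |f'| <= 2.6 <= L = 4 on [0, 2]
x_opt, f_opt = lipschitz_minimize(lambda x: (x - 0.7) ** 2, 0.0, 2.0, L=4.0)
```

Because pruning is based on a valid bound rather than a heuristic stopping rule, the returned minimum is within `tol` of the true one, which mirrors the guarantee PruneDIRECT offers over plain DIRECT.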

  7. Smart Grid Enabled EVSE

    SciTech Connect

    None, None

    2014-10-15

    The combined team of GE Global Research, Federal Express, National Renewable Energy Laboratory, and Consolidated Edison has successfully achieved the established goals contained within the Department of Energy’s Smart Grid Capable Electric Vehicle Supply Equipment funding opportunity. The final program product, shown charging two vehicles in Figure 1, reduces by nearly 50% the total installed system cost of the electric vehicle supply equipment (EVSE) as well as enabling a host of new Smart Grid enabled features. These include bi-directional communications, load control, utility message exchange and transaction management information. Using the new charging system, Utilities or energy service providers will now be able to monitor transportation related electrical loads on their distribution networks, send load control commands or preferences to individual systems, and then see measured responses. Installation owners will be able to authorize usage of the stations, monitor operations, and optimally control their electricity consumption. These features and cost reductions have been developed through a total system design solution.

  8. Enable, mediate, advocate.

    PubMed

    Saan, Hans; Wise, Marilyn

    2011-12-01

    The authors of the Ottawa Charter selected the words enable, mediate and advocate to describe the core activities in what was, in 1986, the new Public Health. This article considers these concepts and the values and ideas upon which they were based. We discuss their relevance in the current context within which health promotion is being conducted, and discuss the implications of changes in the health agenda, media and globalization for practice. We consider developments within health promotion since 1986: its central role in policy rhetoric, the increasing understanding of complexities and the interlinkage with many other societal processes. So the three core activities are reviewed: they still fit well with the main health promotion challenges, but should be refreshed by new ideas and values. As the role of health promotion in the political arena has grown we have become part of the policy establishment and that is a mixed blessing. Making way for community advocates is now our challenge. Enabling requires greater sensitivity to power relations involved and an understanding of the role of health literacy. Mediating keeps its central role as it bridges vital interests of parties. We conclude that these core concepts in the Ottawa Charter need no serious revision. There are, however, lessons from the last 25 years that point to ways to address present and future challenges with greater sensitivity and effectiveness. We invite the next generation to avoid canonizing this text: as is true of every heritage, the heirs must decide on its use. PMID:22080073

  9. Enabling Computational Technologies for Terascale Scientific Simulations

    SciTech Connect

    Ashby, S.F.

    2000-08-24

    We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making practical detailed 3D simulations of radiation transport. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
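The outer iteration shared by such preconditioned solvers can be sketched with a diagonal (Jacobi) preconditioner standing in for multigrid. This is illustrative only: the Laboratory codes described above use multigrid preconditioners and run on MPPs, but the conjugate-gradient skeleton is the same.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson matrix from a finite-difference discretization (toy size)
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

Swapping the diagonal scaling for a multigrid V-cycle is what makes the iteration count, and hence the solve time, nearly independent of problem size, which is the scalability property the abstract emphasizes.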

  10. NPOESS Tools for Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Route, G.; Grant, K. D.; Hughes, B.; Reed, B.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes both NPP and NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization is responsible for the algorithms that produce the EDRs, including their quality aspects. As the Calibration and Validation activities move forward following both the NPP launch and subsequent NPOESS launches, rapid algorithm updates may be required. Raytheon and Northrop Grumman have developed tools and processes to enable changes to be evaluated, tested, and moved into the operational baseline in a rapid and efficient manner. This presentation will provide an overview of the tools available to the Cal/Val teams to ensure rapid and accurate assessment of algorithm changes, along with the processes in place to ensure baseline integrity.

  11. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made to model mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiment required to model them.

  12. Procrustes algorithm for multisensor track fusion

    NASA Astrophysics Data System (ADS)

    Fernandez, Manuel F.; Aridgides, Tom; Evans, John S., Jr.

    1990-10-01

    The association or "fusion" of multiple-sensor reports allows the generation of a highly accurate description of the environment by enabling efficient compression and processing of otherwise unwieldy quantities of data. Assuming that the observations from each sensor are aligned in feature space and in time, this association procedure may be executed on the basis of how well each sensor's vectors of observations match previously fused tracks. Unfortunately, distance-based algorithms alone do not suffice in those situations where match-assignments are not of an obvious nature (e.g., high target density or high false alarm rate scenarios). Our proposed approach is based on recognizing that, together, the sensors' observations and the fused tracks span a vector subspace whose dimensionality and singularity characteristics can be used to determine the total number of targets appearing across sensors. A properly constrained transformation can then be found which aligns the subspaces spanned individually by the observations and by the fused tracks, yielding the relationship existing between both sets of vectors ("Procrustes Problem"). The global nature of this approach thus enables fusing closely-spaced targets by treating them, in a manner analogous to PDA/JPDA algorithms, as clusters across sensors. Since our particular version of the Procrustes Problem consists basically of a minimization in the Total Least Squares sense, the resulting transformations associate both observations-to-tracks and tracks-to-observations. This means that the number of tracks being updated will increase or decrease depending on the number of targets present, automatically initiating or deleting "fused" tracks as required, without the need of ancillary procedures. In addition, it is implicitly assumed that both the tracker filters' target trajectory models and the sensors' observations are "noisy", yielding an algorithm robust even against maneuvering targets. Finally, owing to the fact
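The core alignment step, finding the transformation that best maps one set of track vectors onto another, is the classical orthogonal Procrustes problem, solvable in closed form with an SVD. The sketch below is that textbook special case, not the paper's constrained total-least-squares variant.

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal Procrustes: the rotation R minimizing ||A @ R - B||_F
    is U @ Vt, where U, S, Vt is the SVD of A.T @ B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# sensor 2 reports the sensor-1 tracks rotated by 30 degrees (toy data)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
tracks1 = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0], [-1.0, 2.0]])
tracks2 = tracks1 @ R_true
R_est = procrustes_align(tracks1, tracks2)
```

With noisy observations the recovered R is the best-fitting orthogonal alignment rather than an exact inverse, which is the regime the abstract's total-least-squares formulation addresses.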

  13. Enabling graphene nanoelectronics.

    SciTech Connect

    Pan, Wei; Ohta, Taisuke; Biedermann, Laura Butler; Gutierrez, Carlos; Nolen, C. M.; Howell, Stephen Wayne; Beechem Iii, Thomas Edwin; McCarty, Kevin F.; Ross, Anthony Joseph, III

    2011-09-01

    Recent work has shown that graphene, a 2D electronic material amenable to planar semiconductor fabrication processing, possesses tunable electronic material properties potentially far superior to metals and other standard semiconductors. Despite its phenomenal electronic properties, focused research is still required to develop techniques for depositing and synthesizing graphene over large areas, thereby enabling the reproducible mass-fabrication of graphene-based devices. To address these issues, we combined an array of growth approaches and characterization resources to investigate several innovative and synergistic approaches for the synthesis of high quality graphene films on technologically relevant substrates (SiC and metals). Our work focused on developing the fundamental scientific understanding necessary to generate large-area graphene films that exhibit highly uniform electronic properties and record carrier mobility, as well as developing techniques to transfer graphene onto other substrates.

  14. Algorithm To Design Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, Charles C.

    1988-01-01

    Way found to exploit Massey-Omura multiplication algorithm. Generalized algorithm locates normal basis in Galois field GF(2^m) and enables development of another algorithm to construct product function.
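
    The Massey-Omura scheme relies on the defining property of a normal basis {β, β², β⁴, …, β^(2^(m-1))}: squaring an element is a cyclic shift of its coordinates. The sketch below verifies this in GF(2³) with the primitive polynomial x³ + x + 1, a standard textbook example, not the basis-locating algorithm of the abstract.

```python
MOD = 0b1011  # reduction polynomial x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^3) = GF(2)[x]/(x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

# beta = x^3 = x + 1 generates a normal basis {beta, beta^2, beta^4}
beta = 0b011
b2 = gf_mul(beta, beta)
basis = [beta, b2, gf_mul(b2, b2)]

def coords(e):
    """Coordinates of e in the normal basis (brute force over GF(2)^3)."""
    for c in range(8):
        v = 0
        for i in range(3):
            if (c >> i) & 1:
                v ^= basis[i]
        if v == e:
            return [(c >> i) & 1 for i in range(3)]

# Squaring is a cyclic shift of normal-basis coordinates, for every element:
print(all(coords(gf_mul(e, e)) == coords(e)[-1:] + coords(e)[:-1]
          for e in range(8)))  # True
```

    This cheap squaring is the property that makes normal-basis multipliers attractive for exponentiation hardware.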

  15. Low resource processing algorithms for laser Doppler blood flow imaging.

    PubMed

    Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; He, Diwei; Morgan, Stephen P

    2011-07-01

    The emergence of full field laser Doppler blood flow imaging systems based on CMOS camera technology means that a large amount of data from each pixel in the image needs to be processed rapidly and system resources need to be used efficiently. Conventional processing algorithms that are utilized in single point or scanning systems are therefore not an ideal solution as they will consume too much system resource. Two processing algorithms that address this problem are described and efficiently implemented in a field programmable gate array. The algorithms are simple enough to use low system resource but effective enough to produce accurate flow measurements. This enables the processing unit to be integrated entirely in an embedded system, such as in an application-specific integrated circuit. The first algorithm uses a short Fourier transformation length (typically 8) but averages the output multiple times (typically 128). The second method utilizes an infinite impulse response filter with a low number of filter coefficients that operates in the time domain and has a frequency-weighted response. The algorithms compare favorably with the reference standard 1024 point fast Fourier transform in terms of both resource usage and accuracy. The number of data words per pixel that need to be stored for the algorithms is 1024 for the reference standard, 8 for the short length Fourier transform algorithm and 5 for the algorithm based on the infinite impulse response filter. Compared to the reference standard the error in the flow calculation is 1.3% for the short length Fourier transform algorithm and 0.7% for the algorithm based on the infinite impulse response filter. PMID:21316289
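
    As an illustration of the first algorithm's structure (short transform, many averages), the sketch below averages 8-point periodograms over 128 blocks and forms a frequency-weighted first moment, a common laser Doppler flow metric; the exact weighting and normalization used in the paper may differ.

```python
import numpy as np

def flow_estimate(signal, nfft=8):
    """Average many short (nfft-point) periodograms, then return the
    frequency-weighted first moment -- a common laser Doppler flow metric."""
    blocks = signal[: len(signal) // nfft * nfft].reshape(-1, nfft)
    spectra = np.abs(np.fft.rfft(blocks, axis=1)) ** 2
    power = spectra.mean(axis=0)             # averaged periodogram
    k = np.arange(power.size)                # FFT bin index ~ frequency
    return float((k[1:] * power[1:]).sum())  # omit the DC (static) term

rng = np.random.default_rng(1)
n = 8 * 128                                  # 128 blocks of 8 samples
t = np.arange(n)
slow = np.sin(2 * np.pi * t * 1 / 8) + 0.1 * rng.standard_normal(n)
fast = np.sin(2 * np.pi * t * 3 / 8) + 0.1 * rng.standard_normal(n)
print(flow_estimate(fast) > flow_estimate(slow))  # higher Doppler shift -> larger metric
```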

  16. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and supports thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  17. Accurate permittivity measurements for microwave imaging via ultra-wideband removal of spurious reflectors.

    PubMed

    Pelletier, Mathew G; Viera, Joseph A; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of the hidden moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that cause destructive and constructive interference, at the receiver, as the material is conveyed past the imaging antenna array. In an effort to remove the influence of the reflectors, such as metal bale ties, research was conducted to develop an algorithm for removal of the influence of the local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties. PMID:22163668

  18. Accurate Permittivity Measurements for Microwave Imaging via Ultra-Wideband Removal of Spurious Reflectors

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of the hidden moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that cause destructive and constructive interference, at the receiver, as the material is conveyed past the imaging antenna array. In an effort to remove the influence of the reflectors, such as metal bale ties, research was conducted to develop an algorithm for removal of the influence of the local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties. PMID:22163668

  19. Enabling distributed petascale science.

    SciTech Connect

    Baranovski, A.; Bharathi, S.; Bresnahan, J.; chervenak, A.; Foster, I.; Fraser, D.; Freeman, T.; Gunter, D.; Jackson, K.; Keahey, K.; Kesselman, C.; Konerding, D. E.; Leroy, N.; Link, M.; Livny, M.; Miller, N.; Miller, R.; Oleynik, G.; Pearlman, L.; Schopf, J. M.; Schuler, R.; Tierney, B.; Mathematics and Computer Science; FNL; Univ. of Southern California; Univ. of Chicago; LBNL; Univ. of Wisconsin

    2007-01-01

    Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but also the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.

  20. Enabling immersive simulation.

    SciTech Connect

    McCoy, Josh; Mateas, Michael; Hart, Derek H.; Whetzel, Jonathan; Basilico, Justin Derrick; Glickman, Matthew R.; Abbott, Robert G.

    2009-02-01

    The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.

  1. Grid-Enabled Measures

    PubMed Central

    Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha

    2011-01-01

    Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute with two overarching goals: (1) Promote the use of standardized measures, which are tied to theoretically based constructs; and (2) Facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting and viewing meta-data about the measures and associated constructs. This paper will describe the web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586

  2. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  3. The accurate assessment of small-angle X-ray scattering data

    DOE PAGESBeta

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  4. The accurate assessment of small-angle X-ray scattering data

    SciTech Connect

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  5. The accurate assessment of small-angle X-ray scattering data

    PubMed Central

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne; Snell, Edward H.

    2015-01-01

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality. PMID:25615859

  6. Fast and accurate near-field - far-field transformation by sampling interpolation of plane-polar measurements

    NASA Astrophysics Data System (ADS)

    Bucci, Ovidio M.; Gennarelli, Claudio; Savarese, Catello

    1991-01-01

    An optimal sampling interpolation algorithm which allows the accurate recovery of plane-rectangular near-field samples from the knowledge of the plane-polar ones is developed. This enables the standard near field-far field (NF-FF) transformation, which takes full advantage of the FFT algorithm, to be applied to plane-polar scanning. The maximum allowable sample spacing is also rigorously derived, and it is shown that it can be significantly greater than lambda/2 as the measurement plane moves away from the source. This allows a remarkable reduction of both measurement time and memory storage requirements. The sampling approach is compared with that based on the bivariate Lagrange interpolation (BLI) method. The sampling reconstruction agrees with the exact results significantly better than the BLI, in spite of the significantly lower number of required measurements.

  7. A MATLAB-based tool for accurate detection of perfect overlapping and nested inverted repeats in DNA sequences

    PubMed Central

    Sreeskandarajan, Sutharzan; Flowers, Michelle M.; Karro, John E.; Liang, Chun

    2014-01-01

    Summary: Palindromic sequences, or inverted repeats (IRs), in DNA sequences are involved in important biological processes such as DNA–protein binding, DNA replication and DNA transposition. Development of bioinformatics tools that are capable of accurately detecting perfect IRs can enable genome-wide studies of IR patterns in both prokaryotes and eukaryotes. Departing from conventional string-comparison approaches, we propose a novel algorithm that uses a cumulative score system based on a prime number representation of nucleotide bases. We then implemented this algorithm as a MATLAB-based program for perfect IR detection. In comparison with other existing tools, our program demonstrates high accuracy in detecting nested and overlapping IRs. Availability and implementation: The source code is freely available at http://bioinfolab.miamioh.edu/bioinfolab/palindrome.php Contact: liangc@miamioh.edu or karroje@miamioh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24215021
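
    For reference, a perfect inverted repeat can be detected by a brute-force reverse-complement scan; the sketch below is such a baseline, not the prime-number cumulative-score algorithm the paper proposes (arm and spacer limits are illustrative parameters).

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def perfect_irs(seq, min_arm=3, max_spacer=4):
    """Brute-force scan for perfect inverted repeats:
    returns (start, arm_length, spacer_length) triples."""
    hits = []
    n = len(seq)
    for i in range(n):
        for arm in range(min_arm, (n - i) // 2 + 1):
            for gap in range(0, max_spacer + 1):
                j = i + arm + gap
                if j + arm > n:
                    break
                if seq[i:i + arm] == revcomp(seq[j:j + arm]):
                    hits.append((i, arm, gap))
    return hits

print(perfect_irs("TTAAGCTTGG"))  # [(2, 3, 0)] -- the HindIII palindrome AAGCTT
```

    This baseline is quadratic-and-worse in sequence length, which is exactly why faster scoring schemes such as the paper's are of interest for genome-wide scans.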

  8. Volume-preserving algorithm for secular relativistic dynamics of charged particles

    SciTech Connect

    Zhang, Ruili; Liu, Jian; Wang, Yulei; He, Yang; Qin, Hong; Sun, Yajuan

    2015-04-15

    Secular dynamics of relativistic charged particles has theoretical significance and a wide range of applications. However, conventional algorithms are not applicable to this problem due to the coherent accumulation of numerical errors. To overcome this difficulty, we develop a volume-preserving algorithm (VPA) with long-term accuracy and conservativeness via a systematic splitting method. Applied to the simulation of runaway electrons over a time span of more than 10 orders of magnitude, the VPA generates accurate results and enables the discovery of new physics for secular runaway dynamics.

  9. Enabling cleanup technology transfer.

    SciTech Connect

    Ditmars, J. D.

    2002-08-12

    Technology transfer in the environmental restoration, or cleanup, area has been challenging. While there is little doubt that innovative technologies are needed to reduce the times, risks, and costs associated with the cleanup of federal sites, particularly those of the Departments of Energy (DOE) and Defense, the use of such technologies in actual cleanups has been relatively limited. There are, of course, many reasons why technologies do not reach the implementation phase or do not get transferred from developing entities to the user community. For example, many past cleanup contracts provided few incentives for performance that would compel a contractor to seek improvement via technology applications. While performance-based contracts are becoming more common, they alone will not drive increased technology applications. This paper focuses on some applications of cleanup methodologies and technologies that have been successful and are illustrative of a more general principle. The principle is at once obvious and not widely practiced. It is that, with few exceptions, innovative cleanup technologies are rarely implemented successfully alone but rather are implemented in the context of enabling processes and methodologies. And, since cleanup is conducted in a regulatory environment, the stage is better set for technology transfer when the context includes substantive interactions with the relevant stakeholders. Examples of this principle are drawn from Argonne National Laboratory's experiences in Adaptive Sampling and Analysis Programs (ASAPs), Precise Excavation, and the DOE Technology Connection (TechCon) Program. The lessons learned may be applicable to the continuing challenges posed by the cleanup and long-term stewardship of radioactive contaminants and unexploded ordnance (UXO) at federal sites.

  10. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  11. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates. PMID:26357094
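
    The inverse transform method mentioned above maps a uniform random number through the inverse CDF of the target distribution. A minimal illustration for the exponential distribution (a textbook case, not the paper's stress-fluctuation jump process):

```python
import math
import random

def sample_exponential(lam, rng):
    """Inverse transform method: if U ~ Uniform(0,1), then
    -ln(1 - U)/lam has CDF 1 - exp(-lam * x), i.e. Exponential(lam)."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(abs(mean - 0.5) < 0.01)  # sample mean close to 1/lam = 0.5
```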

  12. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. While many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation when close to the source and a plane-wave approximation when far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We associate with these local operators a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
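
    As context for the global fast sweeping step, the sketch below is a textbook first-order fast sweeping solver on a uniform 2D grid with constant velocity; it lacks the paper's spherical-approximation operators near the source and its variable grid spacing, so it is only a baseline illustration.

```python
import math

def fast_sweep_eikonal(n, src, h=1.0, v=1.0, n_sweeps=4):
    """First-order fast sweeping solver for |grad T| = 1/v on a uniform
    n x n grid with a point source at src (a textbook sketch)."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    T[src[0]][src[1]] = 0.0
    f = h / v
    for _ in range(n_sweeps):
        for si, sj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:  # 4 sweep orders
            ii = range(n) if si > 0 else range(n - 1, -1, -1)
            for i in ii:
                jj = range(n) if sj > 0 else range(n - 1, -1, -1)
                for j in jj:
                    a = min(T[i - 1][j] if i > 0 else INF,
                            T[i + 1][j] if i < n - 1 else INF)
                    b = min(T[i][j - 1] if j > 0 else INF,
                            T[i][j + 1] if j < n - 1 else INF)
                    if abs(a - b) >= f:   # causal one-sided update
                        t_new = min(a, b) + f
                    else:                 # two-sided quadratic update
                        t_new = 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))
                    if t_new < T[i][j]:
                        T[i][j] = t_new
    return T

T = fast_sweep_eikonal(21, (10, 10))
print(T[10][15])  # 5.0 -- along a grid axis the first-order update is exact
```

    Along diagonals this first-order scheme overestimates the true Euclidean traveltime, which is one motivation for the higher-accuracy local operators the paper develops.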

  13. FOILFest: community enabled security.

    SciTech Connect

    Moore, Judy Hennessey; Johnson, Curtis Martin; Whitley, John B.; Drayer, Darryl Donald; Cummings, John C., Jr.

    2005-09-01

    The Advanced Concepts Group of Sandia National Laboratories hosted a workshop, ''FOILFest: Community Enabled Security'', on July 18-21, 2005, in Albuquerque, NM. This was a far-reaching look into the future of physical protection consisting of a series of structured brainstorming sessions focused on preventing and foiling attacks on public places and soft targets such as airports, shopping malls, hotels, and public events. These facilities are difficult to protect using traditional security devices since they could easily be pushed out of business through the addition of arduous and expensive security measures. The idea behind this Fest was to explore how the public, which is vital to the function of these institutions, can be leveraged as part of a physical protection system. The workshop considered procedures, space design, and approaches for building community through technology. The workshop explored ways to make the ''good guys'' in public places feel safe and be vigilant while making potential perpetrators of harm feel exposed and convinced that they will not succeed. Participants in the Fest included operators of public places, social scientists, technology experts, representatives of government agencies including DHS and the intelligence community, writers and media experts. Many innovative ideas were explored during the fest with most of the time spent on airports, including consideration of the local airport, the Albuquerque Sunport. Some provocative ideas included: (1) sniffers installed in passage areas like revolving door, escalators, (2) a ''jumbotron'' showing current camera shots in the public space, (3) transparent portal screeners allowing viewing of the screening, (4) a layered open/funnel/open/funnel design where open spaces are used to encourage a sense of ''communitas'' and take advantage of citizen ''sensing'' and funnels are technological tunnels of sensors (the tunnels of truth), (5) curved benches with blast proof walls or backs, (6

  14. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem arises in various engineering fields and has been studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys; it can therefore find the shortest route even if the map data contains blind alleys. Experiments using map data demonstrate its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
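
    A minimal sketch of the blind-alley idea: ants walk probabilistically under pheromone guidance, and any node discovered to be a dead end is backtracked out of and placed on a tabu list. The graph, weights and colony size below are illustrative assumptions, not the paper's parameters.

```python
import random

def ant_walk(graph, start, goal, pheromone, tabu, rng):
    """One ant: probabilistic walk biased by pheromone; nodes that turn out
    to be blind alleys are backtracked out of and added to the tabu list."""
    path, visited = [start], {start}
    while path and path[-1] != goal:
        here = path[-1]
        options = [n for n in graph[here] if n not in visited and n not in tabu]
        if not options:                 # blind alley: back out, mark tabu
            tabu.add(path.pop())
            continue
        weights = [pheromone[(here, n)] for n in options]
        nxt = rng.choices(options, weights=weights)[0]
        path.append(nxt)
        visited.add(nxt)
    return path

# 'C' is a blind alley; the only route to the goal is A -> D -> E -> F.
graph = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B"],
         "D": ["A", "E"], "E": ["D", "F"], "F": ["E"]}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
rng = random.Random(7)
best = None
for _ in range(20):                     # run a small colony
    path = ant_walk(graph, "A", "F", pheromone, set(), rng)
    if path and (best is None or len(path) < len(best)):
        best = path
    for u, v in zip(path, path[1:]):    # reinforce the successful route
        pheromone[(u, v)] += 1.0 / len(path)
print(best)  # ['A', 'D', 'E', 'F'] -- the dead end through C is avoided
```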

  15. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  16. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  17. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage, specifically by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  18. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the selections of large-degree users are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and second-order correlations to suppress the influence of mainstream preferences, we present a directed second-order CF (HDCF) algorithm that addresses both the accuracy and the diversity of CF. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking scores reach 0.0767 and 0.0402, improvements of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.

  19. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
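    The core idea, combining finite-difference estimates at two mesh sizes so the leading error term cancels, can be sketched in a few lines. This is a minimal illustration of Richardson extrapolation, not the authors' code; the second-derivative example and function names are ours:

    ```python
    import math

    def second_derivative(f, x, h):
        """Central-difference second derivative, error O(h^2)."""
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

    def richardson(f, x, h):
        """Combine estimates at h and h/2 to cancel the O(h^2) error term."""
        coarse = second_derivative(f, x, h)
        fine = second_derivative(f, x, h / 2.0)
        return (4.0 * fine - coarse) / 3.0

    x, h = 1.0, 0.1
    exact = -math.sin(x)            # d^2/dx^2 sin(x) = -sin(x)
    plain = second_derivative(math.sin, x, h)
    extrap = richardson(math.sin, x, h)
    print(abs(plain - exact), abs(extrap - exact))
    ```

    The extrapolated estimate is accurate to O(h^4), several orders of magnitude better than the plain O(h^2) difference at the same crude mesh, which is the effect the abstract exploits for expectation values.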

  20. Enabling R&D for accurate simulation of non-ideal explosives.

    SciTech Connect

    Aidun, John Bahram; Thompson, Aidan Patrick; Schmitt, Robert Gerard

    2010-09-01

    We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin to be able to treat the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.

  1. Enabling accurate photodiode detection of multiple optical traps by spatial filtering

    NASA Astrophysics Data System (ADS)

    Ott, Dino; Reihani, S. Nader S.; Oddershede, Lene B.

    2014-09-01

    Dual and multiple beam optical tweezers allow for advanced trapping geometries beyond single traps; however, these increased manipulation capabilities usually complicate the detection of position and force. The accuracy of position and force measurements is often compromised by crosstalk between the detected signals, which leads to a systematic error on the measured forces and distances. In dual-beam optical trapping setups, the two traps are typically orthogonally polarized, and crosstalk can be minimized by inserting polarization optics in front of the detector; however, this method is imperfect because of the depolarization of the trapping beam introduced by the required high numerical aperture optics. Moreover, the restriction to two orthogonal polarization states limits the number of detectable traps to two. Here, we present an easy-to-implement method to efficiently eliminate crosstalk in dual-beam setups. The technique is based on spatial filtering and is highly compatible with standard back-focal-plane photodiode-based detection. The reported method significantly improves the accuracy of force-distance measurements, e.g., of single molecules, hence providing much more scientific value for the experimental efforts. Furthermore, it opens the possibility of fast, simultaneous photodiode-based detection of multiple holographically generated optical traps.

  2. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method

    SciTech Connect

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after contribution to the interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, whether employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  3. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy; that is, we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.
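    The difficulty the paper targets, relative accuracy in the smallest singular values, is easy to exhibit numerically. The toy below is not the paper's algorithm: for A = Q D with Q orthogonal and D a graded diagonal, the exact singular values are simply the diagonal entries of D, so the relative error of a standard SVD routine can be measured directly:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Orthogonal Q times a graded diagonal: the exact singular values of
    # A = Q @ D are the diagonal entries of D, spanning ten orders of magnitude.
    Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
    d = np.array([1.0, 1e-2, 1e-4, 1e-6, 1e-8, 1e-10])
    A = Q @ np.diag(d)

    sigma = np.linalg.svd(A, compute_uv=False)   # returned in descending order
    rel_err = np.abs(sigma - d) / d
    print(rel_err)
    ```

    A conventional SVD guarantees absolute error on the order of machine epsilon times the matrix norm, so the largest singular value is accurate to nearly all digits while the relative error on the tiny ones can be far worse; the paper's preconditioned algorithms are designed to deliver full relative accuracy across the whole range.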

  4. Camera-enabled techniques for organic synthesis

    PubMed Central

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  5. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  6. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work covered the development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  7. Robust, accurate and fast automatic segmentation of the spinal cord.

    PubMed

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

    Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord. PMID:24780696
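    The initialization step's circular Hough transform can be sketched directly: each edge pixel votes for every centre that could place it on a circle of the given radius, and the accumulator peak is the detected centre. This is a generic illustration with a fixed, known radius and names of our choosing, not the PropSeg implementation:

    ```python
    import numpy as np

    def hough_circle_center(edges, radius):
        """Accumulate votes for circle centres at a fixed radius.

        edges : 2-D boolean array of edge pixels.
        Returns the (row, col) position of the accumulator peak.
        """
        h, w = edges.shape
        acc = np.zeros((h, w))
        thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
        ys, xs = np.nonzero(edges)
        for y, x in zip(ys, xs):
            # Back-project: centres lie on a circle of the same radius
            # around each edge pixel.
            cy = np.round(y - radius * np.sin(thetas)).astype(int)
            cx = np.round(x - radius * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        return np.unravel_index(np.argmax(acc), acc.shape)

    # Synthetic test image: a circle of radius 10 centred at (25, 30).
    img = np.zeros((50, 60), dtype=bool)
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    img[np.round(25 + 10 * np.sin(t)).astype(int),
        np.round(30 + 10 * np.cos(t)).astype(int)] = True
    print(hough_circle_center(img, 10))   # expected near (25, 30)
    ```

    PropSeg runs this kind of detection on multiple axial slices to estimate both the cord's position and its orientation before propagating the deformable mesh.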

  8. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds that may enable the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  9. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  10. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  11. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  12. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g., van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807

  13. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce substantial performance gains while extracting only slightly less energy than the classical MPD algorithm.
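    The classical MPD loop described above (correlate, pick the best-fit atom, subtract its contribution, recurse on the residual) is compact enough to sketch. This is a generic illustration with a toy orthonormal dictionary, not MPD++ and not the coarse-fine or multiple-atom extensions:

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_iter=10, tol=1e-6):
        """Greedy MPD: repeatedly pick the atom with the largest
        cross-correlation, subtract its contribution, and repeat on the
        residual until the correlation falls below tol.

        dictionary : (n_atoms, n_samples) array of unit-norm atoms.
        Returns (coefficients, residual).
        """
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[0])
        for _ in range(n_iter):
            corr = dictionary @ residual          # cross-correlate all atoms
            k = np.argmax(np.abs(corr))           # best-fit atom
            if abs(corr[k]) < tol:                # stopping criterion
                break
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[k]
        return coeffs, residual

    # Toy dictionary of unit-norm sinusoids; the signal is a sparse mix
    # of two atoms, which greedy pursuit recovers exactly.
    n = 128
    t = np.arange(n)
    atoms = np.array([np.sin(2 * np.pi * f * t / n) for f in range(1, 9)])
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
    signal = 3.0 * atoms[2] + 1.5 * atoms[5]
    coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
    print(np.round(coeffs, 3), np.linalg.norm(residual))
    ```

    The per-iteration cost is dominated by the correlation against every atom, which is exactly why pruning the dictionary and extracting multiple atoms per iteration (as in MPD++) pay off.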

  14. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  15. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
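    A standard Maximum Entropy spectral estimator is Burg's method, which fits an autoregressive model by minimizing forward and backward prediction errors and then evaluates the power spectrum from the AR coefficients. The sketch below is a generic textbook implementation in Python (the original work was FORTRAN 77; this is not that code):

    ```python
    import numpy as np

    def burg(x, order):
        """Burg's method: reflection coefficients from forward/backward
        prediction errors. Returns the AR polynomial a (a[0] = 1) and the
        final prediction-error power e."""
        x = np.asarray(x, dtype=float)
        a = np.array([1.0])
        e = x.dot(x) / len(x)
        f, b = x.copy(), x.copy()
        for m in range(order):
            fp, bp = f[m + 1:], b[m:-1]
            k = -2.0 * bp.dot(fp) / (fp.dot(fp) + bp.dot(bp))
            f_new = fp + k * bp
            b_new = bp + k * fp
            f[m + 1:], b[m + 1:] = f_new, b_new
            a = np.concatenate([a, [0.0]])
            a = a + k * a[::-1]                 # Levinson-style update
            e *= 1.0 - k * k
        return a, e

    # Maximum Entropy spectrum of a noisy sinusoid at 0.1 cycles/sample.
    rng = np.random.default_rng(0)
    t = np.arange(400)
    x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(len(t))
    a, e = burg(x, order=4)
    freqs = np.fft.rfftfreq(1024)
    psd = e / np.abs(np.fft.rfft(a, 1024)) ** 2
    peak = freqs[np.argmax(psd)]
    print(peak)
    ```

    The sharpness of the AR spectral peak at low model orders is exactly the resolution advantage of Maximum Entropy Methods that the report evaluates.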

  16. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses the conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.
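    While the report's own algorithms are more involved, the basic relation underlying most active bandwidth measurement is packet-pair dispersion: two packets sent back-to-back leave the bottleneck link separated by one packet's serialization time, so capacity is approximately size divided by gap. A minimal, generic sketch (not the report's method):

    ```python
    def packet_pair_bandwidth(packet_size_bytes, arrival_gap_s):
        """Textbook packet-pair estimate: the bottleneck's capacity equals
        the packet size divided by the inter-arrival dispersion.
        Returns bits per second."""
        return packet_size_bytes * 8 / arrival_gap_s

    # A 1500-byte packet pair arriving 120 microseconds apart implies a
    # bottleneck capacity of 100 Mbit/s.
    print(packet_pair_bandwidth(1500, 120e-6) / 1e6)
    ```

    In practice cross traffic can compress or stretch the measured gap, which is precisely why the report's requirements, multi-packet bursts, and the Maximum Burst Size parameter matter.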

  17. Toward genome-enabled mycology.

    PubMed

    Hibbett, David S; Stajich, Jason E; Spatafora, Joseph W

    2013-01-01

    Genome-enabled mycology is a rapidly expanding field that is characterized by the pervasive use of genome-scale data and associated computational tools in all aspects of fungal biology. Genome-enabled mycology is integrative and often requires teams of researchers with diverse skills in organismal mycology, bioinformatics and molecular biology. This issue of Mycologia presents the first complete fungal genomes in the history of the journal, reflecting the ongoing transformation of mycology into a genome-enabled science. Here, we consider the prospects for genome-enabled mycology and the technical and social challenges that will need to be overcome to grow the database of complete fungal genomes and enable all fungal biologists to make use of the new data. PMID:23928422

  18. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  19. Technology Enabling the First 100 Exoplanets

    NASA Astrophysics Data System (ADS)

    Marcy, Geoffrey W.

    2014-01-01

    The discoveries of the first 100 exoplanets by precise radial velocities in the late 1990's at Lick Observatory and Observatoire de Haute-Provence were enabled by several technological advances and a cultural one. A key ingredient was a cross-dispersed echelle spectrometer at a stable, coude focus, with a CCD detector, offering high spectral resolution, large wavelength coverage, and a linear response to photons. A second ingredient was a computer capable of storing the megabyte images from such spectrometers and analyzing them for Doppler shifts. Both Lick and OHP depended on these advances. A third ingredient was a stable wavelength calibration. Here, two technologies emerged independently, with iodine gas employed by Marcy's group (used first by solar physicists doing helioseismology) and simultaneous thorium-argon spectra (enabled by fiber optics) used by Mayor's group. A final ingredient was a new culture emerging in the 1990's of forward-modeling of spectra on computers, enabled by the well-behaved photon noise of CCDs, giving Poisson errors amenable to rigorous statistical algorithms for measuring millipixel Doppler shifts. The prospect of detecting the 12 meter/sec reflex velocity (1/100 pixel) of a Jupiter-like planet was considered impossible, except to a few who asked, "What actually limits Doppler precision?". Inspired insights were provided by Robert Howard, Paul Schechter, Bruce Campbell, and Gordon Walker, leading to the first 100 exoplanets.

  20. An algorithm to discover gene signatures with predictive potential

    PubMed Central

    2010-01-01

    Background The advent of global gene expression profiling has generated unprecedented insight into our molecular understanding of cancer, including breast cancer. For example, human breast cancer patients display significant diversity in terms of their survival, recurrence, metastasis as well as response to treatment. These patient outcomes can be predicted by the transcriptional programs of their individual breast tumors. Predictive gene signatures allow us to correctly classify human breast tumors into various risk groups as well as to more accurately target therapy to ensure more durable cancer treatment. Results Here we present a novel algorithm to generate gene signatures with predictive potential. The method first classifies the expression intensity for each gene as determined by global gene expression profiling as low, average or high. The matrix containing the classified data for each gene is then used to score the expression of each gene based its individual ability to predict the patient characteristic of interest. Finally, all examined genes are ranked based on their predictive ability and the most highly ranked genes are included in the master gene signature, which is then ready for use as a predictor. This method was used to accurately predict the survival outcomes in a cohort of human breast cancer patients. Conclusions We confirmed the capacity of our algorithm to generate gene signatures with bona fide predictive ability. The simplicity of our algorithm will enable biological researchers to quickly generate valuable gene signatures without specialized software or extensive bioinformatics training. PMID:20813028
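    The three steps described, discretize each gene's expression as low/average/high, score each gene individually against the patient characteristic, then rank and keep the top genes, can be sketched as follows. This is our generic reading of the procedure, with hypothetical tertile cut-offs and a majority-vote score, not the authors' software:

    ```python
    import numpy as np

    def discretize(expr):
        """Classify each gene's expression per sample as low (0), average (1),
        or high (2), using per-gene tertiles (one plausible reading of the
        paper's low/average/high step)."""
        lo, hi = np.percentile(expr, [33.3, 66.7], axis=1, keepdims=True)
        return (expr > lo).astype(int) + (expr > hi).astype(int)

    def score_genes(classes, outcome):
        """Score each gene by how well its class labels alone predict a
        binary outcome: fraction of samples called correctly when each
        class votes for its majority outcome."""
        scores = []
        for g in classes:
            correct = 0
            for c in (0, 1, 2):
                mask = g == c
                if mask.any():
                    correct += max(outcome[mask].sum(), (~outcome[mask]).sum())
            scores.append(correct / len(outcome))
        return np.array(scores)

    rng = np.random.default_rng(0)
    outcome = rng.random(60) < 0.5                 # e.g. survival yes/no
    expr = rng.standard_normal((100, 60))          # 100 genes x 60 patients
    expr[0] += 2.0 * outcome                       # gene 0 tracks the outcome
    classes = discretize(expr)
    scores = score_genes(classes, outcome)
    signature = np.argsort(scores)[::-1][:10]      # top-ranked genes
    print(signature[:3], scores[0])
    ```

    The appeal of this style of algorithm, as the authors note, is its simplicity: ranking genes by an individually computed score requires no specialized software or extensive bioinformatics training.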

  1. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Nett, Brian E.; Chen, Guang-Hong

    2009-10-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data, and it can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for noise performance and spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms at several dose levels is presented for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
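    The total-variation penalty at the heart of TV-based CS can be illustrated in one dimension: gradient descent on a data-fidelity term plus a smoothed TV term flattens noise while preserving edges, and with aggressive regularization it produces exactly the piecewise-constant "patchy" (staircase) behavior described in the study. This is a toy denoising sketch of ours, not the paper's CT reconstruction:

    ```python
    import numpy as np

    def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1000, eps=1e-2):
        """Gradient descent on 0.5*||x - y||^2 + lam * sum(sqrt(dx^2 + eps)),
        a smoothed total-variation objective (eps avoids the kink at 0)."""
        x = y.copy()
        for _ in range(iters):
            dx = np.diff(x)
            w = dx / np.sqrt(dx * dx + eps)     # derivative of smoothed |dx|
            grad_tv = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
            x -= step * ((x - y) + lam * grad_tv)
        return x

    # Noisy step signal: TV denoising suppresses the noise but keeps the
    # edge, pushing the estimate toward a piecewise-constant result.
    rng = np.random.default_rng(0)
    truth = np.concatenate((2.0 * np.ones(50), -1.0 * np.ones(50)))
    y = truth + 0.3 * rng.standard_normal(100)
    xhat = tv_denoise_1d(y)
    print(np.std(y - truth), np.std(xhat - truth))
    ```

    The same preference for piecewise-constant solutions that removes streaks in undersampled CT is what produces the patchy appearance at very low dose, where the data term is too weak to push back.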

  2. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  3. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  4. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    Northrop Grumman has developed tools and processes to enable changes to be evaluated, tested, and moved into the operational baseline in a rapid and efficient manner. This presentation will provide an overview of the tools available to the Cal/Val teams to ensure rapid and accurate assessment of algorithm changes, along with the processes in place to ensure baseline integrity. [1] K. Grant and G. Route, "JPSS CGS Tools for Rapid Algorithm Updates," NOAA 2011 Satellite Direct Readout Conference, Miami FL, Poster, Apr 2011. [2] K. Grant, G. Route and B. Reed, "JPSS CGS Tools for Rapid Algorithm Updates," AMS 91st Annual Meeting, Seattle WA, Poster, Jan 2011. [3] K. Grant, G. Route and B. Reed, "JPSS CGS Tools for Rapid Algorithm Updates," AGU 2010 Fall Meeting, San Francisco CA, Oral Presentation, Dec 2010.

  5. PRIMAL: Fast and Accurate Pedigree-based Imputation from Sequence Data in a Founder Population

    PubMed Central

    Livne, Oren E.; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L.

    2015-01-01

    Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants to human phenotypes, as well as parental-origin effects of disease risk alleles, in >1,000 individuals at minimal cost. PMID:25735005

  6. Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.

    2016-01-01

    One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to its OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.

  7. Accurate mask registration on tilted lines for 6F2 DRAM manufacturing

    NASA Astrophysics Data System (ADS)

    Roeth, K. D.; Choi, W.; Lee, Y.; Kim, S.; Yim, D.; Laske, F.; Ferber, M.; Daneshpanah, M.; Kwon, E.

    2015-10-01

    193nm immersion lithography is the mainstream production technology for the 22nm half pitch (HP) DRAM manufacturing. Considering multi-patterning as the technology to solve the very low k1 situation in the resolution equation puts extreme pressure on the intra-field overlay, to which mask registration error may be a significant error contributor [3]. The International Technology Roadmap for Semiconductors (ITRS [1]) requests a registration error below 4 nm for each mask of a multi-patterning set forming one layer on the wafer. For mask metrology at the 22nm HP node, maintaining a precision-to-tolerance (P/T) ratio below 0.25 will be very challenging. Mask registration error impacts intra-field wafer overlay directly and has a major impact on wafer yield. DRAM makers moved several years ago to 6F2 (figure 1, [2]) cell design and thus printing tilted lines at 15 or 30 degree. Overlay of contact layer over buried line has to be well controlled. However, measuring mask registration performance accurately on tilted lines was a challenge. KLA Tencor applied the model-based algorithm to enable the accurate registration measurement of tilted lines on the Poly layer as well as the mask-to-mask overlay to the adjacent contact layers. The metrology solution is discussed and measurement results are provided.

  8. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows all these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.
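    Pixel-wise stereo matching of the kind SGM performs can be illustrated with naive scanline block matching. This is a deliberately simplified stand-in (SGM additionally aggregates matching costs along many image paths), and all names here are illustrative:

```python
import numpy as np

def block_match(left, right, max_d, win=2):
    """Brute-force 1-D block matching along a scanline.

    For each left pixel i, pick the disparity d minimizing the sum of
    squared differences between a small window around left[i] and the
    window around right[i - d].
    """
    disp = np.zeros(len(left), dtype=int)
    for i in range(win + max_d, len(left) - win):
        costs = [np.sum((left[i - win:i + win + 1]
                         - right[i - d - win:i - d + win + 1]) ** 2)
                 for d in range(max_d + 1)]
        disp[i] = int(np.argmin(costs))
    return disp
```

    On textured, noise-free data this recovers the true shift exactly; SGM's path-wise cost aggregation is what makes the estimate robust in weakly textured regions.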

  9. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  10. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  11. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  12. Enabling Space Science and Exploration

    NASA Technical Reports Server (NTRS)

    Weber, William J.

    2006-01-01

    This viewgraph presentation on enabling space science and exploration covers the following topics: 1) Today s Deep Space Network; 2) Next Generation Deep Space Network; 3) Needed technologies; 4) Mission IT and networking; and 5) Multi-mission operations.

  13. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon while also automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.

  14. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams and to directly engage with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  15. Note-accurate audio segmentation based on MPEG-7

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens

    2003-12-01

    Segmenting audio data into the smallest musical components is the basis for many further meta data extraction algorithms. For example, an automatic music transcription system needs to know where the exact boundaries of each tone are. In this paper a note-accurate audio segmentation algorithm based on MPEG-7 low level descriptors is introduced. For a reliable detection of different notes, both features in the time and the frequency domain are used. Because of this, polyphonic instrument mixes and even melodies characterized by human voices can be examined with this algorithm. For testing and verification of the note-accurate segmentation, a simple music transcription system was implemented. The dominant frequency within each segment is used to build a MIDI file representing the processed audio data.
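    A minimal onset detector in the spirit described (time-domain framing plus frequency-domain change detection) can be sketched with spectral flux. The MPEG-7 descriptors and thresholding used by the paper are replaced here by a plain FFT and an assumed relative threshold:

```python
import numpy as np

def onset_times(x, sr, frame=1024, hop=512, rel_thresh=0.5):
    """Note-onset candidates from positive spectral flux between frames.

    A frame is flagged when the summed increase of its magnitude spectrum
    over the previous frame exceeds rel_thresh times the maximum flux.
    """
    n = 1 + (len(x) - frame) // hop
    win = np.hanning(frame)
    mags = np.array([np.abs(np.fft.rfft(win * x[i * hop:i * hop + frame]))
                     for i in range(n)])
    # Keep only spectral increases: new energy signals a new note.
    flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
    idx = np.where(flux > rel_thresh * flux.max())[0] + 1
    return idx * hop / sr  # onset candidate times in seconds
```

    The frame/hop sizes and the relative threshold are assumptions; a production detector would also smooth the flux curve and apply an adaptive threshold.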

  16. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and to accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
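    The evaluation metric defined in the abstract has a direct one-line form. This is a sketch of the stated definition only, not the study's analysis code:

```python
import numpy as np

def deformation_error(known, predicted):
    """Per-voxel deformation error: magnitude of the vector difference
    between the known (applied) and algorithm-predicted deformation fields,
    given as (..., 3) arrays of displacement vectors."""
    return np.linalg.norm(np.asarray(known) - np.asarray(predicted), axis=-1)
```

    Averaging this field over the volume gives the mean deformation error whose stabilization versus the number of tissue levels is reported above.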

  17. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and to accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  18. Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms

    NASA Astrophysics Data System (ADS)

    Brunner, Christopher W.; Lu, Ping

    2012-09-01

    The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.

  19. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
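    The point-to-point triangulation step can be sketched with standard two-view DLT from multi-view geometry. This is the generic textbook method; the camera matrices and the linear solve here are illustrative, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    Each image point (u, v) with 3x4 camera matrix P contributes the rows
    u*P[2] - P[0] and v*P[2] - P[1]; the homogeneous 3-D point is the null
    vector of the stacked 4x4 system, found via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

    With noise-free correspondences the recovery is exact; with noisy detections the SVD gives the algebraic least-squares point, which is what makes the approach fast compared with iterative reconstruction.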

  20. Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits

    PubMed Central

    Ljungberg, Kajsa; Mishchenko, Kateryna; Holmgren, Sverker

    2010-01-01

    We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so called quantitative trait loci (QTL), and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. PMID:21918629
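    A two-phase strategy of this general shape (a cheap global scan of the search box, then a fast local refinement) can be sketched as follows. This is a toy stand-in under stated assumptions: a coarse grid scan replaces DIRECT, and a shrinking pattern search replaces the paper's local accelerators:

```python
import numpy as np

def two_phase_minimize(f, bounds, n_grid=21, n_local=200, step=0.1):
    """Phase 1 (global): coarse exhaustive scan of the box for the best point.
    Phase 2 (local): shrinking pattern search from that point to accelerate
    the final convergence, mirroring the global-then-local structure."""
    axes = [np.linspace(lo, hi, n_grid) for lo, hi in bounds]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"),
                    axis=-1).reshape(-1, len(bounds))
    x = grid[np.argmin([f(p) for p in grid])]
    fx = f(x)
    for _ in range(n_local):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = x.copy()
                cand[i] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5  # refine the search scale once no axis move helps
    return x, fx
```

    The point of the two phases is the same as in the paper: the global phase only needs to land in the right basin among many local optima, after which the local phase converges quickly and accurately.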

  1. Automatic Determination of Validity of Input Data Used in Ellipsoid Fitting MARG Calibration Algorithms

    PubMed Central

    Olivares, Alberto; Ruiz-Garcia, Gonzalo; Olivares, Gonzalo; Górriz, Juan Manuel; Ramirez, Javier

    2013-01-01

    Ellipsoid fitting algorithms are widely used to calibrate Magnetic Angular Rate and Gravity (MARG) sensors. These algorithms are based on the minimization of an error function that optimizes the parameters of a mathematical sensor model that is subsequently applied to calibrate the raw data. The convergence of this kind of algorithm to a correct solution is very sensitive to input data. Input calibration datasets must be properly distributed in space so data can be accurately fitted to the theoretical ellipsoid model. Gathering a well distributed set is not an easy task as it is difficult for the operator carrying out the maneuvers to keep a visual record of all the positions that have already been covered, as well as the remaining ones. It would then be desirable to have a system that gives feedback to the operator when the dataset is ready, or to enable the calibration process in auto-calibrated systems. In this work, we propose two different algorithms that analyze the goodness of the distributions by computing four different indicators. The first approach is based on a thresholding algorithm that uses only one indicator as its input and the second one is based on a Fuzzy Logic System (FLS) that estimates the calibration error for a given calibration set using a weighted combination of two indicators. Very accurate classification between valid and invalid datasets is achieved, with an average Area Under the Curve (AUC) of up to 0.98. PMID:24013490
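    The simplest member of this model family, a sphere fit (hard-iron bias plus radius, with no soft-iron shape matrix), reduces to one linear least-squares solve. This is a generic sketch, not the paper's algorithm:

```python
import numpy as np

def fit_sphere(m):
    """Linear least-squares sphere fit |m_i - b|^2 = r^2 to raw samples m (N x 3).

    Expanding gives 2 m_i . b + c = |m_i|^2 with c = r^2 - |b|^2, which is
    linear in the unknowns [b, c]. The full ellipsoid model used for MARG
    calibration adds a soft-iron shape matrix on top of this.
    """
    m = np.asarray(m, dtype=float)
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])
    y = (m * m).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    b, c = sol[:3], sol[3]
    return b, np.sqrt(c + b @ b)
```

    A poorly distributed calibration set (for instance, samples lying near a single plane) makes the design matrix ill-conditioned, which is exactly the failure mode the validity indicators above are meant to detect before fitting.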

  2. Predicting polymeric crystal structures by evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Zhu, Qiang; Sharma, Vinit; Oganov, Artem R.; Ramprasad, Ramamurthy

    2014-10-01

    The recently developed evolutionary algorithm USPEX proved to be a tool that enables accurate and reliable prediction of structures. Here we extend this method to predict the crystal structure of polymers by constrained evolutionary search, where each monomeric unit is treated as a building block with fixed connectivity. This greatly reduces the search space and allows the initial structure generation with different sequences and packings of these blocks. The new constrained evolutionary algorithm is successfully tested and validated on a diverse range of experimentally known polymers, namely, polyethylene, polyacetylene, poly(glycolic acid), poly(vinyl chloride), poly(oxymethylene), poly(phenylene oxide), and poly(p-phenylene sulfide). By fixing the orientation of polymeric chains, this method can be further extended to predict the structures of complex linear polymers, such as all polymorphs of poly(vinylidene fluoride), nylon-6 and cellulose. The excellent agreement between predicted crystal structures and experimentally known structures assures a major role of this approach in the efficient design of the future polymeric materials.

  3. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  4. Enabling a New Planning and Scheduling Paradigm

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth

    2004-01-01

    The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that they want to be done. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next a cadre with knowledge of vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, another cadre further modifies the models to be compatible with the scheduling engine. This last cadre also submits the models to the scheduling engine or builds the timeline manually to accommodate requirements that are expressed in notes. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components. (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphics methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models

  5. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form. However, such graphical data are not convenient to use with computer-based calculations. Developed equations allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  6. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon human visual characteristics for assessing image restoration accuracy, in addition to comparing the subjective results with predictions by objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods reproduced the HR image more accurately in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  7. Probability tree algorithm for general diffusion processes

    NASA Astrophysics Data System (ADS)

    Ingber, Lester; Chen, Colleen; Mondescu, Radu Paul; Muzzall, David; Renedo, Marco

    2001-11-01

    Motivated by path-integral numerical solutions of diffusion processes, PATHINT, we present a tree algorithm, PATHTREE, which permits extremely fast and accurate computation of probability distributions for a large class of general nonlinear diffusion processes.
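    A rough sketch of the tree idea (not the PATHTREE algorithm itself): a binomial tree for a constant-coefficient drift-diffusion dX = mu dt + sigma dW, whose terminal node probabilities approximate the process's probability distribution. Parameter values are illustrative:

```python
import numpy as np

mu, sigma, T, n = 0.1, 0.3, 1.0, 200   # drift, volatility, horizon, steps
dt = T / n
dx = sigma * np.sqrt(dt)                           # node spacing
p_up = 0.5 * (1.0 + (mu / sigma) * np.sqrt(dt))    # up-branch probability

# Node-occupation probabilities after each step (binomial recursion,
# accumulated iteratively to avoid large factorials).
probs = np.array([1.0])
for _ in range(n):
    nxt = np.zeros(len(probs) + 1)
    nxt[1:] += p_up * probs            # up moves
    nxt[:-1] += (1.0 - p_up) * probs   # down moves
    probs = nxt

x = (2 * np.arange(n + 1) - n) * dx    # terminal positions
mean = probs @ x
var = probs @ (x - mean) ** 2
print(mean, var)   # approach mu*T and sigma**2*T as n grows
```

    A general nonlinear diffusion requires state-dependent spacing and branch probabilities, which is where the tree construction becomes nontrivial.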

  8. Health-Enabled Smart Sensor Fusion Technology

    NASA Technical Reports Server (NTRS)

    Wang, Ray

    2012-01-01

    A process was designed to fuse data from multiple sensors in order to make a more accurate estimation of the environment and overall health in an intelligent rocket test facility (IRTF), to provide reliable, high-confidence measurements for a variety of propulsion test articles. The objective of the technology is to provide sensor fusion based on a distributed architecture. Specifically, the fusion technology is intended to provide health condition monitoring capability at the intelligent transceiver, such as RF signal strength, battery reading, computing resource monitoring, and sensor data reading. The technology also provides analytic and diagnostic intelligence at the intelligent transceiver, enhancing the IEEE 1451.x-based standard for sensor data management and distribution, as well as providing appropriate communications protocols to enable complex interactions to support a timely, high-quality flow of information among the system elements.
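    One elementary building block of such fusion is inverse-variance weighting of redundant readings; the sketch below is a generic illustration under that assumption, not the IRTF architecture (sensor values are hypothetical):

```python
import numpy as np

def fuse(readings, variances):
    """Inverse-variance weighted fusion of independent estimates of one
    quantity; returns the minimum-variance estimate and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    r = np.asarray(readings, dtype=float)
    return float(np.sum(w * r) / np.sum(w)), float(1.0 / np.sum(w))

# Three redundant pressure transducers (hypothetical readings, psi).
value, var = fuse([101.2, 100.8, 101.5], [0.04, 0.09, 0.25])
print(value, var)   # fused estimate; variance below that of any single sensor
```

    The fused variance is always lower than the best individual sensor's, which is the statistical payoff of combining redundant measurements.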

  9. The network-enabled optimization system server

    SciTech Connect

    Mesnier, M.P.

    1995-08-01

    Mathematical optimization is a technology under constant change and advancement, drawing upon the most efficient and accurate numerical methods to date. Further, these methods can be tailored for a specific application or generalized to accommodate a wider range of problems. This perpetual change creates an ever-growing field, one that is often difficult to stay abreast of. Hence the impetus behind the Network-Enabled Optimization System (NEOS) server, which aims to provide users, both novice and expert, with a guided tour through the expanding world of optimization. The NEOS server is responsible for bridging the gap between users and the optimization software they seek. More specifically, the NEOS server will accept optimization problems over the Internet and return a solution to the user either interactively or by e-mail. This paper discusses the current implementation of the server.

  10. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  11. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
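    The Voigt profile referenced above is conventionally evaluated through the Faddeeva function; a short sketch using scipy.special.wofz, with illustrative width parameters:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile via the Faddeeva function w(z):
    V(x) = Re[w((x + i*gamma)/(sigma*sqrt(2)))] / (sigma*sqrt(2*pi))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-100.0, 100.0, 40001)   # detuning axis (arbitrary units)
v = voigt(x, sigma=0.8, gamma=0.4)      # illustrative Gaussian/Lorentzian widths
area = v.sum() * (x[1] - x[0])
print(area)   # close to 1: the profile is normalized (small tail truncation)
```

    The breakdown the paper reports is a deviation of the measured lineshape from this idealized convolution of Gaussian and Lorentzian components.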

  12. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  13. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  14. Towards accurate and automatic morphing

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Sharkey, Paul M.

    2005-10-01

    Image morphing has proved to be a powerful tool for generating compelling and pleasing visual effects and has been widely used in the entertainment industry. However, traditional image morphing methods suffer from a number of drawbacks: feature specification between images is tedious, and the reliance on 2D information ignores the possible advantages to be gained from 3D knowledge. In this paper, we utilize recent advances in computer vision technologies to diminish these drawbacks. By analyzing multi-view geometry theories, we propose a processing pipeline based on three reference images. We first seek a few seed correspondences using robust methods and then recover the multi-view geometries from the seeds through bundle adjustment. Guided by the recovered two- and three-view geometries, a novel line matching algorithm across the three views is then deduced through edge growth, line fitting, and two- and three-view geometry constraints. Corresponding lines on a novel image are then obtained by an image transfer method; finally, the matched lines are fed into traditional morphing methods and novel images are generated. Images generated by this pipeline have advantages over those from traditional morphing methods: they have an inherent 3D foundation and are therefore physically close to real scenes; not only images located on the baseline connecting the two reference image centers, but also extrapolated images away from the baseline, are possible; and the whole process can be either fully automatic, or at least the tedious task of feature specification in traditional morphing methods can be greatly relieved.

  15. Towards an accurate bioimpedance identification

    NASA Astrophysics Data System (ADS)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the superior accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data, coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table of the relative standard errors on the estimated parameters is provided to reveal which system identification framework should be used.
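    The Cole model fit mentioned in the abstract can be sketched as a complex nonlinear least squares problem; the following is a minimal, unweighted version on synthetic data (parameter values are invented, not the paper's myocardial estimates):

```python
import numpy as np
from scipy.optimize import least_squares

def cole(params, w):
    """Single-dispersion Cole model: Z(w) = Rinf + (R0-Rinf)/(1+(j*w*tau)**alpha)."""
    r0, rinf, tau, alpha = params
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

def residuals(params, w, z):
    d = cole(params, w) - z
    return np.concatenate([d.real, d.imag])   # stack into a real residual vector

w = np.logspace(2, 6, 50)                     # angular frequencies, rad/s
true = [100.0, 20.0, 1e-4, 0.8]               # synthetic "tissue" parameters
z = cole(true, w)                             # noiseless synthetic spectrum

fit = least_squares(residuals, x0=[80.0, 10.0, 5e-5, 0.7], args=(w, z),
                    bounds=(0.0, np.inf), x_scale=[100.0, 10.0, 1e-4, 1.0])
print(fit.x)   # should recover the true parameters on this noiseless data
```

    A weighted CNLS variant would divide each residual by the estimated noise standard deviation at that frequency before summing squares.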

  16. Enable: Developing Instructional Language Skills.

    ERIC Educational Resources Information Center

    Witt, Beth

    The program presented in this manual provides a structure and activities for systematic development of effective listening comprehension in typical and atypical children. The complete ENABLE kit comes with pictures, cut-outs, and puppets to illustrate the directives, questions, and narrative activities. The manual includes an organizational and…

  17. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    PubMed

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
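    The core idea, exploiting the analytical modal solution in a rectangular domain, can be sketched in 1D: project the pressure field onto cosine modes with a DCT, advance each mode with its exact phase recurrence, and transform back. Grid and material values below are illustrative, and this CPU sketch omits the paper's GPU implementation and domain decomposition:

```python
import numpy as np
from scipy.fft import dct, idct

c, L, n = 343.0, 1.0, 256            # sound speed (m/s), domain length (m), cells
x = (np.arange(n) + 0.5) * (L / n)   # cell-centered grid
omega = c * np.arange(n) * np.pi / L # modal frequencies for rigid (Neumann) walls
dt, steps = 1e-5, 400

p0 = np.exp(-((x - 0.5) ** 2) / (2 * 0.02 ** 2))  # initial Gaussian pulse
P0 = dct(p0, type=2, norm="ortho")   # project onto cosine modes

# With zero initial velocity each mode evolves as P_k(t) = P0_k cos(w_k t);
# the three-term recurrence below reproduces that exactly (no dispersion error).
coeff = 2.0 * np.cos(omega * dt)
P_prev, P = P0.copy(), np.cos(omega * dt) * P0
for _ in range(steps - 1):
    P, P_prev = coeff * P - P_prev, P

p_t = idct(P, type=2, norm="ortho")  # back to the spatial pressure field
```

    Because each mode is advanced analytically, the time step is not limited by a CFL-style stability condition, which is part of what makes the approach efficient relative to FDTD.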

  18. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermochemical database, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures that generate the accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air and compared with those given by the Srinivasan program.
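    The search-and-interpolate pattern can be sketched as a binary search into a precomputed property table followed by piecewise-linear evaluation; the cp(T) polynomial below is a stand-in, not the paper's thermochemical database:

```python
import bisect
import numpy as np

# Precompute the property on a temperature grid; the quadratic cp(T) is a
# placeholder for real thermochemical data.
T_grid = np.linspace(300.0, 3000.0, 128)
cp_grid = 1000.0 + 0.2 * T_grid - 1e-5 * T_grid ** 2   # J/(kg K), made up

def cp_lookup(T):
    """Binary search for the bracketing interval, then linear interpolation."""
    i = bisect.bisect_right(T_grid, T) - 1
    i = min(max(i, 0), len(T_grid) - 2)
    f = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
    return (1.0 - f) * cp_grid[i] + (f * cp_grid[i + 1])

print(cp_lookup(1000.0))   # close to the direct formula value, 1190.0
```

    Production codes use higher-order piecewise polynomials so that the interpolant's derivatives, needed by flow solvers, are also smooth and accurate.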

  19. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently subsample the data spectrally and spatially, as well as generate science products onboard, to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also applies to vegetation, coastal flooding, dust, and snow/ice monitoring. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track
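    The triggering step, detecting hot pixels from an inverted temperature map and selecting their full spectra for downlink, can be sketched as follows (the threshold and data are illustrative, not mission values):

```python
import numpy as np

def select_hot_pixels(temperature_map, full_cube, threshold_k=400.0):
    """Flag pixels whose inverted brightness temperature exceeds the
    threshold and collect their full spectra for priority downlink."""
    hot = np.argwhere(temperature_map > threshold_k)
    return {(int(r), int(c)): full_cube[r, c] for r, c in hot}

temps = np.full((4, 4), 290.0)       # background scene temperature (K)
temps[2, 3] = 650.0                  # one simulated active-fire pixel
cube = np.arange(4 * 4 * 8, dtype=float).reshape(4, 4, 8)  # 8 toy "bands"

picked = select_hot_pixels(temps, cube)
print(list(picked))                  # -> [(2, 3)]
```

    Downlinking only the flagged pixels' full spectra is what lets a narrow Direct Broadcast link still carry the scientifically interesting data.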

  20. A Stemming Algorithm for Latin Text Databases.

    ERIC Educational Resources Information Center

    Schinke, Robyn; And Others

    1996-01-01

    Describes the design of a stemming algorithm for searching Latin text databases. The algorithm uses a longest-match approach with some recoding, but differs from most stemmers in its use of two separate suffix dictionaries for processing query and database words, which enables users to pursue specific searches for single grammatical forms of words.…
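    A longest-match stemmer with separate query-side and database-side suffix lists can be sketched as below; the suffix lists are small illustrative samples, not Schinke's full dictionaries:

```python
# Illustrative suffix samples only; the real algorithm uses much larger,
# carefully curated dictionaries plus recoding rules.
QUERY_SUFFIXES = ["ibus", "orum", "arum", "us", "um", "ae", "is", "a", "e", "i"]
DB_SUFFIXES = ["ibus", "orum", "arum", "us", "um", "ae", "is"]

def stem(word, suffixes, min_stem=2):
    """Strip the longest matching suffix, keeping at least min_stem letters."""
    for suf in sorted(suffixes, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[: -len(suf)]
    return word

print(stem("rosarum", QUERY_SUFFIXES))   # -> "ros"
print(stem("rosa", DB_SUFFIXES))         # unchanged: no DB-side suffix matches
```

    Keeping the database-side list smaller means some inflected forms survive stemming intact, which is what allows searches for a single grammatical form.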

  1. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a

  2. Exogenous Attention Enables Perceptual Learning

    PubMed Central

    Szpiro, Sarit F. A.; Carrasco, Marisa

    2015-01-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745

  3. Nanofluidics: enabling processes for biotech

    NASA Astrophysics Data System (ADS)

    Ulmanella, Umberto; Ho, Chih-Ming

    2001-10-01

    The advance of micro- and nanodevice manufacturing technology enables us to carry out biological and chemical processes in a more efficient manner. In fact, fluidic processes connect the macro and the micro/nano worlds. For devices approaching the size of the fluid molecules, many physical phenomena occur that are not observed in macro flows. In this brief review, we discuss a few selected topics which are of interest for basic research and are important for applications in biotechnology.

  4. Echo-Enabled Harmonic Generation

    SciTech Connect

    Stupakov, Gennady; /SLAC

    2012-06-28

    A recently proposed concept of the Echo-Enabled Harmonic Generation (EEHG) FEL uses two laser modulators in combination with two dispersion sections to generate a high-harmonic density modulation in a relativistic beam. This seeding technique holds promise of a one-stage soft x-ray FEL that radiates not only transversely but also longitudinally coherent pulses. Currently, an experimental verification of the concept is being conducted at the SLAC National Accelerator Laboratory aimed at the demonstration of the EEHG.

  5. Technologies for Networked Enabled Operations

    NASA Technical Reports Server (NTRS)

    Glass, B.; Levine, J.

    2005-01-01

    Current point-to-point data links will not scale to support future integration of surveillance, security, and globally-distributed air traffic data, and already hinder efficiency and capacity. While the FAA and industry focus on a transition to initial system-wide information management (SWIM) capabilities, this paper describes a set of initial studies of NAS network-enabled operations technology gaps targeted for maturity in later SWIM spirals (2015-2020 timeframe).

  6. MEMS: Enabled Drug Delivery Systems.

    PubMed

    Cobo, Angelica; Sheybani, Roya; Meng, Ellis

    2015-05-01

    Drug delivery systems play a crucial role in the treatment and management of medical conditions. Microelectromechanical systems (MEMS) technologies have allowed the development of advanced miniaturized devices for medical and biological applications. This Review presents the use of MEMS technologies to produce drug delivery devices detailing the delivery mechanisms, device formats employed, and various biomedical applications. The integration of dosing control systems, examples of commercially available microtechnology-enabled drug delivery devices, remaining challenges, and future outlook are also discussed. PMID:25703045

  7. New Generation Sensor Web Enablement

    PubMed Central

    Bröring, Arne; Echterhoff, Johannes; Jirka, Simon; Simonis, Ingo; Everding, Thomas; Stasch, Christoph; Liang, Steve; Lemmens, Rob

    2011-01-01

    Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. PMID:22163760

  8. 'Ethos' Enabling Organisational Knowledge Creation

    NASA Astrophysics Data System (ADS)

    Matsudaira, Yoshito

    This paper examines knowledge creation in relation to improvements on the production line in the manufacturing department of Nissan Motor Company and aims to clarify the embodied knowledge observed in the actions of organisational members who enable knowledge creation. For that purpose, this study adopts an approach that adds first-, second-, and third-person viewpoints to the theory of knowledge creation. The embodied knowledge observed in the actions of organisational members who enable knowledge creation is the continued practice of 'ethos' (in Greek), founded in the Nissan Production Way as an ethical basis. Ethos is a knowledge (intangible) asset for knowledge-creating companies. Substantiated analysis classifies ethos into three categories: the individual, the team and the organisation. This indicates the precise actions of the organisational members in each category during the knowledge creation process. This research succeeds in showing the indispensability of ethos - the new concept of knowledge assets, which enables knowledge creation - for future knowledge-based management in the knowledge society.

  9. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  10. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptome can be carried out using high-throughput cDNA microarray analysis;15-17 however, the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional protein can be altered significantly, and become independent of mRNA levels, as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  11. An onboard star identification algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Kong; Femiano, Michael

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer to provide the attitude quaternion within the desired accuracy, based on one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable onboard solution.

  12. An onboard star identification algorithm

    NASA Technical Reports Server (NTRS)

    Ha, Kong; Femiano, Michael

    1993-01-01

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer to provide the attitude quaternion within the desired accuracy, based on one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable onboard solution.
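    One geometric kernel of such star identification is matching the angular separation of tracked star pairs against catalog pairs; a hedged sketch with an invented miniature catalog:

```python
import numpy as np

def ang_sep(u, v):
    """Angle in radians between two unit vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def match_pair(obs_sep, catalog, tol=1e-3):
    """Catalog index pairs whose angular separation matches the observed one."""
    hits = []
    for i in range(len(catalog)):
        for j in range(i + 1, len(catalog)):
            if abs(ang_sep(catalog[i], catalog[j]) - obs_sep) < tol:
                hits.append((i, j))
    return hits

# Four made-up catalog stars as unit vectors (illustrative, not a real OSC).
catalog = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0], [0.6, 0.8, 0.0]])
obs = ang_sep(catalog[0], catalog[3])     # pretend the tracker saw stars 0 and 3
print(match_pair(obs, catalog))           # -> [(0, 3)]
```

    A full onboard algorithm additionally prunes candidates with the one-axis attitude knowledge and sensor-noise statistics, which is where the performance analysis above applies.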

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  18. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
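The strict-priority selection described above can be sketched as a greedy pass over the goal set. This is an illustrative sketch, not the AVA v2 internals: the `Goal` fields, the single scalar resource, and the priority convention are all assumptions.

```python
# Illustrative sketch of strict-priority goal selection over one shared,
# oversubscribed resource; field names and the capacity model are
# assumptions, not the AVA v2 implementation.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int   # lower value = higher priority (assumed convention)
    usage: float    # units of the shared resource the goal consumes

def select_goals(goals, capacity):
    """Admit goals in strict priority order; a goal is skipped only when
    the remaining capacity cannot hold it, so a lower priority goal can
    never pre-empt a higher priority one."""
    selected, remaining = [], capacity
    for g in sorted(goals, key=lambda g: g.priority):
        if g.usage <= remaining:
            selected.append(g)
            remaining -= g.usage
    return selected

goals = [Goal("downlink", 1, 5.0), Goal("image-A", 2, 4.0), Goal("image-B", 3, 3.0)]
chosen = select_goals(goals, capacity=8.0)
print([g.name for g in chosen])  # ['downlink', 'image-B']
```

Because the pass is cheap, it can simply be re-run whenever a goal is added, removed, or updated, which is the "just-in-time" behavior the abstract describes.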

  19. Optimized microsystems-enabled photovoltaics

    SciTech Connect

    Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.

    2015-09-22

    Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.

  20. Variance Based Measure for Optimization of Parametric Realignment Algorithms

    PubMed Central

    Mehring, Carsten

    2016-01-01

    Neuronal responses to sensory stimuli or neuronal responses related to behaviour are often extracted by averaging neuronal activity over a large number of experimental trials. Such trial-averaging is carried out to reduce noise and to diminish the influence of other signals unrelated to the corresponding stimulus or behaviour. However, if the recorded neuronal responses are jittered in time with respect to the corresponding stimulus or behaviour, averaging over trials may distort the estimation of the underlying neuronal response. Temporal jitter between single-trial neural responses can be partially or completely removed using realignment algorithms. Here, we present a measure, named difference of time-averaged variance (dTAV), which can be used to evaluate the performance of a realignment algorithm without knowing the internal triggers of neural responses. Using simulated data, we show that using dTAV to optimize the parameter values for an established parametric realignment algorithm improved its efficacy and, therefore, reduced the jitter of neuronal responses. By removing the jitter more effectively and, therefore, enabling more accurate estimation of neuronal responses, dTAV can improve analysis and interpretation of the neural responses. PMID:27159490
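The intuition behind a variance-based measure like dTAV can be sketched as follows: jitter inflates the across-trial variance at each time point, so successful realignment lowers the time-averaged variance. The exact definition in the paper may differ; treat this as an illustrative reading, with all names hypothetical.

```python
# Sketch of a variance-based jitter measure in the spirit of dTAV; the
# paper's precise definition may differ, so this is illustrative only.
import numpy as np

def time_averaged_variance(trials):
    """Across-trial variance at each time point, averaged over time.
    trials: (n_trials, n_samples) array of single-trial responses."""
    return np.var(trials, axis=0).mean()

def dtav(trials_before, trials_after):
    """Drop in time-averaged variance after realignment; a larger value
    suggests the realignment removed more trial-to-trial jitter."""
    return time_averaged_variance(trials_before) - time_averaged_variance(trials_after)

rng = np.random.default_rng(0)
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)   # idealized single-trial response
jittered = np.stack([np.roll(template, s) for s in rng.integers(-20, 21, 30)])
aligned = np.stack([template] * 30)                  # perfectly realigned trials
print(dtav(jittered, aligned) > 0)  # True: realignment lowered the variance
```

Such a measure needs no knowledge of the true trigger times, which is exactly what makes it usable for tuning a realignment algorithm's parameters.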

  1. Directory Enabled Policy Based Networking

    SciTech Connect

    KELIIAA, CURTIS M.

    2001-10-01

    This report presents a discussion of directory-enabled policy-based networking with an emphasis on its role as the foundation for securely scalable enterprise networks. A directory service provides the object-oriented logical environment for interactive cyber-policy implementation. Cyber-policy implementation includes security, network management, operational process and quality of service policies. The leading network-technology vendors have invested in these technologies for secure universal connectivity that transverses Internet, extranet and intranet boundaries. Industry standards are established that provide the fundamental guidelines for directory deployment scalable to global networks. The integration of policy-based networking with directory-service technologies provides for intelligent management of the enterprise network environment as an end-to-end system of related clients, services and resources. This architecture allows logical policies to protect data, manage security and provision critical network services permitting a proactive defense-in-depth cyber-security posture. Enterprise networking imposes the consideration of supporting multiple computing platforms, sites and business-operation models. An industry-standards based approach combined with principled systems engineering in the deployment of these technologies allows these issues to be successfully addressed. This discussion is focused on a directory-based policy architecture for the heterogeneous enterprise network-computing environment and does not propose specific vendor solutions. This document is written to present practical design methodology and provide an understanding of the risks, complexities and most important, the benefits of directory-enabled policy-based networking.

  2. Nanomaterial-Enabled Neural Stimulation.

    PubMed

    Wang, Yongchen; Guo, Liang

    2016-01-01

    Neural stimulation is a critical technique in treating neurological diseases and investigating brain functions. Traditional electrical stimulation uses electrodes to directly create intervening electric fields in the immediate vicinity of neural tissues. Second-generation stimulation techniques directly use light, magnetic fields or ultrasound in a non-contact manner. An emerging generation of non- or minimally invasive neural stimulation techniques is enabled by nanotechnology to achieve a high spatial resolution and cell-type specificity. In these techniques, a nanomaterial converts a remotely transmitted primary stimulus such as a light, magnetic or ultrasonic signal to a localized secondary stimulus such as an electric field or heat to stimulate neurons. The ease of surface modification and bio-conjugation of nanomaterials facilitates cell-type-specific targeting, designated placement and highly localized membrane activation. This review focuses on nanomaterial-enabled neural stimulation techniques primarily involving opto-electric, opto-thermal, magneto-electric, magneto-thermal and acousto-electric transduction mechanisms. Stimulation techniques based on other possible transduction schemes and general considerations for these emerging neurotechnologies are also discussed. PMID:27013938

  3. Enabling Exploration Through Docking Standards

    NASA Technical Reports Server (NTRS)

    Hatfield, Caris A.

    2012-01-01

    Human exploration missions beyond low earth orbit will likely require international cooperation in order to leverage limited resources. International standards can help enable cooperative missions by providing well understood, predefined interfaces allowing compatibility between unique spacecraft and systems. The International Space Station (ISS) partnership has developed a publicly available International Docking System Standard (IDSS) that provides a solution to one of these key interfaces by defining a common docking interface. The docking interface provides a way for even dissimilar spacecraft to dock for exchange of crew and cargo, as well as enabling the assembly of large space systems. This paper provides an overview of the key attributes of the IDSS, an overview of the NASA Docking System (NDS), and the plans for updating the ISS with IDSS compatible interfaces. The NDS provides a state of the art, low impact docking system that will initially be made available to commercial crew and cargo providers. The ISS will be used to demonstrate the operational utility of the IDSS interface as a foundational technology for cooperative exploration.

  4. Nanomaterial-Enabled Neural Stimulation

    PubMed Central

    Wang, Yongchen; Guo, Liang

    2016-01-01

    Neural stimulation is a critical technique in treating neurological diseases and investigating brain functions. Traditional electrical stimulation uses electrodes to directly create intervening electric fields in the immediate vicinity of neural tissues. Second-generation stimulation techniques directly use light, magnetic fields or ultrasound in a non-contact manner. An emerging generation of non- or minimally invasive neural stimulation techniques is enabled by nanotechnology to achieve a high spatial resolution and cell-type specificity. In these techniques, a nanomaterial converts a remotely transmitted primary stimulus such as a light, magnetic or ultrasonic signal to a localized secondary stimulus such as an electric field or heat to stimulate neurons. The ease of surface modification and bio-conjugation of nanomaterials facilitates cell-type-specific targeting, designated placement and highly localized membrane activation. This review focuses on nanomaterial-enabled neural stimulation techniques primarily involving opto-electric, opto-thermal, magneto-electric, magneto-thermal and acousto-electric transduction mechanisms. Stimulation techniques based on other possible transduction schemes and general considerations for these emerging neurotechnologies are also discussed. PMID:27013938

  5. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for the use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  6. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drift from textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  7. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
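The shingle technique mentioned above can be sketched generically: take k-grams ("shingles") of a page's tag sequence and compare two pages by the Jaccard similarity of their shingle sets. This is a textbook illustration of shingling, not necessarily the paper's exact formulation.

```python
# Generic shingle-based structural similarity: Jaccard overlap of
# k-grams of the tag sequence. Illustrative, not the surveyed method.
def tag_shingles(tags, k=3):
    """Set of k-grams ('shingles') of a document's tag sequence."""
    return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

def structural_similarity(tags_a, tags_b, k=3):
    """Jaccard similarity of the two pages' shingle sets."""
    a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
    return len(a & b) / len(a | b) if a | b else 1.0

page1 = ["html", "head", "title", "body", "div", "p", "div", "p"]
page2 = ["html", "head", "title", "body", "div", "p", "div", "span"]
sim = structural_similarity(page1, page2)
print(round(sim, 2))  # 0.71: the pages share 5 of 7 distinct shingles
```

Because only set operations are involved, shingling scales well to clustering large page collections, which is one reason simple approximations can outperform tree edit distance in practice.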

  8. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method, designing and realizing a fast Euclidean transform algorithm, and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
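The Dijkstra step at the core of such centerline extraction can be sketched on a toy 2D occupancy grid; the segmentation, distance-transform weighting, and BVC acceleration are not reproduced here, so this is only a minimal illustration of the path search being sped up.

```python
# Minimal Dijkstra on a 2D grid (1 = inside the lumen, 0 = wall); an
# illustrative stand-in for the path search the abstract accelerates.
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path from start to goal through cells == 1."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[1, 1, 1],
        [0, 0, 1],
        [1, 1, 1]]
path = dijkstra_grid(grid, (0, 0), (2, 0))
print(len(path))  # 7 cells: the only route skirts the wall
```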

  9. [Fast and accurate extraction of ring-down time in cavity ring-down spectroscopy].

    PubMed

    Wang, Dan; Hu, Ren-Zhi; Xie, Pin-Hua; Qin, Min; Ling, Liu-Yi; Duan, Jun

    2014-10-01

    Research is conducted on accurate and efficient algorithms for extracting the ring-down time (τ) in cavity ring-down spectroscopy (CRDS), which is used to measure the NO3 radical in the atmosphere. Fast and accurate extraction of the ring-down time guarantees more precise and faster measurement. In this research, five commonly used algorithms are selected to extract the ring-down time: the fast Fourier transform (FFT) algorithm, the discrete Fourier transform (DFT) algorithm, the linear regression of the sum (LRS) algorithm, the Levenberg-Marquardt (LM) algorithm, and the least squares (LS) algorithm. Simulated ring-down signals with various amplitude levels of white noise are fitted using the five abovementioned algorithms, and the fitting results of the five algorithms are compared and analyzed in four respects: vulnerability to noise, accuracy and precision of the fitting, speed of the fitting, and the preferable length of the fitted ring-down signal waveform. The results show that the Levenberg-Marquardt algorithm and the linear regression of the sum algorithm provide more precise results and prove to have higher noise immunity, although by comparison the fitting speed of the Levenberg-Marquardt algorithm turns out to be slower. In addition, by analysis of simulated ring-down signals, five to ten times the ring-down time is selected as the best fitting waveform length because, in this case, the standard deviation of the fitting results of the five algorithms proves to be the minimum. An external modulation diode laser and a cavity consisting of two high-reflectivity mirrors are used to construct a cavity ring-down spectroscopy detection system. According to our experimental conditions, in which the noise level is 0.2%, the linear regression of the sum algorithm and the Levenberg-Marquardt algorithm are selected to process the experimental data. The experimental results show that the accuracy and precision of linear regression of
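What "extracting the ring-down time" means can be illustrated with a simple log-linear least-squares fit of the exponential decay. This is a stand-in for the LRS and LM algorithms compared in the abstract, which are more robust to noise; the parameter values below are illustrative.

```python
# Log-linear least-squares stand-in for ring-down fitting; valid for
# y > 0 and low noise. Not the LRS or LM algorithm from the abstract.
import numpy as np

def fit_ring_down_tau(t, y):
    """Estimate tau for y(t) = A * exp(-t / tau) by fitting a line to
    log(y); the slope of that line is -1/tau."""
    slope, _intercept = np.polyfit(t, np.log(y), 1)
    return -1.0 / slope

tau_true = 25e-6                         # 25 us ring-down time (illustrative)
t = np.linspace(0.0, 5 * tau_true, 500)  # fit over ~5 ring-down times, the
                                         # window length the study found best
y = 2.0 * np.exp(-t / tau_true)          # noise-free simulated decay
tau_est = fit_ring_down_tau(t, y)
print(f"tau = {tau_est:.2e} s")  # recovers 2.50e-05 s on noise-free data
```

On noisy data the log transform distorts the error distribution, which is why direct nonlinear fits such as Levenberg-Marquardt tend to be more precise, at the cost of speed.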

  10. Accurate and robust estimation of camera parameters using RANSAC

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He

    2013-03-01

    Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that the proposed method is robust under large noise conditions and quite efficient at improving the calibration accuracy compared with the original state.
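The RANSAC principle used above can be sketched on a simpler model: repeatedly fit from a minimal random sample, keep the hypothesis with the largest consensus set, and refit on the inliers only. A line fit stands in here for the camera parameter estimation; rejecting the outlier is analogous to discarding badly localized feature points.

```python
# Minimal RANSAC sketch on a line model y = a*x + b; illustrative of
# the outlier-rejection idea, not the paper's calibration pipeline.
import numpy as np

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set only.
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers

# 20 points on y = 2x + 1 plus one gross outlier at (0.5, 9.0).
pts = np.array([[x, 2.0 * x + 1.0] for x in np.linspace(0, 1, 20)] + [[0.5, 9.0]])
a, b, inliers = ransac_line(pts)
print(round(a, 2), round(b, 2), int(inliers.sum()))  # 2.0 1.0 20
```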

  11. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods on our light field dataset together with the Stanford light field archive verifies the effectiveness of the proposed algorithm. PMID:27253083

  12. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain. PMID:24803954

  13. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain. PMID:24803954

  14. An integrative variant analysis pipeline for accurate genotype/haplotype inference in population NGS data.

    PubMed

    Wang, Yi; Lu, James; Yu, Jin; Gibbs, Richard A; Yu, Fuli

    2013-05-01

    Next-generation sequencing is a powerful approach for discovering genetic variation. Sensitive variant calling and haplotype inference from population sequencing data remain challenging. We describe methods for high-quality discovery, genotyping, and phasing of SNPs for low-coverage (approximately 5×) sequencing of populations, implemented in a pipeline called SNPTools. Our pipeline contains several innovations that specifically address challenges caused by low-coverage population sequencing: (1) effective base depth (EBD), a nonparametric statistic that enables more accurate statistical modeling of sequencing data; (2) variance ratio scoring, a variance-based statistic that discovers polymorphic loci with high sensitivity and specificity; and (3) BAM-specific binomial mixture modeling (BBMM), a clustering algorithm that generates robust genotype likelihoods from heterogeneous sequencing data. Last, we develop an imputation engine that refines raw genotype likelihoods to produce high-quality phased genotypes/haplotypes. Designed for large population studies, SNPTools' input/output (I/O)- and storage-aware design leads to improved computing performance on large sequencing data sets. We apply SNPTools to the International 1000 Genomes Project (1000G) Phase 1 low-coverage data set and obtain genotyping accuracy comparable to that of SNP microarray. PMID:23296920
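The kind of genotype likelihood that a binomial mixture step produces can be sketched with the textbook diploid model: given reference and alternate read counts, each genotype implies a binomial probability of observing an alternate read. This is a generic illustration of the idea, not SNPTools' BBMM model, and the error rate is an assumed parameter.

```python
# Textbook binomial genotype likelihoods from allele read counts; a
# generic sketch of the mixture-modeling idea, not SNPTools' BBMM.
from math import comb

def genotype_likelihoods(ref_reads, alt_reads, error=0.01):
    """Likelihood of the read counts under each diploid genotype:
    RR (hom. reference), RA (heterozygous), AA (hom. alternate).
    `error` is the assumed per-read sequencing error rate."""
    n = ref_reads + alt_reads
    likelihoods = {}
    for genotype, p_alt in (("RR", error), ("RA", 0.5), ("AA", 1 - error)):
        likelihoods[genotype] = (
            comb(n, alt_reads) * p_alt**alt_reads * (1 - p_alt)**(n - alt_reads)
        )
    return likelihoods

# At ~5x coverage a site might show 3 reference and 2 alternate reads.
ll = genotype_likelihoods(ref_reads=3, alt_reads=2)
print(max(ll, key=ll.get))  # RA: the heterozygous genotype fits best
```

At such low coverage the likelihoods are far from decisive, which is why a downstream imputation step that pools information across samples is needed to produce confident phased genotypes.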

  15. Dual-beam interferometer for the accurate determination of surface-wave velocity.

    PubMed

    McKie, A D; Wagner, J W; Spicer, J B; Deaton, J B

    1991-10-01

    A novel dual-beam interferometer has been designed and constructed that enables two beams from a He-Ne laser to remotely probe the surface of a material. The separation of the two He-Ne beams is adjustable in the 15- to 40-mm range with a spatial resolution of 2 µm. Surface-acoustic-wave measurements have been performed with two different probe separations so that the travel time for the surface waves over a known distance can be determined accurately. With the aid of autocorrelation algorithms, the Rayleigh pulse velocity on 7075-T651 aluminum has been measured to be 2888 +/- 4 m/s. The current precision of the system is limited mainly by the 10-ns sampling rate of the digital oscilloscope used. Rayleigh pulse interactions with a surface-breaking slot, machined to a nominal depth of 0.5 mm, have also been examined, and the depth was estimated ultrasonically to be 0.49 +/- 0.02 mm. The system may also provide a technique for direct quantitative studies of surface-wave attenuation. PMID:20706500
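The velocity measurement above reduces to estimating a travel time between two probe positions from a known separation. A common way to time the delay, sketched here with cross-correlation and illustrative numbers (a 3000 m/s wave over a 30 mm separation, sampled at the 10-ns rate the abstract mentions), stands in for the paper's correlation-based timing:

```python
# Travel-time and velocity estimation from two probe signals via
# cross-correlation; numbers are illustrative, not the paper's data.
import numpy as np

def wave_velocity(sig1, sig2, dt, separation):
    """Velocity = probe separation / travel time, where the travel time
    is the lag that maximizes the cross-correlation of the signals."""
    corr = np.correlate(sig2, sig1, mode="full")
    lag = np.argmax(corr) - (len(sig1) - 1)   # delay of sig2 relative to sig1
    return separation / (lag * dt)

dt = 10e-9                                    # 10 ns sampling, as in the abstract
t = np.arange(0, 20e-6, dt)
pulse = np.exp(-0.5 * ((t - 1e-6) / 50e-9) ** 2)  # pulse seen at probe 1
delayed = np.roll(pulse, 1000)                # arrives 10 us later at probe 2
v = wave_velocity(pulse, delayed, dt, separation=0.03)
print(round(v))  # 3000 m/s
```

The discrete lag is quantized to the sampling interval, which mirrors the abstract's remark that the 10-ns oscilloscope sampling rate limits the attainable precision.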

  16. Accurate and Sensitive Peptide Identification with Mascot Percolator

    PubMed Central

    Brosch, Markus; Yu, Lu; Hubbard, Tim; Choudhary, Jyoti

    2009-01-01

    Sound scoring methods for sequence database search algorithms such as Mascot and Sequest are essential for sensitive and accurate peptide and protein identifications from proteomic tandem mass spectrometry data. In this paper, we present a software package that interfaces Mascot with Percolator, a well performing machine learning method for rescoring database search results, and demonstrate it to be amenable for both low and high accuracy mass spectrometry data, outperforming all available Mascot scoring schemes as well as providing reliable significance measures. Mascot Percolator can be readily used as a stand alone tool or integrated into existing data analysis pipelines. PMID:19338334

  17. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate Alignment of Anatomical Structures.

    PubMed

    Irfanoglu, M Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B; Sadeghi, Neda; Thomas, Cibu P; Pierpaoli, Carlo

    2016-05-15

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. PMID:26931817

  18. Rotational propulsion enabled by inertia.

    PubMed

    Nadal, François; Pak, On Shun; Zhu, LaiLai; Brandt, Luca; Lauga, Eric

    2014-07-01

    The fluid mechanics of small-scale locomotion has recently attracted considerable attention, due to its importance in cell motility and the design of artificial micro-swimmers for biomedical applications. Most studies on the topic consider the ideal limit of zero Reynolds number. In this paper, we investigate a simple propulsion mechanism, an up-down asymmetric dumbbell rotating about its axis of symmetry, that is unable to propel in the absence of inertia in a Newtonian fluid. Inertial forces lead to continuous propulsion for all finite values of the Reynolds number. We study its propulsive characteristics computationally, as well as analytically in the small-Reynolds-number limit. We also derive the optimal dumbbell geometry. The direction of propulsion enabled by inertia is opposite to that induced by viscoelasticity. PMID:25034393

  19. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering, applied to facility design as well as to safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  20. Context-Enabled Business Intelligence

    SciTech Connect

    Troy Hiltbrand

    2012-04-01

    To truly understand context and apply it in business intelligence, it is vital to understand what context is and how it can be applied in addressing organizational needs. Context describes the facets of the environment that impact the way end users interact with the system. Context includes aspects of location, chronology, access method, demographics, social influence/relationships, end-user attitude/emotional state, behavior/past behavior, and presence. To be successful in making business intelligence context enabled, it is important to be able to capture the context of the user. With advances in technology, there are a number of ways in which this user-based information can be gathered and exposed to enhance the overall end-user experience.

  1. Smart Sensors Enable Smart Air Conditioning Control

    PubMed Central

    Cheng, Chin-Chi; Lee, Dasheng

    2014-01-01

    In this study, mobile phones, wearable devices, and temperature and human motion detectors are integrated as smart sensors for enabling smart air conditioning control. The smart sensors obtain feedback, especially occupants' information, from mobile phones and wearable devices placed on the human body. The information can be used to adjust air conditioners in advance according to humans' intentions, in so-called intention-causing control. Experimental results show that the indoor temperature can be controlled accurately, with errors of less than ±0.1 °C. Rapid cool-down to the optimized indoor capacity can be achieved within 2 min after occupants enter a room. It is also noted that within a two-hour operation, the total compressor output of the smart air conditioner is 48.4% less than that of one using on-off control. The smart air conditioner with wearable devices could detect human temperature and activity during sleep to determine the sleeping state and adjust the sleeping function flexibly. The sleeping function optimized by the smart air conditioner with wearable devices could reduce energy consumption by up to 46.9% and maintain human health. The presented smart air conditioner could provide a comfortable environment and achieve the goals of energy conservation and environmental protection. PMID:24961213

  2. Smart sensors enable smart air conditioning control.

    PubMed

    Cheng, Chin-Chi; Lee, Dasheng

    2014-01-01

    In this study, mobile phones, wearable devices, and temperature and human motion detectors are integrated as smart sensors for enabling smart air conditioning control. The smart sensors obtain feedback, especially occupants' information, from mobile phones and wearable devices placed on the human body. The information can be used to adjust air conditioners in advance according to humans' intentions, in so-called intention-causing control. Experimental results show that the indoor temperature can be controlled accurately, with errors of less than ±0.1 °C. Rapid cool-down to the optimized indoor capacity can be achieved within 2 min after occupants enter a room. It is also noted that within a two-hour operation, the total compressor output of the smart air conditioner is 48.4% less than that of one using on-off control. The smart air conditioner with wearable devices could detect human temperature and activity during sleep to determine the sleeping state and adjust the sleeping function flexibly. The sleeping function optimized by the smart air conditioner with wearable devices could reduce energy consumption by up to 46.9% and maintain human health. The presented smart air conditioner could provide a comfortable environment and achieve the goals of energy conservation and environmental protection. PMID:24961213

  3. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enable accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies of accurate mapping without ground control points: first for a block configuration, then for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  4. A two-dimensional deformable phantom for quantitatively verifying deformation algorithms

    SciTech Connect

    Kirby, Neil; Chuang, Cynthia; Pouliot, Jean

    2011-08-15

    Purpose: The incorporation of deformable image registration into the treatment planning process is rapidly advancing. For this reason, the methods used to verify the underlying deformation algorithms must evolve equally fast. This manuscript proposes a two-dimensional deformable phantom, which can objectively verify the accuracy of deformation algorithms, as the next step for improving these techniques. Methods: The phantom represents a single plane of the anatomy of a head and neck patient. Inflation of a balloon catheter inside the phantom simulates tumor growth. CT and camera images of the phantom are acquired before and after its deformation. Nonradiopaque markers reside on the surface of the deformable anatomy and are visible through an acrylic plate, which enables an optical camera to measure their positions, thus establishing the ground-truth deformation. This measured deformation is directly compared to the predictions of deformation algorithms using several similarity metrics. The ratio of the number of points with more than a 3 mm deformation error to the number of points deformed by more than 3 mm is used as an error metric to evaluate algorithm accuracy. Results: An optical method of characterizing deformation has been successfully demonstrated. In the tests of this method, the balloon catheter deforms 32 of the 54 surface markers by more than 3 mm. Different deformation errors result from the different similarity metrics. The most accurate deformation predictions had an error of 75%. Conclusions: The results presented here demonstrate the utility of the phantom for objectively verifying deformation algorithms and determining which is the most accurate. They also indicate that the phantom would benefit from more electron-density heterogeneity. The reduction of the deformable anatomy to a two-dimensional system allows for the use of nonradiopaque markers, which do not influence deformation algorithms. This is the fundamental advantage of this phantom design.
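
    The error metric described above (the count of points with more than a 3 mm prediction error, relative to the count of points deformed by more than 3 mm) can be sketched as follows; the array names and shapes are assumptions for illustration, not the authors' code:

```python
import numpy as np

def deformation_error_metric(ground_truth, predicted, threshold_mm=3.0):
    """Ratio of markers with a prediction error > threshold to markers
    whose true deformation exceeds the threshold.

    ground_truth, predicted: (N, 2) arrays of per-marker displacement
    vectors in mm (measured optically vs. predicted by the algorithm).
    """
    gt_magnitude = np.linalg.norm(ground_truth, axis=1)
    error = np.linalg.norm(predicted - ground_truth, axis=1)
    deformed = np.count_nonzero(gt_magnitude > threshold_mm)
    if deformed == 0:
        return 0.0
    return np.count_nonzero(error > threshold_mm) / deformed
```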

  5. Developments of global greenhouse gas retrieval algorithm using Aerosol information from GOSAT-CAI

    NASA Astrophysics Data System (ADS)

    Kim, Woogyung; kim, Jhoon; Jung, Yeonjin; lee, Hanlim; Boesch, Hartmut

    2014-05-01

    Human activities have increased the atmospheric CO2 concentration since the beginning of the Industrial Revolution, with the concentration at the Mauna Loa observatory recently exceeding 400 ppm for the first time (IPCC, 2007). However, our current knowledge of the carbon cycle is still insufficient due to a lack of observations. Satellite measurement is one of the most effective approaches for improving the accuracy of carbon source and sink estimates, by monitoring the global CO2 distribution with high spatio-temporal resolution (Rayner and O'Brien, 2001; Houweling et al., 2004). GOSAT has provided valuable information on the global CO2 trend, extending our understanding of CO2 and informing preparations for future satellite missions. However, due to its physical limitations, GOSAT CO2 retrieval results have low spatial resolution and cannot cover a wide area. Another obstacle to GOSAT CO2 retrieval is low data availability, mainly due to contamination by clouds and aerosols. In East Asia in particular, one of the most important aerosol source regions, successful retrievals are difficult to obtain due to high aerosol concentrations. The main purpose of this study is to improve the data availability of GOSAT CO2 retrieval. In this study, the current state of CO2 retrieval algorithm development is introduced and preliminary results are shown. The algorithm is based on the optimal estimation method and utilizes VLIDORT, a vector discrete ordinate radiative transfer model. This prototype algorithm, developed from various combinations of state vectors to find accurate CO2 concentrations, shows reasonable results. In particular, the aerosol retrieval algorithm using GOSAT-CAI measurements, which provide aerosol information for the same area as the GOSAT-FTS measurements, is utilized as input to the CO2 retrieval. Other CO2 retrieval algorithms use chemical transport model results or climatologically expected values as aerosol information, which is the main reason for their low data availability.
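
    The optimal estimation method mentioned above typically iterates a Gauss-Newton update of the standard Rodgers form. A minimal sketch of one such step is given below, using the conventional notation (prior state x_a with covariance S_a, measurement y with covariance S_e, forward model F and Jacobian K); this is an illustration of the general method, not the study's retrieval code:

```python
import numpy as np

def oe_update(x_i, x_a, y, F_xi, K, S_a, S_e):
    """One Gauss-Newton step of the optimal estimation method.

    x_i:  current state iterate
    x_a:  a priori state, S_a its covariance
    y:    measurement vector, S_e its covariance
    F_xi: forward model evaluated at x_i
    K:    Jacobian dF/dx at x_i
    """
    Sa_inv = np.linalg.inv(S_a)
    Se_inv = np.linalg.inv(S_e)
    A = K.T @ Se_inv @ K + Sa_inv                      # Hessian approximation
    b = K.T @ Se_inv @ (y - F_xi + K @ (x_i - x_a))    # gradient-side term
    return x_a + np.linalg.solve(A, b)
```

    For a linear forward model the update reaches the posterior mean in a single step, which makes the sketch easy to check.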

  6. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave-function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  7. QuShape: Rapid, accurate, and best-practices quantification of nucleic acid probing information, resolved by capillary electrophoresis

    PubMed Central

    Karabiber, Fethullah; McGinnis, Jennifer L.; Favorov, Oleg V.; Weeks, Kevin M.

    2013-01-01

    Chemical probing of RNA and DNA structure is a widely used and highly informative approach for examining nucleic acid structure and for evaluating interactions with protein and small-molecule ligands. Use of capillary electrophoresis to analyze chemical probing experiments yields hundreds of nucleotides of information per experiment and can be performed on automated instruments. Extraction of the information from capillary electrophoresis electropherograms is a computationally intensive multistep analytical process, and no current software provides rapid, automated, and accurate data analysis. To overcome this bottleneck, we developed a platform-independent, user-friendly software package, QuShape, that yields quantitatively accurate nucleotide reactivity information with minimal user supervision. QuShape incorporates newly developed algorithms for signal decay correction, alignment of time-varying signals within and across capillaries and relative to the RNA nucleotide sequence, and signal scaling across channels or experiments. An analysis-by-reference option enables multiple, related experiments to be fully analyzed in minutes. We illustrate the usefulness and robustness of QuShape by analysis of RNA SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) experiments. PMID:23188808

  8. First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.

    2013-08-01

    We present an orbit determination method based on genetic algorithms. Contrary to usual estimation approaches, mainly based on least-squares methods, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. They can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network enable accurate orbital arcs to be built at the few-centimeter level and used as a reference orbit; in this case, the basic observations are time series of ranges obtained from various tracking stations. We also show results obtained from observations acquired by the two TAROT telescopes of the Telecom-2D satellite operated by CNES; in that case, the observations are time series of azimuths and elevations as seen from the two telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, and (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, is searched within a preliminarily chosen interval. The algorithm is expected to converge towards an optimum within a reasonable computational time.
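
    The estimation kernel described above can be illustrated with a minimal genetic algorithm: each parameter is drawn from its preset search interval, and the population evolves by selection, crossover, and mutation until a residual function is minimized. This sketch is hypothetical and far simpler than the authors' implementation; the residual function stands in for the comparison between propagated and observed ranges:

```python
import random

def genetic_fit(residual_fn, bounds, pop_size=60, generations=120, seed=1):
    """Minimal genetic algorithm: each parameter is searched within a
    preset interval, with no a priori state vector required."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=residual_fn)
        elite = pop[: pop_size // 4]                     # selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # midpoint crossover
            for i, (lo, hi) in enumerate(bounds):        # bounded mutation
                if rng.random() < 0.2:
                    step = rng.gauss(0, 0.05 * (hi - lo))
                    child[i] = min(hi, max(lo, child[i] + step))
            children.append(child)
        pop = elite + children
    return min(pop, key=residual_fn)
```

    In a real orbit-determination setting the residual would propagate a candidate set of Keplerian elements and sum the squared differences against the tracked ranges.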

  9. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the N structures or the accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm and compare its performance to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques that have knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy, is robust against model mismatch, and allows greater flexibility in the frame structure. Lastly, it reduces the frame construction cost, as adherence to a concise model is not required. PMID:9472834
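
    For illustration, a weighted least-squares rigid registration between two marker sets can be computed in closed form via the Kabsch/Procrustes solution. This is a generic sketch of least-squares registration under per-point weights, not the paper's rod-based formulation:

```python
import numpy as np

def rigid_register(src, dst, weights=None):
    """Weighted least-squares rigid registration (Kabsch/Procrustes).

    Finds R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2,
    so that dst ~= src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                     # weighted centroids
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])     # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```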

  10. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the N structures or the accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm and compare its performance to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques that have knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy, is robust against model mismatch, and allows greater flexibility in the frame structure. Lastly, it reduces the frame construction cost, as adherence to a concise model is not required.

  11. Enabling technology for human collaboration.

    SciTech Connect

    Murphy, Tim Andrew; Jones, Wendell Bruce; Warner, David Jay; Doser, Adele Beatrice; Johnson, Curtis Martin; Merkle, Peter Benedict

    2003-11-01

    This report summarizes the results of a five-month LDRD late-start project which explored the potential of enabling technology to improve the performance of small groups. The purpose was to investigate and develop new methods to assist groups working in high-consequence, high-stress, ambiguous, and time-critical situations, especially those for which it is impractical to adequately train or prepare. A testbed was constructed for exploratory analysis of a small group engaged in tasks with high cognitive and communication performance requirements. The system consisted of five computer stations, four equipped with special devices to collect physiologic, somatic, audio, and video data. Test subjects were recruited and engaged in a cooperative video game. Each team member was provided with a sensor array for physiologic and somatic data collection while playing the video game. We explored the potential for real-time signal analysis to provide information that enables emergent, desirable group behavior and improved task performance. The data collected in this study included audio, video, game scores, and physiological, somatic, keystroke, and mouse-movement data. The use of self-organizing maps (SOMs) was explored to search for emergent trends in the physiological data as it correlated with the video, audio, and game scores. This exploration resulted in two complementary analysis approaches: an individual SOM and a group SOM. The individual SOM was trained on each person's data and used to monitor the effectiveness and stress level of each member of the group. The group SOM was trained on the data of the entire group and used to monitor group effectiveness and dynamics. Results suggested that both types of SOMs were required to adequately track evolutions and shifts in group effectiveness. Four subjects were used in the data collection and development of these tools. This report documents a proof of concept.
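
    A self-organizing map of the kind described above can be trained in a few lines of NumPy. The grid size, learning-rate schedule, and data layout below are assumptions for illustration, not the report's analysis pipeline:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=40, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map trainer.

    data: (n_samples, n_features) array, e.g. per-sample physiological
    feature vectors. Returns the trained (h, w, n_features) weight grid.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5     # shrinking neighborhood
        for x in rng.permutation(data):
            # best-matching unit for this sample
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * nbh * (x - weights)         # pull neighborhood toward x
    return weights
```

    Trained this way, the map's nodes settle onto the data distribution, so trajectories of best-matching units can then be inspected for trends, as the report does for individual and group effectiveness.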

  12. Enabling Participation In Exoplanet Science

    NASA Astrophysics Data System (ADS)

    Taylor, Stuart F.

    2015-08-01

    Determining the distribution of exoplanets has required the contributions of a community of astronomers, all of whom need the support of colleagues to finish their projects in a way that enables them to enter new collaborations and continue contributing to exoplanet science. The contributions of each member of the astronomy community should be encouraged and must never be intentionally obstructed. We present one member's long pursuit of a contributing role in the exoplanet community, first through transit photometry performed while commissioning the telescopes of a new observatory, and later through interpretation of distributions in exoplanet parameter data. We describe how the photometry projects were presented as successful by others who claimed to have completed them, and how an institution that requires its employees to present results while omitting one member obstructs collaboration and prevents the results from being published in what can genuinely be called a peer-reviewed fashion. We argue that tolerating a group that obstructs one member from finishing participation and then falsely denies credit is counterproductive to doing science, and that expecting the excluded member to start something different elsewhere is destructive to the entire profession. We repeat previously published appeals to help ostracized members "go around the observatory" by calling for discussion on how the community should act to reverse cases of shunning, bullying, and other abuses. Without better recourse and support from the community, actions that fall short of standard collegial behavior end up forcing good members out of the community. The most important action is to enable an ostracized member to have recourse to participating in group papers, either by working through other authors or through the journal. All journals and authors must expect that no co-author is keeping out a major

  13. SPA: A Probabilistic Algorithm for Spliced Alignment

    PubMed Central

    van Nimwegen, Erik; Paul, Nicodeme; Sheridan, Robert; Zavolan, Mihaela

    2006-01-01

    Recent large-scale cDNA sequencing efforts show that elaborate patterns of splice variation are responsible for much of the proteome diversity in higher eukaryotes. To obtain an accurate account of the repertoire of splice variants, and to gain insight into the mechanisms of alternative splicing, it is essential that cDNAs are very accurately mapped to their respective genomes. Currently available algorithms for cDNA-to-genome alignment do not reach the necessary level of accuracy because they use ad hoc scoring models that cannot correctly trade off the likelihoods of various sequencing errors against the probabilities of different gene structures. Here we develop a Bayesian probabilistic approach to cDNA-to-genome alignment. Gene structures are assigned prior probabilities based on the lengths of their introns and exons, and based on the sequences at their splice boundaries. A likelihood model for sequencing errors takes into account the rates at which misincorporation, as well as insertions and deletions of different lengths, occurs during sequencing. The parameters of both the prior and likelihood model can be automatically estimated from a set of cDNAs, thus enabling our method to adapt itself to different organisms and experimental procedures. We implemented our method in a fast cDNA-to-genome alignment program, SPA, and applied it to the FANTOM3 dataset of over 100,000 full-length mouse cDNAs and a dataset of over 20,000 full-length human cDNAs. Comparison with the results of four other mapping programs shows that SPA produces alignments of significantly higher quality. In particular, the quality of the SPA alignments near splice boundaries and SPA's mapping of the 5′ and 3′ ends of the cDNAs are highly improved, allowing for more accurate identification of transcript starts and ends, and accurate identification of subtle splice variations. Finally, our splice boundary analysis on the human dataset suggests the existence of a novel non-canonical splice

  14. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision to assess an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments and canvas on the production line, and in the nesting of irregular shapes into a given surface. The common method of subtracting the sum of defective areas from the total area does not give an acceptable indication of how much of the 'good' area can actually be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims; that is, how much area the user can actually use, not the total good area as estimated by the supplier. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument over disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The basic algorithm developed for area scanning of a digital image will be presented, and the implementation of an improved scanning algorithm will be discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.

  15. Enabling Pinpoint Landing on Mars

    NASA Technical Reports Server (NTRS)

    Hattis, Phil; George, Sean; Wolf, Aron A.

    2005-01-01

    This slide presentation reviews the entry, descent, and landing (EDL) of a pinpoint landing (PPL) on Mars. These PPL missions will be required to deliver about 1000 kg of useful payload to the surface of Mars, so soft landings are of primary interest. The landing sites will be at mid to high latitudes, with possible sites about 2.5 km above the Martian mean surface altitude. The applicable EDL is described and reviewed in phases. The evaluation approach and the requirements for an accurate landing are reviewed. The dispersions of the unguided aeroshell entry phase due to trajectory and ballistic coefficient variations are shown in charts covering three entry types: entry from orbit and two types of direct entry. There is discussion of steerable subsonic parachute control versus dispersions, and of propulsive-phase delta-velocity versus dispersions. Data are presented for the three trajectory phases (i.e., aeroshell, supersonic chute, and subsonic chute) for direct entry and low-orbit entry. The results of the analysis are presented, including possibilities for mitigating dispersions. The analysis of the navigation error is summarized, and trajectory biasing for Martian winds is assessed.

  16. DMD-enabled confocal microendoscopy

    NASA Astrophysics Data System (ADS)

    Lane, Pierre M.; Dlugan, Andrew L. P.; MacAulay, Calum E.

    2001-05-01

    Conventional endoscopy is limited to imaging macroscopic views of tissue. The British Columbia Cancer Research Center, in collaboration with Digital Optical Imaging Corp., is developing a fiber-bundle-based microendoscopy system to enable in vivo confocal imaging of cells and tissue structure through the biopsy channel of an endoscope, hypodermic needle, or catheter. The feasibility of imaging individual cells and tissue architecture will be presented using both reflectance and tissue auto-fluorescence imaging modes. The system consists of a coherent fiber bundle, a low-magnification high-NA objective lens, a Digital Micromirror Device™ (DMD), a light source, and a CCD camera. The novel approach is the precise control and manipulation of light flow into and out of individual optical fibers. This control is achieved by employing a DMD to illuminate and detect light from selected fibers such that only the core of each fiber is illuminated or detected. The objective of the research is to develop a low-cost, clinically viable microendoscopy system for a range of detection, diagnostic, localization, and differentiation uses associated with cancer and pre-cancerous conditions. Currently, multi-wavelength reflectance confocal images with 1 micrometer lateral resolution and 1.6 micrometer axial resolution have been achieved using a 0.95 mm bundle with 30,000 fibers.

  17. Fast and accurate circle detection using gradient-direction-based segmentation.

    PubMed

    Wu, Jianping; Chen, Ke; Gao, Xiaohui

    2013-06-01

    We present what is to our knowledge the first-ever fitting-based circle detection algorithm, namely, the fast and accurate circle (FACILE) detection algorithm, based on gradient-direction-based edge clustering and direct least square fitting. Edges are segmented into sections based on gradient directions, and each section is validated separately; valid arcs are then fitted and further merged to extract more accurate circle information. We implemented the algorithm in C++ and compared it with four other algorithms. Testing on simulated data showed FACILE was far superior to the randomized Hough transform, the standard Hough transform, and fast circle detection using gradient pair vectors with regard to processing speed and detection reliability. Testing on publicly available standard datasets showed FACILE outperformed robust and precise circular detection, a state-of-the-art arc detection method, by 35% with regard to recognition rate, and is also a significant improvement over the latter in processing speed. PMID:24323106
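
    The direct least-squares fitting step named above can be illustrated with the classic Kåsa circle fit, which recovers the circle parameters from a linear system; the exact fitting variant used by FACILE is an assumption here:

```python
import numpy as np

def fit_circle(points):
    """Direct least-squares (Kasa) circle fit.

    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense,
    then converts to centre (cx, cy) and radius r.
    """
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r
```

    Because the fit is linear, it is fast enough to run once per candidate arc, which is what makes fitting-based detection competitive with Hough-style voting.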

  18. Enabling individualized therapy through nanotechnology

    PubMed Central

    Sakamoto, Jason H.; van de Ven, Anne L.; Godin, Biana; Blanco, Elvin; Serda, Rita E.; Grattoni, Alessandro; Ziemys, Arturas; Bouamrani, Ali; Hu, Tony; Ranganathan, Shivakumar I.; De Rosa, Enrica; Martinez, Jonathan O.; Smid, Christine A.; Buchanan, Rachel M.; Lee, Sei-Young; Srinivasan, Srimeenakshi; Landry, Matthew; Meyn, Anne; Tasciotti, Ennio; Liu, Xuewu; Decuzzi, Paolo; Ferrari, Mauro

    2010-01-01

    Individualized medicine is the healthcare strategy that rebukes the idiomatic dogma of 'losing sight of the forest for the trees'. We are entering a new era of healthcare in which it is no longer acceptable to develop and market a drug that is effective for only 80% of the patient population. The emergence of "-omic" technologies (e.g., genomics, transcriptomics, proteomics, metabolomics) and advances in systems biology are magnifying the deficiencies of standardized therapy, which often provides little treatment latitude for accommodating patients' physiologic idiosyncrasies. A personalized approach to medicine is not a novel concept. Ever since the scientific community began unraveling the mysteries of the genome, the promise of discarding generic treatment regimens in favor of patient-specific therapies has become more feasible and realistic. One of the major scientific impediments of this movement towards personalized medicine has been the need for technological enablement. Nanotechnology is projected to play a critical role in patient-specific therapy; however, this transition will depend heavily upon the evolutionary development of a systems biology approach to clinical medicine based upon "-omic" technology analysis and integration. This manuscript provides a forward-looking assessment of the promise of nanomedicine as it pertains to individualized medicine and establishes a technology "snapshot" of the current state of nano-based products across a vast array of clinical indications and range of patient specificity. Other issues, such as market-driven hurdles and regulatory compliance reform, are anticipated to "self-correct" in accordance with scientific advancement and healthcare demand. These peripheral, non-scientific concerns are not addressed at length in this manuscript; however, they do exist, and their impact on the paradigm-shifting healthcare transformation towards individualized medicine will be critical to its success. PMID:20045055

  19. Solar Glitter -- Microsystems Enabled Photovoltaics

    NASA Astrophysics Data System (ADS)

    Nielson, Gregory N.

    2012-02-01

    Many products have significantly benefited from, or been enabled by, the ability to manufacture structures at an ever decreasing length scale. Obvious examples of this include integrated circuits, flat panel displays, micro-scale sensors, and LED lighting. These industries have benefited from length scale effects in terms of improved performance, reduced cost, or new functionality (or a combination of these). In a similar manner, we are working to take advantage of length scale effects that exist within solar photovoltaic (PV) systems. While this is a significant step away from traditional approaches to solar power systems, the benefits in terms of new functionality, improved performance, and reduced cost for solar power are compelling. We are exploring scale effects that result from the size of the solar cells within the system. We have developed unique cells of both crystalline silicon and III-V materials that are very thin (5-20 microns thick) and have very small lateral dimensions (on the order of hundreds of microns across). These cells minimize the amount of expensive semiconductor material required for the system, allow improved cell performance, and provide an expanded design space for both module and system concepts, allowing optimized power output and reduced module and balance-of-system costs. Furthermore, the small size of the cells allows for unique high-efficiency, high-flexibility PV panels and new building-integrated PV options that are currently unavailable. These benefits provide a pathway for PV power to become cost competitive with grid power and allow unique power solutions independent of grid power.

  20. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state and tracking performance, and computational complexity. In comparison with existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence can be obtained, and (2) the network is equipped with a link-selection capability that can circumvent link failures and improve estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
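
    The LMS recursion at the adaptive core of such algorithms can be sketched as follows for a single node; the distributed combination and link-selection logic of the paper are omitted, and the step size and data model below are illustrative assumptions:

```python
import numpy as np

def lms_estimate(X, d, mu=0.05):
    """Plain LMS adaptive filter.

    X: (n, p) regressor vectors, one per time step.
    d: (n,) desired signal, d_i = x_i @ w0 + noise.
    Returns the final weight estimate after one pass over the data.
    """
    w = np.zeros(X.shape[1])
    for x_i, d_i in zip(X, d):
        e = d_i - x_i @ w          # a priori estimation error
        w = w + mu * e * x_i       # stochastic-gradient update
    return w
```

    In a diffusion-style network, each node would run this update on its local data and then combine its estimate with those received over the selected links.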

  1. Hydrologic Prediction Through Earthcube Enabled Hydrogeophysical Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Johnson, D.

    2012-12-01

    Accurate prediction of hydrologic processes is contingent on the successful interaction of multiple components, including (1) accurate conceptual and numerical models describing physical, chemical and biological processes, (2) a numerical framework for the integration of such processes and (3) multidisciplinary temporal data streams that feed such models. Over the past ten years the main focus in the hydrogeophysical community has been the advancement and development of conceptual and numerical models. While this advancement still poses numerous challenges (e.g. the in silico modeling of microbiological processes and the coupling of models across different interfaces), there is now a fairly good high-level understanding of the types and scales of, and the interplay between, processes. In parallel with this advancement there have been rapid developments in data acquisition capabilities (ranging from satellite-based remote sensing to low-cost sensor networks) and the associated cyberinfrastructure, which allows for mash-ups of data from heterogeneous and independent sensor networks. The tools for this have generally come from outside the hydrogeophysical community: partly these are specific scientific tools developed through NSF, DOE and NASA funding, and partly these are general web 2.0 tools or tools developed under commercial initiatives (e.g. the IBM Smarter Planet initiative). One challenge facing the hydrogeophysical community is how to effectively harness all these tools to develop hydrologic prediction tools. One of the primary opportunities for this is the NSF-funded EarthCube effort (http://earthcube.ning.com/). The goal of EarthCube is to transform the conduct of research by supporting the development of community-guided cyberinfrastructure to integrate data and information for knowledge management across the Geosciences.
Note that EarthCube is part of a larger NSF effort (Cyberinfrastructure for the 21st Century, CIF21), and that EarthCube is driven by the vision

  2. A Strapdown Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter

    PubMed Central

    Gao, Wei; Zhang, Ya; Wang, Jianguo

    2014-01-01

    The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multi-sensor asynchrony will introduce errors into the system. To deal with this problem, the sampling interval is conventionally subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a cubature Kalman filter (CKF) is proposed. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchrony into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, the CKF is introduced in this new algorithm to improve the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on the extended Kalman filter (EKF), the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
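
At the core of any CKF is the spherical-radial cubature rule: for an n-dimensional state, 2n equally weighted points placed at ±√n along the columns of a Cholesky factor of the covariance. A minimal sketch of point generation (hypothetical helper names, hard-coded to a 2-D state for brevity; not the paper's implementation):

```python
import math

def cholesky2(P):
    """Cholesky factor L of a 2x2 symmetric positive-definite P = L L^T."""
    l00 = math.sqrt(P[0][0])
    l10 = P[1][0] / l00
    l11 = math.sqrt(P[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def cubature_points(mean, P):
    """2n cubature points x_i = mean +/- sqrt(n) * L e_i, weights 1/(2n)."""
    n = len(mean)
    L = cholesky2(P)
    s = math.sqrt(n)
    pts = []
    for j in range(n):
        col = [L[0][j], L[1][j]]             # j-th column of L
        pts.append([mean[k] + s * col[k] for k in range(n)])
        pts.append([mean[k] - s * col[k] for k in range(n)])
    return pts
```

Propagating these points through the process and measurement models, then recomputing weighted means and covariances, gives the CKF time and measurement updates; by construction the raw points reproduce the state mean and covariance exactly.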

  3. A new algorithm for segmentation of cardiac quiescent phases and cardiac time intervals using seismocardiography

    NASA Astrophysics Data System (ADS)

    Jafari Tadi, Mojtaba; Koivisto, Tero; Pänkäälä, Mikko; Paasio, Ari; Knuutila, Timo; Teräs, Mika; Hänninen, Pekka

    2015-03-01

    Systolic time intervals (STI) have significant diagnostic values for a clinical assessment of the left ventricle in adults. This study was conducted to explore the feasibility of using seismocardiography (SCG) to measure the systolic timings of the cardiac cycle accurately. An algorithm was developed for the automatic localization of the cardiac events (e.g. the opening and closing moments of the aortic and mitral valves). Synchronously acquired SCG and electrocardiography (ECG) enabled an accurate beat to beat estimation of the electromechanical systole (QS2), pre-ejection period (PEP) index and left ventricular ejection time (LVET) index. The performance of the algorithm was evaluated on a healthy test group with no evidence of cardiovascular disease (CVD). STI values were corrected based on Weissler's regression method in order to assess the correlation between the heart rate and STIs. One can see from the results that STIs correlate poorly with the heart rate (HR) on this test group. An algorithm was developed to visualize the quiescent phases of the cardiac cycle. A color map displaying the magnitude of SCG accelerations for multiple heartbeats visualizes the average cardiac motions and thereby helps to identify quiescent phases. High correlation between the heart rate and the duration of the cardiac quiescent phases was observed.

  4. A fast approach for accurate content-adaptive mesh generation.

    PubMed

    Yang, Yongyi; Wernick, Miles N; Brankov, Jovan G

    2003-01-01

    Mesh modeling is an important problem with many applications in image processing. A key issue in mesh modeling is how to generate a mesh structure that well represents an image by adapting to its content. We propose a new approach to mesh generation, which is based on a theoretical result derived on the error bound of a mesh representation. In the proposed method, the classical Floyd-Steinberg error-diffusion algorithm is employed to place mesh nodes in the image domain so that their spatial density varies according to the local image content. Delaunay triangulation is next applied to connect the mesh nodes. The result of this approach is that fine mesh elements are placed automatically in regions of the image containing high-frequency features while coarse mesh elements are used to represent smooth areas. The proposed algorithm is noniterative, fast, and easy to implement. Numerical results demonstrate that, at very low computational cost, the proposed approach can produce mesh representations that are more accurate than those produced by several existing methods. Moreover, it is demonstrated that the proposed algorithm performs well with images of various kinds, even in the presence of noise. PMID:18237961
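
The node-placement step can be illustrated with a small sketch (hypothetical function names; the paper pairs this with Delaunay triangulation, which is omitted here). Floyd-Steinberg error diffusion thresholds a feature-density map cell by cell and pushes the quantization error onto unvisited neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights, so emitted nodes come out denser where the density map is high:

```python
def place_nodes(density):
    """Emit one mesh node per cell whose accumulated value crosses 0.5,
    diffusing the quantization error to unvisited neighbors (raster order)."""
    h, w = len(density), len(density[0])
    grid = [row[:] for row in density]      # work on a copy
    nodes = []
    for y in range(h):
        for x in range(w):
            old = grid[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            if new == 1.0:
                nodes.append((x, y))        # place a mesh node here
            err = old - new                 # quantization error to diffuse
            if x + 1 < w:
                grid[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    grid[y + 1][x - 1] += err * 3 / 16
                grid[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    grid[y + 1][x + 1] += err * 1 / 16
    return nodes
```

In the paper's setting the density map is derived from local image content (e.g. gradient magnitude), and something like scipy.spatial.Delaunay would then connect the emitted nodes into a mesh.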

  5. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  6. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
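
The Hoeffding bound that VFDT-style learners rely on is compact enough to state directly: after n observations of a statistic with range R, the true mean lies within ε of the observed mean with probability 1 − δ. A sketch (illustrative, not the uVFDTc code):

```python
import math

def hoeffding_bound(R, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): half-width of the
    confidence interval around the observed mean after n samples."""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))
```

A leaf splits once the observed gain gap between the two best attributes exceeds ε; since ε shrinks as n grows, decisions become confident with more stream data. For information gain over c classes, R = log2(c).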

  7. Evolving generalized Voronoi diagrams for accurate cellular image segmentation.

    PubMed

    Yu, Weimiao; Lee, Hwee Kuan; Hariharan, Srivats; Bu, Wenyu; Ahmed, Sohail

    2010-04-01

    Analyzing cellular morphologies on a cell-by-cell basis is vital for drug discovery, cell biology, and many other biological studies. Interactions between cells in their culture environments cause cells to touch each other in acquired microscopy images. Because of this phenomenon, cell segmentation is a challenging task, especially when the cells are of similar brightness and of highly variable shapes. The concept of topological dependence and the maximum common boundary (MCB) algorithm are presented in our previous work (Yu et al., Cytometry Part A 2009;75A:289-297). However, the MCB algorithm suffers a few shortcomings, such as low computational efficiency and difficulties in generalizing to higher dimensions. To overcome these limitations, we present the evolving generalized Voronoi diagram (EGVD) algorithm. Utilizing image intensity and geometric information, EGVD preserves topological dependence easily in both 2D and 3D images, such that touching cells can be segmented satisfactorily. A systematic comparison with other methods demonstrates that EGVD is accurate and much more efficient. PMID:20169588

  8. Quality metric for accurate overlay control in <20nm nodes

    NASA Astrophysics Data System (ADS)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets that provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  9. A novel waveband routing algorithm in hierarchical WDM optical networks

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Guo, Xiaojin; Qiu, Shaofeng; Luo, Jiangtao; Zhang, Zhizhong

    2007-11-01

    Hybrid waveband/wavelength switching in intelligent optical networks is gaining more and more academic attention. It is very challenging to develop algorithms that efficiently use waveband switching capability. In this paper, we propose a novel cross-layer routing algorithm, the waveband layered graph routing algorithm (WBLGR), for waveband-switching-enabled optical networks. Extensive simulations show that the WBLGR algorithm can significantly improve performance in terms of reduced call-blocking probability.

  10. Petascale Computing Enabling Technologies Project Final Report

    SciTech Connect

    de Supinski, B R

    2010-02-14

    The Petascale Computing Enabling Technologies (PCET) project addressed challenges arising from current trends in computer architecture that will lead to large-scale systems with many more nodes, each of which uses multicore chips. These factors will soon lead to systems that have over one million processors. Also, the use of multicore chips will lead to less memory and less memory bandwidth per core. We need fundamentally new algorithmic approaches to cope with these memory constraints and the huge number of processors. Further, correct, efficient code development is difficult even with the number of processors in current systems; more processors will only make it harder. The goal of PCET was to overcome these challenges by developing the computer science and mathematical underpinnings needed to realize the full potential of our future large-scale systems. Our research results will significantly increase the scientific output obtained from LLNL large-scale computing resources by improving application scientist productivity and system utilization. Our successes include scalable mathematical algorithms that adapt to these emerging architecture trends and through code correctness and performance methodologies that automate critical aspects of application development as well as the foundations for application-level fault tolerance techniques. PCET's scope encompassed several research thrusts in computer science and mathematics: code correctness and performance methodologies, scalable mathematics algorithms appropriate for multicore systems, and application-level fault tolerance techniques. Due to funding limitations, we focused primarily on the first three thrusts although our work also lays the foundation for the needed advances in fault tolerance. In the area of scalable mathematics algorithms, our preliminary work established that OpenMP performance of the AMG linear solver benchmark and important individual kernels on Atlas did not match the predictions of our

  11. Fast and Accurate Discovery of Degenerate Linear Motifs in Protein Sequences

    PubMed Central

    Levy, Emmanuel D.; Michnick, Stephen W.

    2014-01-01

    Linear motifs mediate a wide variety of cellular functions, which makes their characterization in protein sequences crucial to understanding cellular systems. However, the short length and degenerate nature of linear motifs make their discovery a difficult problem. Here, we introduce MotifHound, an algorithm particularly suited for the discovery of small and degenerate linear motifs. MotifHound performs an exact and exhaustive enumeration of all motifs present in proteins of interest, including all of their degenerate forms, and scores the overrepresentation of each motif based on its occurrence in proteins of interest relative to a background (e.g., proteome) using the hypergeometric distribution. To assess MotifHound, we benchmarked it together with state-of-the-art algorithms. The benchmark consists of 11,880 sets of proteins from S. cerevisiae; in each set, we artificially spiked-in one motif varying in terms of three key parameters, (i) number of occurrences, (ii) length and (iii) the number of degenerate or “wildcard” positions. The benchmark enabled the evaluation of the impact of these three properties on the performance of the different algorithms. The results showed that MotifHound and SLiMFinder were the most accurate in detecting degenerate linear motifs. Interestingly, MotifHound was 15 to 20 times faster at comparable accuracy and performed best in the discovery of highly degenerate motifs. We complemented the benchmark by an analysis of proteins experimentally shown to bind the FUS1 SH3 domain from S. cerevisiae. Using the full-length protein partners as sole information, MotifHound recapitulated most experimentally determined motifs binding to the FUS1 SH3 domain. Moreover, these motifs exhibited properties typical of SH3 binding peptides, e.g., high intrinsic disorder and evolutionary conservation, despite the fact that none of these properties were used as prior information. MotifHound is available (http://michnick.bcm.umontreal.ca or http
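
MotifHound's overrepresentation score is the hypergeometric tail probability. A self-contained sketch (illustrative, not the MotifHound implementation): with N background proteins of which K contain a motif, the chance of seeing at least k motif-bearing proteins among n proteins of interest is:

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): probability of drawing
    at least k motif-bearing proteins in a sample of n from a background
    of N proteins, K of which bear the motif."""
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom
```

For example, finding a motif in 5 of 10 proteins of interest when only 10 of 100 background proteins carry it yields a very small tail probability, flagging the motif as overrepresented; in practice the p-values would also be corrected for the number of motifs tested.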

  12. NPP/NPOESS Tools for Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Route, G.; Grant, K. D.; Hughes, R.

    2010-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS Preparatory Project (NPP) and NPOESS satellites will carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes both NPP and NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. The Northrop Grumman Aerospace Systems (NGAS) Algorithms and Data Products (A&DP) organization is responsible for the algorithms that produce the Environmental Data Records (EDRs), including their quality aspects. As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent NPOESS launches, rapid algorithm updates may be required. Raytheon and Northrop Grumman have developed tools and processes to enable changes to be evaluated, tested, and moved into the operational baseline in a rapid and efficient manner. This presentation will provide an overview of the tools available to the Cal/Val teams to ensure rapid and accurate assessment of algorithm changes, along with the processes in place to ensure baseline integrity.

  13. Ancestral genome inference using a genetic algorithm approach.

    PubMed

    Gao, Nan; Yang, Ning; Tang, Jijun

    2013-01-01

    Recent advancement of technologies has now made it routine to obtain and compare gene orders within genomes. Rearrangements of gene orders by operations such as reversal and transposition are rare events that enable researchers to reconstruct deep evolutionary histories. An important application of genome rearrangement analysis is to infer gene orders of ancestral genomes, which is valuable for identifying patterns of evolution and for modeling the evolutionary processes. Among various available methods, parsimony-based methods (including GRAPPA and MGR) are the most widely used. Since the core algorithms of these methods are solvers for the so-called median problem, providing efficient and accurate median solvers has attracted much attention in this field. The "double-cut-and-join" (DCJ) model uses the single DCJ operation to account for all genome rearrangement events. Because mathematically it is much simpler than handling events directly, parsimony methods using DCJ median solvers have better speed and accuracy. However, the DCJ median problem is NP-hard, and although several exact algorithms are available, they all have great difficulties when the given genomes are distant. In this paper, we present a new algorithm that combines a genetic algorithm (GA) with genomic sorting to produce a new method that can solve the DCJ median problem in limited time and space, especially on large and distant datasets. Our experimental results show that this new GA-based method can find optimal or near optimal results for problems ranging from easy to very difficult. Compared to existing parsimony methods which may severely underestimate the true number of evolutionary events, the sorting-based approach can infer ancestral genomes which are much closer to their true ancestors. The code is available at http://phylo.cse.sc.edu. PMID:23658708
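
The flavor of a GA-based median search can be conveyed with a toy version (illustrative only: it uses a simple undirected breakpoint distance and mutation-only evolution, nowhere near the paper's DCJ machinery). Given input genomes as permutations, it evolves a population, seeded with the inputs, toward a genome minimizing the total distance:

```python
import random

def breakpoints(p, q):
    """Adjacencies of p that are absent from q (undirected, linear genomes)."""
    adj = {frozenset(pair) for pair in zip(q, q[1:])}
    return sum(1 for pair in zip(p, p[1:]) if frozenset(pair) not in adj)

def median_ga(genomes, pop_size=30, gens=200, seed=7):
    """Evolve a genome minimizing total breakpoint distance to the inputs."""
    rng = random.Random(seed)
    m = len(genomes[0])

    def score(g):
        return sum(breakpoints(g, h) + breakpoints(h, g) for h in genomes)

    # seed the population with the input genomes plus random permutations
    pop = [list(g) for g in genomes]
    while len(pop) < pop_size:
        p = list(range(m))
        rng.shuffle(p)
        pop.append(p)
    for _ in range(gens):
        pop.sort(key=score)
        pop = pop[:pop_size // 2]              # truncation selection (elitist)
        children = []
        for parent in pop:
            child = parent[:]
            i, j = rng.randrange(m), rng.randrange(m)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop += children
    best = min(pop, key=score)
    return best, score(best)
```

A serious solver would use DCJ distance, crossover, and sorting-guided moves; the skeleton above only shows the evolve-and-select loop.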

  14. The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm

    NASA Technical Reports Server (NTRS)

    Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.

    2013-01-01

    This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.

  15. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  16. A Hybrid-Cloud Science Data System Enabling Advanced Rapid Imaging & Analysis for Monitoring Hazards

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Moore, A. W.; Fielding, E. J.; Radulescu, C.; Sacco, G.; Stough, T. M.; Mattmann, C. A.; Cervelli, P. F.; Poland, M. P.; Cruz, J.

    2012-12-01

    Volcanic eruptions, landslides, and levee failures are some examples of hazards that can be more accurately forecasted with sufficient monitoring of precursory ground deformation, such as the high-resolution measurements from GPS and InSAR. In addition, coherence and reflectivity change maps can be used to detect surface change due to lava flows, mudslides, tornadoes, floods, and other natural and man-made disasters. However, it is difficult for many volcano observatories and other monitoring agencies to process GPS and InSAR products in an automated scenario needed for continual monitoring of events. Additionally, numerous interoperability barriers exist in multi-sensor observation data access, preparation, and fusion to create actionable products. Combining high spatial resolution InSAR products with high temporal resolution GPS products--and automating this data preparation & processing across global-scale areas of interests--present an untapped science and monitoring opportunity. The global coverage offered by satellite-based SAR observations, and the rapidly expanding GPS networks, can provide orders of magnitude more data on these hazardous events if we have a data system that can efficiently and effectively analyze the voluminous raw data, and provide users the tools to access data from their regions of interest. Currently, combined GPS & InSAR time series are primarily generated for specific research applications, and are not implemented to run on large-scale continuous data sets and delivered to decision-making communities. We are developing an advanced service-oriented architecture for hazard monitoring leveraging NASA-funded algorithms and data management to enable both science and decision-making communities to monitor areas of interests via seamless data preparation, processing, and distribution. Our objectives: * Enable high-volume and low-latency automatic generation of NASA Solid Earth science data products (InSAR and GPS) to support hazards

  17. Spectral Representations of Uncertainty: Algorithms and Applications

    SciTech Connect

    George Em Karniadakis

    2005-04-24

    The objectives of this project were: (1) Develop a general algorithmic framework for stochastic ordinary and partial differential equations. (2) Set polynomial chaos method and its generalization on firm theoretical ground. (3) Quantify uncertainty in large-scale simulations involving CFD, MHD and microflows. The overall goal of this project was to provide DOE with an algorithmic capability that is more accurate and three to five orders of magnitude more efficient than the Monte Carlo simulation.

  18. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  19. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
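
The report's core idea, clustering a random sample and then labeling the full dataset with the resulting centers, can be sketched as follows (illustrative names and parameters, not the authors' code):

```python
import random

def lloyd(points, k, iters=50, rng=None):
    """Plain Lloyd's k-means on 2-D points."""
    rng = rng or random.Random(0)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                          + (p[1] - centers[c][1]) ** 2)
            buckets[j].append(p)
        centers = [(sum(p[0] for p in b) / len(b), sum(p[1] for p in b) / len(b))
                   if b else centers[j]          # keep empty clusters in place
                   for j, b in enumerate(buckets)]
    return centers

def sampled_kmeans(points, k, sample_frac=0.1, seed=0):
    """Cluster a random sample, then label the full dataset with those centers."""
    rng = random.Random(seed)
    sample = rng.sample(points, max(k, int(sample_frac * len(points))))
    centers = lloyd(sample, k, rng=rng)
    labels = [min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
              for p in points]
    return centers, labels
```

Only the sample (here 10% of the data) is iterated over, so the expensive Lloyd loop shrinks proportionally; the full pass at the end is a single cheap assignment step.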

  20. Two Types of Bureaucracy: Enabling and Coercive.

    ERIC Educational Resources Information Center

    Adler, Paul S.; Borys, Bryan

    1996-01-01

    Proposes a conceptualization of workflow formalization that helps reconcile contrasting assessments of bureaucracy as alienating or enabling to employees. Uses research on equipment technology design to identify enabling and coercive types of formalization. Identifies some forces tending to discourage an enabling orientation and some persistent…

  1. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  2. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  3. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  4. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
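    A minimal sketch of the fusion idea on synthetic data: similarity scores from several hypothetical matchers are combined by an optimal linear weighting. Ordinary least squares stands in for PLSR here (PLSR additionally handles collinearity among the score columns); all data, matchers, and noise levels below are invented for illustration.

```python
import numpy as np

# Toy stand-in: rows are face pairs, columns are similarity scores from
# seven hypothetical matchers; y is 1 for "same person", 0 otherwise.
rng = np.random.default_rng(1)
n, n_algorithms = 200, 7
y = rng.integers(0, 2, n)
noise = np.linspace(0.3, 1.5, n_algorithms)   # each matcher has its own noise
scores = y[:, None] + rng.normal(0.0, noise, (n, n_algorithms))

# Optimal linear weighting of the score columns by least squares
# (PLSR plays this role in the paper while also handling collinearity).
X = np.column_stack([np.ones(n), scores])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
fused = X @ w

def accuracy(pred, truth):
    return np.mean((pred > 0.5) == truth)

best_single = max(accuracy(scores[:, j], y) for j in range(n_algorithms))
fused_acc = accuracy(fused, y)
```

    The fit automatically downweights the noisier matchers, which is the mechanism behind the error-rate reduction the paper reports.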

  5. Anatomical Brain Images Alone Can Accurately Diagnose Chronic Neuropsychiatric Illnesses

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S.

    2012-01-01

    Objective Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. Methods We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. Results In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Conclusions Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these

  6. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  7. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971

  8. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
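    The adaptive step-size idea can be illustrated with a toy cuckoo-style search. Gaussian perturbations stand in for the Lévy flights of the published CSA, and the geometric decay schedule is invented for the sketch; none of this is the paper's exact algorithm.

```python
import numpy as np

def adaptive_cuckoo_search(f, dim, bounds, n_nests=15, iters=200, pa=0.25, seed=0):
    """Toy cuckoo-style minimizer whose step size decays over iterations
    (the adaptive idea from the abstract)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    for t in range(1, iters + 1):
        step = (hi - lo) * 0.1 * 0.95 ** t     # adaptive: shrink each iteration
        # Propose new solutions around current nests; keep improvements.
        trial = np.clip(nests + step * rng.normal(size=nests.shape), lo, hi)
        trial_fit = np.apply_along_axis(f, 1, trial)
        better = trial_fit < fitness
        nests[better], fitness[better] = trial[better], trial_fit[better]
        # Abandon a fraction pa of the worst nests (diversification).
        n_bad = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_bad:]
        nests[worst] = rng.uniform(lo, hi, (n_bad, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
    best = np.argmin(fitness)
    return nests[best], fitness[best]
```

    Early iterations take large exploratory steps; late iterations take small refining steps, which is what "faster convergence to the global optimal solutions" refers to.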

  9. An accurately preorganized IRES RNA structure enables eIF4G capture for initiation of viral translation.

    PubMed

    Imai, Shunsuke; Kumar, Parimal; Hellen, Christopher U T; D'Souza, Victoria M; Wagner, Gerhard

    2016-09-01

    Many viruses bypass canonical cap-dependent translation in host cells by using internal ribosomal entry sites (IRESs) in their transcripts; IRESs hijack initiation factors for the assembly of initiation complexes. However, it is currently unknown how IRES RNAs recognize initiation factors that have no endogenous RNA binding partners; in a prominent example, the IRES of encephalomyocarditis virus (EMCV) interacts with the HEAT-1 domain of eukaryotic initiation factor 4G (eIF4G). Here we report the solution structure of the J-K region of this IRES and show that its stems are precisely organized to position protein-recognition bulges. This multisite interaction mechanism operates on an all-or-nothing principle in which all domains are required. This preorganization is accomplished by an 'adjuster module': a pentaloop motif that acts as a dual-sided docking station for base-pair receptors. Because subtle changes in the orientation abrogate protein capture, our study highlights how a viral RNA acquires affinity for a target protein. PMID:27525590

  10. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.

  11. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics. PMID:27497538

  12. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  13. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
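    The third-order-polynomial gain described above can be sketched by choosing coefficients so that the scaled command reaches the motion limit with zero slope at the maximum input. The coefficient choice and the numbers below are illustrative, not the tuning used in the study.

```python
def cubic_gain(x_max, limit):
    """Return a third-order polynomial p(x) = a*x + c*x**3 with
    p(x_max) = limit and p'(x_max) = 0, so commands approach the
    motion-system limit smoothly instead of clipping hard."""
    a = 3.0 * limit / (2.0 * x_max)
    c = -limit / (2.0 * x_max ** 3)
    return lambda x: a * x + c * x ** 3

# Hypothetical numbers: inputs up to 10 units, motion limit 1.5 units.
scale = cubic_gain(x_max=10.0, limit=1.5)
```

    Small inputs pass through nearly linearly (gain a), while large inputs are compressed toward the limit, maximizing the motion cue within the hardware envelope.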

  14. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    from the conclusion. These algorithms allow one to reason accurately with uncertain data. The above environment can replicate state-of-the-art expert system environments, which provides a continuity between current expert systems, which cannot be validated or verified, and future expert systems, which should be both validated and verified.

  15. SU-E-J-97: Quality Assurance of Deformable Image Registration Algorithms: How Realistic Should Phantoms Be?

    SciTech Connect

    Saenz, D; Stathakis, S; Kirby, N; Kim, H; Chen, J

    2015-06-15

    Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy as the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.

  16. Quantum Chemistry on Quantum Computers: A Polynomial-Time Quantum Algorithm for Constructing the Wave Functions of Open-Shell Molecules.

    PubMed

    Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2016-08-18

    Quantum computers can efficiently perform full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct the wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants with only a polynomial number of quantum operations. PMID:27499026

  17. Informatics Methods to Enable Sharing of Quantitative Imaging Research Data

    PubMed Central

    Levy, Mia A.; Freymann, John B.; Kirby, Justin S.; Fedorov, Andriy; Fennessy, Fiona M.; Eschrich, Steven A.; Berglund, Anders E.; Fenstermacher, David A.; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L.; Brown, Bartley J.; Braun, Terry A.; Dekker, Andre; Roelofs, Erik; Mountz, James M.; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-01-01

    Introduction The National Cancer Institute (NCI) Quantitative Research Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing data and to promote reuse of quantitative imaging data in the community. Methods We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. Results There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. Conclusions As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. PMID:22770688

  18. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records

    PubMed Central

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-01-01

    respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). Conclusions A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilisation. The methodology can also be applied to other areas of clinical care. PMID:26297364

  19. A Stochastic Algorithm for Generating Realistic Virtual Interstitial Cell of Cajal Networks.

    PubMed

    Gao, Jerry; Sathar, Shameer; O'Grady, Gregory; Archer, Rosalind; Cheng, Leo K

    2015-08-01

    Interstitial cells of Cajal (ICC) play a central role in coordinating normal gastrointestinal (GI) motility. Depletion of ICC numbers and network integrity contributes to major functional GI motility disorders. However, the mechanisms relating ICC structure to GI function and dysfunction remain unclear, partly because there is a lack of large-scale ICC network imaging data across a spectrum of depletion levels to guide models. Experimental imaging of these large-scale networks remains challenging because of technical constraints, and hence, we propose the generation of realistic virtual ICC networks in silico using the single normal equation simulation (SNESIM) algorithm. ICC network imaging data obtained from wild-type (normal) and 5-HT2B serotonin receptor knockout (depleted ICC) mice were used to inform the algorithm, and the virtual networks generated were assessed using ICC network structural metrics and biophysically-based computational modeling. When the virtual networks were compared to the original networks, there was less than 10% error for four out of five structural metrics and all four functional measures. The SNESIM algorithm was then modified to enable the generation of ICC networks across a spectrum of depletion levels, and as a proof-of-concept, virtual networks were successfully generated with a range of structural and functional properties. The SNESIM and modified SNESIM algorithms, therefore, offer an alternative strategy for obtaining large-scale ICC network imaging data across a spectrum of depletion levels. These models can be applied to accurately inform the physiological consequences of ICC depletion. PMID:25781477

  20. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.

  1. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms developed for use in numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of independent variable used. Accuracies attainable demonstrated by applying algorithms to systems of nonlinear, first-order differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
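    The benefit of an asymptotically correct integrator at large step sizes can be illustrated with the standard exponential-Euler rule on a stiff linear test equation. This is a generic textbook example, not the algorithms developed in the work above.

```python
import math

def exponential_euler(lam, f, y0, t_end, n_steps):
    """Integrate y' = -lam*y + f(t) with exponential Euler:
        y_{n+1} = e^(-lam*h) * y_n + (1 - e^(-lam*h))/lam * f(t_n).
    The decay factor is exact, so the scheme stays stable for
    arbitrarily large steps h, whereas forward Euler blows up
    once lam*h > 2."""
    h = t_end / n_steps
    decay = math.exp(-lam * h)
    y, t = y0, 0.0
    for _ in range(n_steps):
        y = decay * y + (1.0 - decay) / lam * f(t)
        t += h
    return y
```

    With lam = 1000 and only ten steps over the unit interval (lam*h = 100), the scheme still lands on the correct steady state f/lam, illustrating "retention of stability and accuracy when large increments of independent variable used."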

  2. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. . E-mail: algarcia@algarcia.org

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
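    On the continuum side of such a hybrid, the stochastically forced viscous Burgers' equation can be advanced with a simple explicit finite-difference scheme. The sketch below uses illustrative parameters and periodic boundaries, not the paper's algorithm-refinement coupling.

```python
import numpy as np

def stochastic_burgers(n=64, steps=500, nu=0.05, sigma=0.1, dt=1e-3, seed=0):
    """Explicit finite-difference integration of the viscous Burgers
    equation with white-noise forcing, u_t + u u_x = nu u_xx + sigma xi,
    on a periodic unit domain (parameters are illustrative and chosen
    to satisfy the explicit scheme's stability limits)."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = (u
             - dt * u * (up - um) / (2 * dx)              # advection
             + nu * dt * (up - 2 * u + um) / dx ** 2       # diffusion
             + sigma * np.sqrt(dt) * rng.normal(size=n))   # fluctuations
    return u
```

    The sqrt(dt) scaling of the forcing term is what makes the fluctuations consistent across step sizes, the same consistency requirement the paper identifies for coupling the continuum solver to the particle model.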

  3. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating this process by using an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA) and a recent forecasting process (such as data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
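    The validation idea, checking an implemented technique against hand-computed values, can be illustrated with the simplest of the listed techniques: a moving-average forecast scored with an error measure. The function names are illustrative; the study's JAVA implementation is not shown.

```python
def moving_average_forecast(series, window):
    """One-step-ahead forecasts: each forecast is the mean of the
    previous `window` observations."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast-accuracy measure."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Tiny hand-checkable example: forecasts are 11, 12, 12 for the last
# three points, and the resulting MAPE can be verified on paper.
fc = moving_average_forecast([10, 12, 11, 13, 12, 14], 3)
```

    Matching such hand-verified numbers (or SPSS/Excel output) on known inputs is exactly the cross-validation strategy the abstract describes.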

  4. A parallel algorithm for mesh smoothing

    SciTech Connect

    Freitag, L.; Jones, M.; Plassmann, P.

    1999-07-01

    Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two- and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.

  5. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.
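    For reference, the Levinson algorithm that the paper compares against can be sketched as the textbook O(n^2) recursion for symmetric positive definite Toeplitz systems. This is the classical formulation, not the Bareiss elimination whose stability is analyzed in the paper.

```python
import numpy as np

def levinson_solve(r, b):
    """Solve T x = b where T is the symmetric positive definite Toeplitz
    matrix with first row r, via the Levinson recursion: O(n^2) work
    instead of O(n^3) for general Gaussian elimination."""
    r = np.asarray(r, dtype=float)
    b = np.asarray(b, dtype=float) / r[0]
    t = r / r[0]                      # normalize so the diagonal is 1
    n = len(t)
    x = np.array([b[0]])              # solution of the 1x1 leading system
    y = np.array([-t[1]])             # Durbin (Yule-Walker) vector
    beta, alpha = 1.0, -t[1]
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha
        mu = (b[k] - np.dot(t[1:k + 1], x[::-1])) / beta
        x = np.concatenate([x + mu * y[::-1], [mu]])
        if k < n - 1:
            alpha = -(t[k + 1] + np.dot(t[1:k + 1], y[::-1])) / beta
            y = np.concatenate([y + alpha * y[::-1], [alpha]])
    return x
```

    Each pass enlarges the solved leading subsystem by one, reusing the previous solution; Bareiss reaches the same cost by a different elimination pattern, and the paper's contribution is comparing the numerical stability of the two.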

  6. Accurate and efficient spin integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
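    The quaternion machinery underlying such spin integrators can be illustrated with a generic rotation sketch. This is standard quaternion algebra, not gpuSpinTrack's integrator.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion [w, x, y, z] for a rotation of `angle` about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def quat_multiply(q, p):
    """Hamilton product: the composite rotation 'q after p'."""
    w1, v1 = q[0], q[1:]
    w2, v2 = p[0], p[1:]
    return np.concatenate([[w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)])

def rotate_spin(q, s):
    """Rotate spin vector s by quaternion q, via q * (0, s) * conj(q)."""
    qs = np.concatenate([[0.0], s])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qs), q_conj)[1:]
```

    Composing the per-element spin rotations is a single 4-component quaternion product, which is cheaper and better conditioned than multiplying 3x3 rotation matrices; that is one reason quaternions accelerate spin tracking.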

  7. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach. PMID:19163797
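
    The median-based threshold update can be sketched in a few lines; the scaling fraction is an assumed illustrative constant, not a value taken from the paper:

```python
def qrs_threshold(recent_peaks, fraction=0.5):
    """Detection threshold for the next cycle, derived from the median of
    the eight most recently detected R-peak amplitudes.  `fraction` is an
    illustrative scaling constant, not the paper's actual value."""
    last8 = sorted(recent_peaks[-8:])
    n = len(last8)
    median = (last8[n // 2] if n % 2 else
              0.5 * (last8[n // 2 - 1] + last8[n // 2]))
    return fraction * median
```

    Using the median of a short history rather than a running mean makes the threshold robust to a single outlier beat.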

  8. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  9. Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography

    PubMed Central

    Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.

    2008-01-01

    Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm for inferring geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbations induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703

  10. Generating anatomically accurate finite element meshes for electrical impedance tomography of the human head

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Xu, Canhua; Dai, Meng; Fu, Feng; Dong, Xiuzhen

    2013-07-01

    For electrical impedance tomography (EIT) of the brain, the use of an anatomically accurate, patient-specific finite element (FE) mesh has been shown to confer significant improvements in the quality of image reconstruction. However, given the lack of a rapid method to capture the accurate anatomic geometry of the head, generating a patient-specific mesh is time-consuming. In this paper, a modified fuzzy c-means algorithm based on the non-local means method is used to segment the different layers of the head in CT images. The algorithm performed well, accurately recognizing the ventricles and handling noise robustly. The FE mesh established from the segmentation results is validated in computational simulation. This provides a rapid, practicable method for generating patient-specific FE meshes of the human head suitable for brain EIT.
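
    As a baseline for the segmentation step, plain fuzzy c-means on scalar intensities looks as follows (a sketch of the standard algorithm only; the paper's modification adds a non-local-means regularization term, which is omitted here):

```python
def fuzzy_c_means(data, centers, m=2.0, iters=100):
    """Plain fuzzy c-means on scalar data; `centers` is the initial guess.
    Returns the final cluster centers and per-point memberships."""
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # membership of point x in cluster i: inverse-distance weighting
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]  # guard against d = 0
            u.append([1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
                      for i in range(len(centers))])
        # update each center as the membership-weighted mean of the data
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, u
```

    On a bimodal intensity histogram the two centers settle near the two modes, and the soft memberships quantify how ambiguous each voxel is.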

  11. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    USGS Publications Warehouse

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2013-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on Moderate Resolution Imaging Spectroradiometer (MODIS) remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys, with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.

  12. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  13. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and to link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Fewer than 2% of the codes produced by this algorithm are erroneous.
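
    The three steps can be sketched as follows; the codebook phrases, codes, and forbidden pair are hypothetical examples, not entries from the Survey's actual auxiliary files:

```python
def code_lithology(description, codebook, forbidden):
    """Sketch of the three-step "direct method".  `codebook` maps defined
    word combinations to codes; `forbidden` lists code pairs that may not
    co-occur.  Both tables here are hypothetical illustrations."""
    text = " " + description.lower() + " "
    codes = []
    # Step 1: search for defined word combinations (longest phrase first)
    for phrase in sorted(codebook, key=len, reverse=True):
        if phrase in text:
            codes.append(codebook[phrase])
            text = text.replace(phrase, " ")
    # Step 2: delete duplicated codes, keeping first occurrences
    seen = set()
    codes = [c for c in codes if not (c in seen or seen.add(c))]
    # Step 3: correct incorrect code combinations (drop the later code)
    for a, b in forbidden:
        if a in codes and b in codes:
            codes.remove(b)
    return codes

codebook = {"coarse sand": "Z3", "sand": "Z2", "clay": "K1", "peat": "V1"}
forbidden = [("Z3", "Z2")]
print(code_lithology("Coarse sand with some clay, sand lenses",
                     codebook, forbidden))  # → ['Z3', 'K1']
```

    Matching the longest phrases first prevents "sand" from firing inside "coarse sand", which is the main subtlety of the direct method.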

  14. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  15. An Internet enabled impact limiter material database

    SciTech Connect

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-09-01

    This paper presents a detailed explanation of the construction of an internet enabled database, also known as a database driven web site. The data contained in the internet enabled database are impact limiter material and seal properties. The techniques used in constructing the internet enabled database presented in this paper are applicable when information that is changing in content needs to be disseminated to a wide audience.

  16. Fast and accurate database searches with MS-GF+Percolator.

    PubMed

    Granholm, Viktor; Kim, Sangtae; Navarro, José C F; Sjölund, Erik; Smith, Richard D; Käll, Lukas

    2014-02-01

    One can interpret fragmentation spectra stemming from peptides in mass-spectrometry-based proteomics experiments using so-called database search engines. Frequently, one also runs post-processors such as Percolator to assess the confidence, infer unique peptides, and increase the number of identifications. A recent search engine, MS-GF+, has shown promising results, due to a new and efficient scoring algorithm. However, MS-GF+ provides few statistical estimates about the peptide-spectrum matches, hence limiting the biological interpretation. Here, we enabled Percolator processing for MS-GF+ output and observed an increased number of identified peptides for a wide variety of data sets. In addition, Percolator directly reports p values and false discovery rate estimates, such as q values and posterior error probabilities, for peptide-spectrum matches, peptides, and proteins, functions that are useful for the whole proteomics community. PMID:24344789
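
    The q values Percolator reports are defined via the target-decoy false discovery rate estimate; a minimal sketch of the standard computation (not Percolator's semi-supervised SVM machinery):

```python
def qvalues(scores, is_decoy):
    """q value per PSM from target-decoy competition: the minimum
    estimated FDR (#decoys / #targets) over all score thresholds that
    would still accept this PSM.  Plain estimate, for illustration only."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    fdr, dec, tar = [], 0, 0
    for i in order:                      # walk from the best score down
        dec += is_decoy[i]
        tar += not is_decoy[i]
        fdr.append(dec / max(tar, 1))
    q = [0.0] * len(scores)
    best = float("inf")
    for rank in range(len(order) - 1, -1, -1):
        best = min(best, fdr[rank])      # enforce monotonicity from below
        q[order[rank]] = best
    return q
```

    The backward pass makes q values monotone in score, so reporting all PSMs with q below a cutoff controls the FDR at that level.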

  17. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
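
    A plain Metropolis sampler on a finite state space looks as follows; the paper's contribution is to replace the i.i.d. uniform inputs with completely uniformly distributed (CUD) points - note that a naive low-discrepancy sequence such as van der Corput is not sufficient and can bias the chain:

```python
import random

def metropolis(target, n_steps, rng=random.random):
    """Metropolis sampler on states 0..len(target)-1 with a uniform
    proposal, driven by a stream of uniforms from `rng`.  The paper's
    method substitutes a CUD quasi-Monte Carlo stream for these i.i.d.
    uniforms; this sketch uses ordinary pseudorandom inputs."""
    k = len(target)
    state, visits = 0, [0] * k
    for _ in range(n_steps):
        proposal = int(rng() * k)        # symmetric uniform proposal
        if rng() < min(1.0, target[proposal] / target[state]):
            state = proposal             # accept
        visits[state] += 1
    return [v / n_steps for v in visits]
```

    Because the sampler consumes exactly two uniforms per step, swapping the input stream is a one-line change, which is what makes the quasi-Monte Carlo variant easy to study.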

  18. Bright-field quantitative phase microscopy (BFQPM) for accurate phase imaging using conventional microscopy hardware

    NASA Astrophysics Data System (ADS)

    Jenkins, Micah; Gaylord, Thomas K.

    2015-03-01

    Most quantitative phase microscopy methods require the use of custom-built or modified microscopic configurations which are not typically available to most bio/pathologists. There are, however, phase retrieval algorithms which utilize defocused bright-field images as input data and are therefore implementable in existing laboratory environments. Among these, deterministic methods such as those based on inverting the transport-of-intensity equation (TIE) or a phase contrast transfer function (PCTF) are particularly attractive due to their compatibility with Köhler illuminated systems and numerical simplicity. Recently, a new method has been proposed, called multi-filter phase imaging with partially coherent light (MFPI-PC), which alleviates the inherent noise/resolution trade-off in solving the TIE by utilizing a large number of defocused bright-field images spaced equally about the focal plane. Despite greatly improving the state-of-the-art, the method has many shortcomings including the impracticality of high-speed acquisition, inefficient sampling, and attenuated response at high frequencies due to aperture effects. In this report, we present a new method, called bright-field quantitative phase microscopy (BFQPM), which efficiently utilizes a small number of defocused bright-field images and recovers frequencies out to the partially coherent diffraction limit. The method is based on a noise-minimized inversion of a PCTF derived for each finite defocus distance. We present simulation results which indicate nanoscale optical path length sensitivity and improved performance over MFPI-PC. We also provide experimental results imaging live bovine mesenchymal stem cells at sub-second temporal resolution. In all, BFQPM enables fast and accurate phase imaging with unprecedented spatial resolution using widely available bright-field microscopy hardware.

  19. Methodology for Formulating Diesel Surrogate Fuels with Accurate Compositional, Ignition-Quality, and Volatility Characteristics

    SciTech Connect

    Mueller, C. J.; Cannella, W. J.; Bruno, T. J.; Bunting, B.; Dettman, H. D.; Franz, J. A.; Huber, M. L.; Natarajan, M.; Pitz, W. J.; Ratcliff, M. A.; Wright, K.

    2012-06-21

    In this study, a novel approach was developed to formulate surrogate fuels having characteristics that are representative of diesel fuels produced from real-world refinery streams. Because diesel fuels typically consist of hundreds of compounds, it is difficult to conclusively determine the effects of fuel composition on combustion properties. Surrogate fuels, being simpler representations of these practical fuels, are of interest because they can provide a better understanding of fundamental fuel-composition and property effects on combustion and emissions-formation processes in internal-combustion engines. In addition, the application of surrogate fuels in numerical simulations with accurate vaporization, mixing, and combustion models could revolutionize future engine designs by enabling computational optimization for evolving real fuels. Dependable computational design would not only improve engine function, it would do so at significant cost savings relative to current optimization strategies that rely on physical testing of hardware prototypes. The approach in this study utilized the state-of-the-art techniques of ¹³C and ¹H nuclear magnetic resonance spectroscopy and the advanced distillation curve to characterize fuel composition and volatility, respectively. The ignition quality was quantified by the derived cetane number. Two well-characterized, ultra-low-sulfur No.2 diesel reference fuels produced from refinery streams were used as target fuels: a 2007 emissions certification fuel and a Coordinating Research Council (CRC) Fuels for Advanced Combustion Engines (FACE) diesel fuel. A surrogate was created for each target fuel by blending eight pure compounds. The known carbon bond types within the pure compounds, as well as models for the ignition qualities and volatilities of their mixtures, were used in a multiproperty regression algorithm to determine optimal surrogate formulations. The predicted and measured surrogate-fuel properties were

  20. Resource optimization scheme for multimedia-enabled wireless mesh networks.

    PubMed

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young

    2014-01-01

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered resource underutilization issues caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment. PMID:25111241

  1. Resource Optimization Scheme for Multimedia-Enabled Wireless Mesh Networks

    PubMed Central

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md. Jalil; Suh, Doug Young

    2014-01-01

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered resource underutilization issues caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment. PMID:25111241

  2. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and does not add mass to the measured structure. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.

  3. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784
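
    Subpixel refinement of a correlation peak is commonly done by fitting a parabola through the three samples bracketing the integer maximum; a minimal sketch (an illustrative generic technique, not the paper's exact modified Taylor approximation algorithm):

```python
def parabolic_peak_offset(y_left, y_peak, y_right):
    """Sub-pixel offset of a correlation peak, from a parabola fitted
    through the three samples around the integer-pixel maximum.
    Returns a value in (-0.5, 0.5) to add to the integer peak location."""
    denom = y_left - 2.0 * y_peak + y_right
    if denom == 0.0:
        return 0.0                      # flat triple: no refinement possible
    return 0.5 * (y_left - y_right) / denom
```

    For a correlation surface that is locally parabolic the recovery is exact, and the refinement costs three samples instead of upsampling the whole correlation.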

  4. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  5. Accurate multiplex gene synthesis from programmable DNA microchips

    NASA Astrophysics Data System (ADS)

    Tian, Jingdong; Gong, Hui; Sheng, Nijing; Zhou, Xiaochuan; Gulari, Erdogan; Gao, Xiaolian; Church, George

    2004-12-01

    Testing the many hypotheses from genomics and systems biology experiments demands accurate and cost-effective gene and genome synthesis. Here we describe a microchip-based technology for multiplex gene synthesis. Pools of thousands of `construction' oligonucleotides and tagged complementary `selection' oligonucleotides are synthesized on photo-programmable microfluidic chips, released, amplified and selected by hybridization to reduce synthesis errors ninefold. A one-step polymerase assembly multiplexing reaction assembles these into multiple genes. This technology enabled us to synthesize all 21 genes that encode the proteins of the Escherichia coli 30S ribosomal subunit, and to optimize their translation efficiency in vitro through alteration of codon bias. This is a significant step towards the synthesis of ribosomes in vitro and should have utility for synthetic biology in general.

  6. Accurate reactions open up the way for more cooperative societies

    NASA Astrophysics Data System (ADS)

    Vukov, Jeromos

    2014-09-01

    We consider a prisoner's dilemma model where the interaction neighborhood is defined by a square lattice. Players are equipped with basic cognitive abilities such as being able to distinguish their partners, remember their actions, and react to their strategy. By means of their short-term memory, they can remember not only the last action of their partner but the way they reacted to it themselves. This additional accuracy in the memory enables the handling of different interaction patterns in a more appropriate way and this results in a cooperative community with a strikingly high cooperation level for any temptation value. However, the more developed cognitive abilities can only be effective if the copying process of the strategies is accurate enough. The excessive extent of faulty decisions can deal a fatal blow to the possibility of stable cooperative relations.

  7. Toward accurate and fast iris segmentation for iris biometrics.

    PubMed

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed. PMID:19574626
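
    The pulling-and-pushing refinement can be sketched as a force-driven iteration; this is a loose reconstruction from the abstract's description (step size, iteration count, and the exact force law are assumptions):

```python
import math

def pull_push_circle(points, cx, cy, iters=300, step=0.5):
    """Iteratively refine a circle centre against detected edge points.
    Each edge point exerts a Hooke's-law-like restoring force proportional
    to its distance from the current circle: points outside the circle
    pull the centre towards them, points inside push it away."""
    for _ in range(iters):
        dists = [math.hypot(x - cx, y - cy) for x, y in points]
        r = sum(dists) / len(dists)          # radius: mean edge distance
        fx = fy = 0.0
        for (x, y), d in zip(points, dists):
            # force along the unit vector from centre to edge point,
            # with magnitude proportional to the radial misfit (d - r)
            fx += (d - r) * (x - cx) / d
            fy += (d - r) * (y - cy) / d
        cx += step * fx / len(points)
        cy += step * fy / len(points)
    return cx, cy, r
```

    When the edge points lie on a true circle the net force vanishes at the correct centre, so the iteration is a fixed-point scheme for the circle fit.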

  8. Accurate and efficient linear scaling DFT calculations with universal applicability.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Genovese, Luigi; Caliste, Damien; Boulanger, Paul; Goedecker, Stefan; Deutsch, Thierry

    2015-12-21

    Density functional theory calculations are computationally extremely expensive for systems containing many atoms due to their intrinsic cubic scaling. This fact has led to the development of so-called linear scaling algorithms during the last few decades. In this way it becomes possible to perform ab initio calculations for several tens of thousands of atoms within reasonable walltimes. However, even though the use of linear scaling algorithms is physically well justified, their implementation often introduces some small errors. Consequently, most implementations offering such a linear complexity either yield only a limited accuracy or, if one wants to go beyond this restriction, require a tedious fine tuning of many parameters. In our linear scaling approach within the BigDFT package, we were able to overcome this restriction. Using an ansatz based on localized support functions expressed in an underlying Daubechies wavelet basis - which offers ideal properties for accurate linear scaling calculations - we obtain remarkably high accuracy and universal applicability while still keeping the possibility of simulating large systems with linear scaling walltimes and only a moderate demand of computing resources. We prove the effectiveness of our method on a wide variety of systems with different boundary conditions, for single-point calculations as well as for geometry optimizations and molecular dynamics. PMID:25958954

  9. Accurate Langevin approaches to simulate Markovian channel dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2015-12-01

    The stochasticity of ion-channel dynamics is significant for physiological processes on neuronal cell membranes. Microscopic simulations of ion-channel gating with Markov chains can be considered an accurate standard. However, such Markovian simulations are computationally demanding for membrane areas of physiologically relevant sizes, which makes the noise-approximating or Langevin equation methods advantageous in many cases. In this review, we discuss the Langevin-like approaches, including the channel-based and simplified subunit-based stochastic differential equations proposed by Fox and Lu, and the effective Langevin approaches in which colored noise is added to deterministic differential equations. In the framework of Fox and Lu’s classical models, several variants of numerical algorithms, which have been recently developed to improve accuracy as well as efficiency, are also discussed. Through the comparison of different simulation algorithms of ion-channel noise with the standard Markovian simulation, we aim to reveal the extent to which the existing Langevin-like methods approximate the results of Markovian methods. Open questions for future studies are also discussed.
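
    The channel-based Langevin idea can be sketched for a single two-state gating variable (a hedged illustration of the Fox-Lu form discussed in the review; parameter values and the reflecting boundary are illustrative choices):

```python
import math, random

def langevin_channel(alpha, beta, n_ch, dt, steps, x0=0.0, seed=1):
    """Euler-Maruyama integration of a channel-based Langevin equation
    for a two-state (open/closed) cluster of N channels:
        dx = (alpha*(1 - x) - beta*x) dt + sqrt((alpha*(1 - x) + beta*x)/N) dW
    where x is the open fraction.  The noise term shrinks as 1/sqrt(N),
    recovering the deterministic rate equation for large N."""
    rng = random.Random(seed)
    x, traj = x0, []
    for _ in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = math.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / n_ch)
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)        # keep the open fraction physical
        traj.append(x)
    return traj
```

    The clipping at [0, 1] is one of the boundary treatments whose consequences the Langevin literature debates; a Markov-chain simulation of the same N channels is the accuracy standard to compare against.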

  10. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGESBeta

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  11. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  12. Automatically Generated, Anatomically Accurate Meshes for Cardiac Electrophysiology Problems

    PubMed Central

    Prassl, Anton J.; Kickinger, Ferdinand; Ahammer, Helmut; Grau, Vicente; Schneider, Jürgen E.; Hofer, Ernst; Vigmond, Edward J.; Trayanova, Natalia A.

    2010-01-01

    Significant advancements in imaging technology and the dramatic increase in computer power over the last few years laid the groundwork for the construction of anatomically realistic models of the heart at an unprecedented level of detail. To effectively make use of high-resolution imaging datasets for modeling purposes, the imaged objects have to be discretized. This procedure is trivial for structured grids. However, to develop generally applicable heart models, unstructured grids are much preferable. In this study, a novel image-based unstructured mesh generation technique is proposed. It uses the dual mesh of an octree applied directly to segmented 3-D image stacks. The method produces conformal, boundary-fitted, and hexahedra-dominant meshes. The algorithm operates fully automatically with no requirements for interactivity and generates accurate volume-preserving representations of arbitrarily complex geometries with smooth surfaces. The method is very well suited for cardiac electrophysiological simulations. In the myocardium, the algorithm minimizes variations in element size, whereas in the surrounding medium, the element size is grown larger with the distance to the myocardial surfaces to reduce the computational burden. The numerical feasibility of the approach is demonstrated by discretizing and solving the monodomain and bidomain equations on the generated grids for two preparations of high experimental relevance, a left ventricular wedge preparation, and a papillary muscle. PMID:19203877

  13. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing assessment of the often non
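
    The general idea of coupling a global, derivative-free search with a gradient-based local refinement can be sketched as follows. This is an illustration only, not the authors' implementation: it fits a made-up mono-exponential "current" model with a simple particle swarm followed by Gauss-Newton refinement (standing in for the trust-region-reflective step).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ion current" data from a mono-exponential model (illustrative only).
t = np.linspace(0.0, 5.0, 100)
a_true, tau_true = 2.0, 1.3
y = a_true * np.exp(-t / tau_true) + 0.01 * rng.standard_normal(t.size)

def sse(p):
    a, tau = p
    return float(np.sum((a * np.exp(-t / tau) - y) ** 2))

# --- Stage 1: particle swarm global search over box bounds ---
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 5.0])
n_particles = 30
x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()
for _ in range(60):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    cost = np.array([sse(p) for p in x])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

# --- Stage 2: Gauss-Newton local refinement from the swarm's best point ---
p = gbest.copy()
for _ in range(20):
    a, tau = p
    model = a * np.exp(-t / tau)
    r = model - y
    J = np.column_stack([model / a, model * t / tau**2])  # d/da, d/dtau
    p = p - np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), J.T @ r)

print(f"estimated a={p[0]:.3f}, tau={p[1]:.3f}")
```

    The paper's point is that neither stage suffices alone: the swarm alone converges slowly near the optimum, while the gradient-based step alone is sensitive to the initial guess.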

  14. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  15. Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.

    PubMed

    Gordon, H R; Castaño, D J

    1987-06-01

    An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel-reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon' can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined

  16. Accurate camera calibration method specialized for virtual studios

    NASA Astrophysics Data System (ADS)

    Okubo, Hidehiko; Yamanouchi, Yuko; Mitsumine, Hideki; Fukaya, Takashi; Inoue, Seiki

    2008-02-01

    Virtual studio is a popular technology for TV programs that makes it possible to synchronize computer graphics (CG) with real-shot images under camera motion. Because high geometrical matching accuracy between CG and real-shot images is normally not expected from a real-time system, directions are sometimes compromised so that the problem does not become apparent. We therefore developed a hybrid camera calibration method and a CG generating system to achieve accurate geometrical matching of CG and real shot in a virtual studio. Our calibration method is intended for camera systems on platforms and tripods with rotary encoders that can measure pan/tilt angles. To solve for the camera model and initial pose, we enhanced the bundle adjustment algorithm to fit the camera model, using pan/tilt data as known parameters and optimizing all other parameters to be invariant against pan/tilt values. This initialization yields highly accurate camera positions and orientations consistent with any pan/tilt values. We also created a CG generator that implements the lens distortion function with GPU programming. By applying the lens distortion parameters obtained by the camera calibration process, we obtained good compositing results.

  17. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
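
    For context, the asymptotic approximation the paper criticizes is the standard chi-square form of the log-rank test. The sketch below (illustrative, stdlib-only, no censoring, and not ExaLT's exact computation) shows the quantity being approximated: observed versus expected events in one group, normalized by the hypergeometric variance.

```python
import math

def logrank_asymptotic(times1, times2):
    """Standard (asymptotic) two-group log-rank test, no censoring.

    Returns the chi-square statistic and its 1-d.o.f. p-value. This is
    the approximation ExaLT replaces with an exact computation; it can be
    badly calibrated when one group is small.
    """
    events = sorted(set(times1) | set(times2))
    n1, n2 = len(times1), len(times2)
    O1 = E1 = V = 0.0
    for tj in events:
        d1 = sum(1 for s in times1 if s == tj)   # group-1 events at t_j
        d2 = sum(1 for s in times2 if s == tj)
        d, n = d1 + d2, n1 + n2                  # total events / at risk
        if n <= 1:
            break
        O1 += d1
        E1 += d * n1 / n
        V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        n1 -= d1                                 # shrink the risk sets
        n2 -= d2
    chi2 = (O1 - E1) ** 2 / V
    p = math.erfc(math.sqrt(chi2 / 2))           # chi-square(1) tail
    return chi2, p

# Two hypothetical survival-time samples (months); group 2 fares worse.
g1 = [6, 7, 10, 15, 19, 25]
g2 = [1, 2, 3, 3, 4, 5]
stat, p = logrank_asymptotic(g1, g2)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

    With very unbalanced group sizes, the chi-square tail probability computed here can differ substantially from the exact p-value, which is the gap ExaLT closes.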

  18. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J. ); Fink, L.H. ); Balu, N.J. ); Crim, H.G. )

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
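
    The paper's advanced dispatch algorithm is not given here, but the "standard economic dispatch" it is compared against is classically the equal-incremental-cost (lambda iteration) method. A minimal sketch, with made-up quadratic cost coefficients and demand:

```python
# Classic equal-incremental-cost ("lambda iteration") economic dispatch for
# units with quadratic cost curves C_i(P) = a_i + b_i*P + c_i*P^2.
# Coefficients and demand below are illustrative, not from the paper.

units = [  # (b_i, c_i) in $/MWh and $/MW^2h; a_i does not affect dispatch
    (8.0, 0.010),
    (7.5, 0.015),
    (9.0, 0.008),
]
demand = 500.0  # MW

def dispatch(lmbda):
    # At system marginal cost lmbda, each unit runs where b_i + 2*c_i*P_i = lmbda.
    return [max(0.0, (lmbda - b) / (2 * c)) for b, c in units]

# Bisection on lambda until total output meets demand.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sum(dispatch(mid)) < demand:
        lo = mid
    else:
        hi = mid
lmbda = 0.5 * (lo + hi)
outputs = dispatch(lmbda)
print(f"lambda = {lmbda:.3f} $/MWh, outputs = {[round(p, 1) for p in outputs]}")
```

    An algorithm that exploits timely plant performance data, as the paper proposes, would in effect update these cost curves online rather than treating them as static.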

  19. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC appears to be a good compromise.

  20. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  1. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.

  2. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed-based), while the second type is based on simultaneous representation of the tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown-class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with the majority of parameter sets for the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
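
    Both algorithm families start from tree tops detected on a rasterized canopy height model (CHM). As an illustration of that seed-detection step only (synthetic data, pure numpy; the thresholds are assumptions, not the authors' parameters):

```python
import numpy as np

# Synthetic canopy height model (CHM): two Gaussian "crowns" on a 60x60 grid.
yy, xx = np.mgrid[0:60, 0:60]
chm = (18.0 * np.exp(-(((xx - 18) ** 2 + (yy - 20) ** 2) / 40.0))
       + 14.0 * np.exp(-(((xx - 42) ** 2 + (yy - 38) ** 2) / 60.0)))

def tree_tops(chm, min_height=2.0):
    """Tree tops as strict local maxima of the CHM above a height floor.

    This is the seed-detection step shared by watershed-style crown
    delineation; the parameters here are illustrative.
    """
    tops = []
    for i in range(1, chm.shape[0] - 1):
        for j in range(1, chm.shape[1] - 1):
            window = chm[i - 1:i + 2, j - 1:j + 2]
            if (chm[i, j] >= min_height and chm[i, j] == window.max()
                    and np.count_nonzero(window == window.max()) == 1):
                tops.append((i, j))
    return tops

tops = tree_tops(chm)
print(f"detected {len(tops)} tree tops at {tops}")
```

    A watershed-based method would then grow crown segments outward from these seeds on the inverted CHM, which is where sensitivity to parameters and crown-class complexity enters.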

  3. Accurate Determination of Membrane Dynamics with Line-Scan FCS

    PubMed Central

    Ries, Jonas; Chiantia, Salvatore; Schwille, Petra

    2009-01-01

    Here we present an efficient implementation of line-scan fluorescence correlation spectroscopy (i.e., one-dimensional spatio-temporal image correlation spectroscopy) using a commercial laser scanning microscope, which allows the accurate measurement of diffusion coefficients and concentrations in biological lipid membranes within seconds. Line-scan fluorescence correlation spectroscopy is a calibration-free technique. Therefore, it is insensitive to optical artifacts, saturation, or incorrect positioning of the laser focus. In addition, it is virtually unaffected by photobleaching. Correction schemes for residual inhomogeneities and depletion of fluorophores due to photobleaching extend the applicability of line-scan fluorescence correlation spectroscopy to more demanding systems. This technique enabled us to measure accurate diffusion coefficients and partition coefficients of fluorescent lipids in phase-separating supported bilayers of three commonly used raft-mimicking compositions. Furthermore, we probed the temperature dependence of the diffusion coefficient in several model membranes, and in human embryonic kidney cell membranes not affected by temperature-induced optical aberrations. PMID:19254560

  4. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry is moving to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as the cases for double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment, enabling it to operate at higher accuracies, is developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell equations in two dimensions.

  5. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
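
    The combined least-squares-plus-penalty idea can be sketched in one dimension (an illustration, not the paper's 2-D smoothing-element formulation): recover a smooth "strain" field from noisy discrete samples by minimizing a data-fidelity term plus a second-difference penalty, then use the smooth result for differentiation.

```python
import numpy as np

# 1-D analogue of penalized least-squares smoothing: given noisy discrete
# "strain" samples d, find u minimizing ||u - d||^2 + lam * ||D2 u||^2,
# where D2 is the second-difference operator. All values are illustrative.
rng = np.random.default_rng(3)
x = np.linspace(0.0, np.pi, 200)
true = np.sin(x)                       # smooth underlying field
d = true + 0.05 * rng.standard_normal(x.size)

n = x.size
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]  # u_i - 2*u_{i+1} + u_{i+2}

lam = 100.0                            # penalty weight (tuning assumption)
u = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

err_raw = np.abs(d - true).mean()
err_smooth = np.abs(u - true).mean()
print(f"mean error raw: {err_raw:.4f}, smoothed: {err_smooth:.4f}")
```

    Differentiating the smoothed field u (rather than the raw samples d) is what makes the subsequent integration of equilibrium equations well behaved, since differencing noisy data amplifies the noise.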

  6. Accurate and Precise Zinc Isotope Ratio Measurements in Urban Aerosols

    NASA Astrophysics Data System (ADS)

    Weiss, D.; Gioia, S. M. C. L.; Coles, B.; Arnold, T.; Babinski, M.

    2009-04-01

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to Zn sample content of approximately 50 ng due to the combined effect of blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05 per mil per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in δ66Zn ranging between -0.96 and -0.37 per mil in coarse and between -1.04 and 0.02 per mil in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source.

  7. More-Accurate Model of Flows in Rocket Injectors

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  8. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  9. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  10. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  11. The importance of accurate convergence in addressing stereoscopic visual fatigue

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.

    2015-03-01

    Visual fatigue (asthenopia) continues to be a problem in extended viewing of stereoscopic imagery. Poorly converged imagery may contribute to this problem. In 2013, the Author reported that in a study sample a surprisingly high number of 3D feature films released as stereoscopic Blu-rays contained obvious convergence errors.1 The placement of stereoscopic image convergence can be an "artistic" call, but upon close examination, the sampled films seemed to have simply missed their intended convergence location. This failure may be because some stereoscopic editing tools do not have the necessary fidelity to enable a 3D editor to obtain a high degree of image alignment or set an exact point of convergence. Compounding this matter further is the fact that a large number of stereoscopic editors may not believe that pixel-accurate alignment and convergence are necessary. The Author asserts that setting a pixel-accurate point of convergence on an object at the start of any given stereoscopic scene will improve the viewer's ability to fuse the left and right images quickly. The premise is that stereoscopic performance (acuity) increases when an accurately converged object is available in the image for the viewer to fuse immediately. Furthermore, this increased viewer stereoscopic performance should reduce the amount of visual fatigue associated with longer-term viewing because less mental effort will be required to perceive the imagery. To test this concept, we developed special stereoscopic imagery to measure viewer visual performance with and without specific objects for convergence. The Company Team conducted a series of visual tests with 24 participants between 25 and 60 years of age. This paper reports the results of these tests.

  12. Enabling Disabled Persons to Gain Access to Digital Media

    NASA Technical Reports Server (NTRS)

    Beach, Glenn; OGrady, Ryan

    2011-01-01

    A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.

  13. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  14. Accurate restoration of DNA sequences. Progress report

    SciTech Connect

    Churchill, G.A.

    1994-05-01

    The primary objectives of this project are the development of (1) a general stochastic model for DNA sequencing errors, (2) algorithms to restore the original DNA sequence, and (3) statistical methods to assess the accuracy of this restoration. A secondary objective is to develop new algorithms for fragment assembly. Initially, a stochastic model that assumes errors are independent and uniformly distributed will be developed. Generalizations of the basic model will be developed to account for (1) decay of accuracy along fragments, (2) variable error rates among fragments, (3) sequence-dependent errors (e.g., homopolymeric runs), and (4) strand-specific systematic errors (e.g., compressions). The emphasis of this project will be the development of a theoretical basis for determining sequence accuracy. However, new algorithms are proposed and these will be implemented as software (in the C programming language). This software will be tested using real and simulated data. It will be modular in design and will be made available for distribution to the scientific community.
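    The restoration step described here — calling the most likely base at each position given several error-prone reads — can be sketched as a per-column majority vote under the basic independent, uniformly distributed error model. The function name and toy reads below are illustrative, not from the report.

```python
from collections import Counter

def consensus(aligned_reads):
    """Restore a sequence by per-position majority vote over aligned,
    equal-length reads (assumes independent, uniform sequencing errors)."""
    length = len(aligned_reads[0])
    restored = []
    for i in range(length):
        column = [read[i] for read in aligned_reads]
        base, _ = Counter(column).most_common(1)[0]
        restored.append(base)
    return "".join(restored)

# Three reads of the same fragment, each with at most one error.
reads = ["ACGTAC", "ACGTTC", "ACCTAC"]
print(consensus(reads))  # ACGTAC
```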

  15. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    NASA Astrophysics Data System (ADS)

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N} with t_n = (x_n, r_n, y_{n-1}) is a (Triplet) Markov Chain (TMC). TMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMCs. We first propose twelve algorithms for general TMCs. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMCs of classical Kalman-like smoothing algorithms (originally designed for HMCs) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.
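    For context, the classical Rauch-Tung-Striebel (RTS) smoother mentioned in this abstract runs a forward Kalman filter and then a backward correction pass. A minimal sketch for an ordinary linear-Gaussian state-space model follows; the model matrices in the usage example are illustrative, not taken from the paper.

```python
import numpy as np

def rts_smoother(F, Q, H, R, x0, P0, ys):
    """Forward Kalman filter followed by the RTS backward pass for the
    linear-Gaussian model x_{k+1} = F x_k + w, y_k = H x_k + v."""
    n = len(ys)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0, P0
    for y in ys:
        # predict
        xp, Pp = F @ x, F @ P @ F.T + Q
        # update with the new observation
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (y - H @ xp)
        P = (np.eye(len(x0)) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp)
        xs_f.append(x); Ps_f.append(P)
    # backward (RTS) smoothing pass
    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]
    for k in range(n - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_s)

# Illustrative scalar model: nearly constant state observed in noise.
F = np.eye(1); Q = np.array([[1e-4]])
H = np.eye(1); R = np.array([[0.25]])
ys = [np.array([1.0])] * 5
sm = rts_smoother(F, Q, H, R, np.zeros(1), np.eye(1), ys)
print(sm.shape)  # (5, 1)
```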

  16. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  17. AN ACCURATE ALGORITHM FOR NONUNIFORM FAST FOURIER TRANSFORMS (NUFFT). (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  18. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    PubMed

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS. PMID:25610795
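    The kernel-fusion idea underlying this study — one kernel per imaging or clinical modality, combined with nonnegative weights — can be sketched as below. Here the weights are fixed and the per-subject feature blocks are synthetic; actual multi-kernel learning would optimize the weights jointly with the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature blocks for 6 subjects across three modalities
# (e.g., lesion measures, DTI metrics, functional connectivity).
modalities = [rng.normal(size=(6, d)) for d in (4, 10, 8)]

def rbf_kernel(X, gamma=0.1):
    """Gaussian (RBF) kernel matrix over the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Multi-kernel fusion: one kernel per modality, combined with
# nonnegative weights summing to 1 (fixed here for illustration).
weights = np.array([0.5, 0.3, 0.2])
K = sum(w * rbf_kernel(X) for w, X in zip(weights, modalities))
print(K.shape, bool(np.allclose(K, K.T)))  # (6, 6) True
```

The fused matrix K remains a valid (symmetric, positive semidefinite) kernel, so it can be handed to any kernel classifier.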

  19. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis

    PubMed Central

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as a multi-kernel learning and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all of 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. 
Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS. PMID:25610795

  20. Diagnostic medicine: A comprehensive ABCDE algorithm for accurate interpretation of radiology and pathology images and data.

    PubMed

    Zioga, Christina A; Destouni, Chariklia T

    2015-01-01

    A pathway to the procedure of interpreting radiology images or pathology slides is presented. This simplified mnemonic can be used as a memory aid determining the order in which diagnosis should be approached. First, before we place the radiology image in front of the lightbox or the slide under the microscope, we have to be sure that it is adequately labelled and prepared (Correct). It is also necessary to have or gather all available information concerning the patient and, if possible, his full medical history (A, Available Information). Once we come across the image, two fundamental questions should be answered: which part of the body the image concerns and, where applicable, whether the image is adequate (B, Body). Next, we proceed to answer whether we have a neoplastic tissue or not (C, Cancer). We then either form a differential diagnosis list or reach a final diagnosis (D, Diagnosis), which is followed by the writing of the report (E, Exhibit). This series of steps, followed as an ad hoc procedure by most specialists, is important in order to achieve a complete and clear diagnosis and report, which is intended to support optimal clinical practice. This ABCDE concept is a generic standard approach which is not limited to specific specimens and can lead to faster diagnosis with fewer mistakes. PMID:26665217
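    The ABCDE reading order described above is, in effect, an ordered checklist. A minimal sketch of that structure follows; the step descriptions paraphrase the abstract and the helper function is illustrative.

```python
# The ABCDE order from the abstract as an ordered checklist.
ABCDE_STEPS = [
    ("A", "Available Information - gather patient data and history"),
    ("B", "Body - identify the body part and confirm image adequacy"),
    ("C", "Cancer - decide whether the tissue is neoplastic"),
    ("D", "Diagnosis - form a differential list or final diagnosis"),
    ("E", "Exhibit - write the report"),
]

def next_step(completed):
    """Return the first step not yet completed, or None when done."""
    for letter, description in ABCDE_STEPS:
        if letter not in completed:
            return letter, description
    return None

print(next_step({"A", "B"})[0])  # C
```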

  1. Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images.

    PubMed

    Vandeghinste, Bert; Van Holen, Roel; Vanhove, Christian; De Vos, Filip; Vandenberghe, Stefaan; Staelens, Steven

    2014-01-01

    This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n  =  10 kidneys) and [111In]octreotide in mice (n  =  6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to -0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide. PMID:24824961

  2. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  3. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
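    The first- or second-order temperature equations described above amount to fitting a low-degree polynomial to measured color shift versus junction temperature. A sketch with made-up chromaticity data (not the paper's measurements) follows:

```python
import numpy as np

# Illustrative junction temperatures (deg C) and a hypothetical measured
# chromaticity coordinate u' for one LED channel at each temperature.
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
u_prime = np.array([0.2120, 0.2128, 0.2139, 0.2153, 0.2170])

# Second-order model u'(T) = a*T^2 + b*T + c.
coeffs = np.polyfit(temps, u_prime, 2)
predict = np.poly1d(coeffs)

# A feedback controller would use predict(T) to correct drive currents.
print(round(float(predict(50.0)), 4))  # 0.2146
```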

  4. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
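    The beat cue described above is simply the amplitude pulsation of two nearly matched tones, which occurs at their frequency difference. A small sketch with hypothetical string frequencies:

```python
import numpy as np

f1, f2 = 110.0, 110.4  # two nearly matched string frequencies (Hz)

# Superposing the two tones produces an envelope that pulses at |f1 - f2|;
# this slow loudness beating is the temporal cue, requiring no pitch
# discrimination at all.
t = np.linspace(0.0, 5.0, 1000)
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

beat_rate = abs(f1 - f2)
print(round(beat_rate, 3))  # 0.4 Hz -> one loudness pulse every 2.5 s
```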

  5. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters in an involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.

  6. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
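    The mapping decision this abstract describes — balance workload while limiting communication — can be illustrated with a toy cost model for a 1D grid partition. The model below (bottleneck work plus a fixed cost per shared boundary) is a deliberately simple stand-in, not the paper's actual performance model.

```python
def partition_cost(work_per_cell, boundaries, comm_cost=1.0):
    """Estimated parallel time for a 1D grid partition: the maximum
    per-processor work plus a fixed cost per internal boundary.
    (Illustrative cost model only.)"""
    pieces, start = [], 0
    for end in list(boundaries) + [len(work_per_cell)]:
        pieces.append(sum(work_per_cell[start:end]))
        start = end
    # bottleneck processor + communication across internal boundaries
    return max(pieces) + comm_cost * len(boundaries)

work = [1, 1, 4, 4, 1, 1]               # irregular per-cell workload
print(partition_cost(work, [3]))        # 1+1+4 | 4+1+1 -> 6 + 1 = 7
print(partition_cost(work, [2, 4]))     # 2 | 8 | 2 -> 8 + 2 = 10
```

A mapper would search over boundary placements to minimize this estimate; here the even two-way split beats the three-way split because the middle cells dominate the work.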

  7. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  8. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  9. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  10. Electronic Health Record Application Support Service Enablers.

    PubMed

    Neofytou, M S; Neokleous, K; Aristodemou, A; Constantinou, I; Antoniou, Z; Schiza, E C; Pattichis, C S; Schizas, C N

    2015-08-01

    There is a huge need for open source software solutions in the healthcare domain, given the flexibility, interoperability and resource savings characteristics they offer. In this context, this paper presents the development of three open source libraries - Specific Enablers (SEs) for eHealth applications that were developed under the European project titled "Future Internet Social and Technological Alignment Research" (FI-STAR) funded under the "Future Internet Public Private Partnership" (FI-PPP) program. The three SEs developed under the Electronic Health Record Application Support Service Enablers (EHR-EN) correspond to: a) an Electronic Health Record enabler (EHR SE), b) a patient summary enabler based on the EU project "European patient Summary Open Source services" (epSOS SE) supporting patient mobility and the offering of interoperable services, and c) a Picture Archiving and Communications System (PACS) enabler (PACS SE) based on the dcm4che open source system for the support of medical imaging functionality. The EHR SE follows the HL7 Clinical Document Architecture (CDA) V2.0 and supports the Integrating the Healthcare Enterprise (IHE) profiles (recently awarded in Connectathon 2015). These three FI-STAR platform enablers are designed to facilitate the deployment of innovative applications and value added services in the health care sector. They can be downloaded from the FI-STAR catalogue website. Work in progress focuses on validation and evaluation scenarios to prove and demonstrate the usability, applicability and adaptability of the proposed enablers. PMID:26736531

  11. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  12. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. 
Finally, the acceleration
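    The polynomial-regression step described above — fit a smooth function to a particle's reconstructed positions over several recordings, then differentiate for the material derivative — can be sketched as follows. The trajectory data are synthetic (a constant-acceleration track), chosen so the recovered acceleration is known.

```python
import numpy as np

# Hypothetical particle positions (one coordinate) at 7 consecutive
# recordings separated by a fixed time step.
t = np.linspace(0.0, 0.006, 7)           # time of each recording (s)
x = 0.5 * 900.0 * t**2 + 2.0 * t + 0.1   # constant-acceleration track (m)

# Fit a 2nd-order polynomial to the trajectory and differentiate twice;
# regression over the whole observation window suppresses random errors
# compared with pairwise finite differences.
poly = np.polyfit(t, x, 2)
accel = 2.0 * poly[0]   # d^2x/dt^2 of a*t^2 + b*t + c
print(round(accel, 1))  # 900.0
```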

  13. A Cooperative Co-Evolutionary Genetic Algorithm for Tree Scoring and Ancestral Genome Inference.

    PubMed

    Gao, Nan; Zhang, Yan; Feng, Bing; Tang, Jijun

    2015-01-01

    Recent advances of technology have made it easy to obtain and compare whole genomes. Rearrangements of genomes through operations such as reversals and transpositions are rare events that enable researchers to reconstruct deep evolutionary history among species. Some of the popular methods need to search a large tree space for the best scored tree, thus it is desirable to have a fast and accurate method that can score a given tree efficiently. During the tree scoring procedure, the genomic structures of internal tree nodes are also provided, which provide important information for inferring ancestral genomes and for modeling the evolutionary processes. However, computing tree scores and ancestral genomes are very difficult and a lot of researchers have to rely on heuristic methods which have various disadvantages. In this paper, we describe the first genetic algorithm for tree scoring and ancestor inference, which uses a fitness function considering co-evolution, adopts different initial seeding methods to initialize the first population pool, and utilizes a sorting-based approach to realize evolution. Our extensive experiments show that compared with other existing algorithms, this new method is more accurate and can infer ancestral genomes that are much closer to the true ancestors. PMID:26671797
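    A genetic algorithm of the general kind described — a population evolved by selection, crossover and mutation against a fitness function — can be sketched in a few lines. This is a generic skeleton on a toy bit-string objective; the paper's co-evolutionary fitness, seeding methods and sorting-based evolution are considerably richer.

```python
import random

def genetic_search(fitness, genome_len, pop_size=40, generations=60, seed=0):
    """Minimal GA: tournament selection, one-point crossover,
    occasional bit-flip mutation (illustrative skeleton only)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.1:             # mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("one-max").
best = genetic_search(sum, genome_len=20)
print(sum(best))  # typically reaches 20 with these settings
```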

  14. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  15. Development and evaluation of an articulated registration algorithm for human skeleton registration

    NASA Astrophysics Data System (ADS)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were assessed using the Dice similarity index—DSIobserver, DSIatlas, and DSIregister. Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement with DSIatlas of 90 ± 3%. The articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the
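    The Dice similarity index used throughout this evaluation is 2|A ∩ B| / (|A| + |B|) for two binary masks. A small sketch with made-up masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 4x4 "segmentations": 4 voxels vs 6 voxels, 4 of them shared.
m1 = np.zeros((4, 4), int); m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4), int); m2[1:3, 1:4] = 1
print(dice(m1, m2))  # 2*4 / (4+6) = 0.8
```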

  16. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    SciTech Connect

    Lin, Yuan Samei, Ehsan

    2014-02-15

    Purpose: In quantitative myocardial CT perfusion imaging, beam hardening due to dense bone and highly concentrated iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) was presented to eliminate beam hardening artifacts and to improve quantitative CT imaging. Methods: Our algorithm made three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., nonbone regions and noniodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type. Based on these assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backprojection process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources, and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energy. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was also performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced
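The SART update at the heart of pSART can be illustrated on a toy monoenergetic system; the polyenergetic extension described above replaces the forward projection with one built from per-material effective lengths. A minimal sketch on an invented 2-pixel phantom (not the authors' implementation):

```python
import numpy as np

def sart(A, b, iters=200, lam=1.0):
    """Basic (monoenergetic) SART: each voxel update averages the
    row-normalized residuals of all rays passing through it."""
    m, n = A.shape
    x = np.zeros(n)
    row_sums = A.sum(axis=1)  # sum_j a_ij, normalizes each ray's residual
    col_sums = A.sum(axis=0)  # sum_i a_ij, normalizes each voxel's update
    for _ in range(iters):
        residual = (b - A @ x) / np.where(row_sums > 0, row_sums, 1)
        x += lam * (A.T @ residual) / np.where(col_sums > 0, col_sums, 1)
    return x

# Toy system: 2 pixels, 3 rays (two direct, one diagonal).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true          # noiseless "measured" projections
x = sart(A, b)
print(np.round(x, 3))   # -> [2. 3.]
```

In pSART, `A @ x` would be replaced by the polyenergetic projection assembled from the accumulated effective lengths, but the voxel-update structure is the same.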

  17. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence, many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found quickly and are sufficiently accurate for the purpose. In this paper we present an experimental study that demonstrates the suitability of genetic algorithms for the vehicle routing problem.
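A genetic algorithm over a VRP-style permutation encoding can be sketched in a few lines. The instance, operators (order crossover, swap mutation), and parameters below are illustrative choices for a single-vehicle tour, not those of the paper:

```python
import random

random.seed(0)
CUSTOMERS = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 8)]  # invented
DEPOT = (4, 4)

def tour_length(perm):
    """Total Euclidean length of depot -> customers in order -> depot."""
    pts = [DEPOT] + [CUSTOMERS[i] for i in perm] + [DEPOT]
    return sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def order_crossover(p1, p2):
    """Copy a slice from p1, fill remaining genes in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(perm, rate=0.2):
    """Swap two positions with probability `rate`."""
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(len(CUSTOMERS)), len(CUSTOMERS))
       for _ in range(30)]
for _ in range(100):
    pop.sort(key=tour_length)          # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
best = min(pop, key=tour_length)
print(best, round(tour_length(best), 2))
```

A full VRP would additionally split the permutation into capacity-feasible routes per vehicle; the encoding and operators carry over unchanged.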

  18. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables stored in the harmony memory, and the probabilities with which candidate values are drawn, to drive the search toward convergence on an optimum. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This converged optimum was used as the initial value of a fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
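The basic harmony search loop that the paper builds on can be sketched as follows. The objective (a simple sphere function) and the parameter values (HMCR, PAR, bandwidth) are illustrative, standard HS choices, not those used for MRI segmentation:

```python
import random

random.seed(1)
DIM, LOW, HIGH = 3, -5.0, 5.0
HM_SIZE, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.5, 2000

def f(x):
    """Toy objective to minimize: sphere function."""
    return sum(v * v for v in x)

# Harmony memory: a pool of candidate solutions.
memory = [[random.uniform(LOW, HIGH) for _ in range(DIM)]
          for _ in range(HM_SIZE)]
for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:            # draw from memory
            v = random.choice(memory)[d]
            if random.random() < PAR:         # pitch adjustment
                v = min(HIGH, max(LOW, v + random.uniform(-BW, BW)))
        else:                                 # random consideration
            v = random.uniform(LOW, HIGH)
        new.append(v)
    worst = max(range(HM_SIZE), key=lambda i: f(memory[i]))
    if f(new) < f(memory[worst]):             # replace worst harmony
        memory[worst] = new
best = min(memory, key=f)
print(round(f(best), 4))
```

The paper's modification seeds this loop's output into fuzzy clustering rather than using it directly, and uses rough sets to sharpen convergence.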

  19. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables stored in the harmony memory, and the probabilities with which candidate values are drawn, to drive the search toward convergence on an optimum. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This converged optimum was used as the initial value of a fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  20. Utility Energy Services Contracts: Enabling Documents

    SciTech Connect

    2009-05-01

    Utility Energy Services Contracts: Enabling Documents provides materials that clarify the authority for Federal agencies to enter into utility energy services contracts (UESCs), as well as sample documents and resources to ease utility partnership contracting.

  1. An Architecture to Enable Future Sensor Webs

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Caffrey, Robert; Frye, Stu; Grosvenor, Sandra; Hess, Melissa; Chien, Steve; Sherwood, Rob; Davies, Ashley; Hayden, Sandra; Sweet, Adam

    2004-01-01

    A sensor web is a coherent set of distributed 'nodes', interconnected by a communications fabric, that collectively behave as a single dynamic observing system. A 'plug and play' mission architecture enables progressive mission autonomy and rapid assembly and thereby enables sensor webs. This viewgraph presentation addresses: Target mission messaging architecture; Strategy to establish architecture; Progressive autonomy with onboard sensor web; EO-1; Adaptive array antennas (smart antennas) for satellite ground stations.

  2. ISS - Enabling Exploration Through Docking Standards

    NASA Technical Reports Server (NTRS)

    Hatfield, Caris A.

    2011-01-01

    NASA and the ISS partnership are jointly developing a key standard to enable future collaborative exploration. The International Docking System Standard (IDSS) is based on a flight-proven design while incorporating new low-impact technology, which accommodates a wide range of vehicle contact and capture conditions. The standard will be demonstrated early on the ISS; the operational experience gained there will provide the opportunity to refine it.

  3. Accurate LC Peak Boundary Detection for 16O/18O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998

  4. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is represented as a weighted oriented graph, implemented as a plexus in the program view. The paper considers an example application of the algorithm to a typical transparent repeater scheme, and the complexity of the algorithm is calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports, ranked in accordance with their priority. These constraints significantly decrease the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.

  5. New journal: Algorithms for Molecular Biology.

    PubMed

    Morgenstern, Burkhard; Stadler, Peter F

    2006-01-01

    This editorial announces Algorithms for Molecular Biology, a new online open access journal published by BioMed Central. By launching the first open access journal on algorithmic bioinformatics, we provide a forum for fast publication of high-quality research articles in this rapidly evolving field. Our journal will publish thoroughly peer-reviewed papers without length limitations covering all aspects of algorithmic data analysis in computational biology. Publications in Algorithms for Molecular Biology are easy to find, highly visible and tracked by organisations such as PubMed. An established online submission system makes a fast reviewing procedure possible and enables us to publish accepted papers without delay. All articles published in our journal are permanently archived by PubMed Central and other scientific archives. We are looking forward to receiving your contributions. PMID:16722576

  6. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  7. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken on the axis of the solid cylindrical housing using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple correction based on a first- or second-order inertia model. The comparison demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.

  8. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
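The median-function formulation of a monotonicity constraint can be sketched as follows. This is the standard MC-style limiter written via the median of three values (median(a, b, c) = a + minmod(b - a, c - a)), in the spirit of the simplification described above rather than the paper's exact constraint:

```python
def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median(a, b, c):
    """Median of three numbers, written via minmod."""
    return a + minmod(b - a, c - a)

def limited_slope(u_left, u_mid, u_right):
    """Monotonicity-constrained slope for piecewise linear reconstruction:
    the central difference, limited toward twice the minmod of the
    one-sided differences (and toward zero at extrema)."""
    central = 0.5 * (u_right - u_left)
    return median(0.0, central,
                  2.0 * minmod(u_mid - u_left, u_right - u_mid))

# Smooth monotone data: the central slope survives unchanged.
print(limited_slope(1.0, 2.0, 3.0))  # -> 1.0
# Local extremum: the slope is clipped to zero, preserving monotonicity.
print(limited_slope(1.0, 3.0, 2.0))  # -> 0.0
```

Writing the constraint through `median` avoids the case analysis that an explicit min/max formulation would require, which is the coding simplification the abstract alludes to.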

  9. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  10. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  11. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH°(298 K), for a test set of 40 molecules composed of H, C, O, and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  12. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  13. An input shaping controller enabling cranes to move without sway

    SciTech Connect

    Singer, N.; Singhose, W.; Kriikku, E.

    1997-06-01

    A gantry crane at the Savannah River Technology Center was retrofitted with an Input Shaping controller. The controller intercepts the operator's pendant commands and modifies them in real time so that the crane moves without residual sway in the suspended load. Mechanical components on the crane were modified to make it suitable for the anti-sway algorithm. This paper describes the required mechanical modifications to the crane, as well as a new form of Input Shaping that was developed for use on the crane. Experimental results are presented which demonstrate the effectiveness of the new process. Several practical considerations are discussed, including a novel (patent pending) approach for making small, accurate moves without residual oscillations.
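The classic two-impulse zero-vibration (ZV) shaper illustrates the underlying idea: convolving the operator's command with impulses timed to the load's sway period cancels residual oscillation. The sway frequency and command below are invented for illustration; the paper's new shaper form differs from this textbook version:

```python
import math

def zv_shaper(freq_hz, damping):
    """Amplitudes and times of the two ZV impulses for a pendulum-like
    mode with the given natural frequency and damping ratio."""
    wd = 2.0 * math.pi * freq_hz * math.sqrt(1.0 - damping**2)
    K = math.exp(-damping * math.pi / math.sqrt(1.0 - damping**2))
    a1 = 1.0 / (1.0 + K)
    a2 = K / (1.0 + K)
    return [a1, a2], [0.0, math.pi / wd]  # second impulse at half period

def shape_command(command, dt, amps, times):
    """Convolve a sampled command with the impulse sequence."""
    n = len(command)
    shaped = [0.0] * n
    for a, t in zip(amps, times):
        shift = round(t / dt)
        for i in range(n - shift):
            shaped[i + shift] += a * command[i]
    return shaped

amps, times = zv_shaper(freq_hz=0.5, damping=0.0)  # 2 s sway period
step = [1.0] * 40                                  # 4 s step at dt = 0.1 s
shaped = shape_command(step, 0.1, amps, times)
print(amps, times)  # -> [0.5, 0.5] [0.0, 1.0]
```

The shaped command ramps in two half-amplitude steps one half-period apart, so the sway excited by the first step is cancelled by the second.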

  14. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
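The normalized gradient fields (NGF) distance minimized by this method can be sketched compactly: it penalizes misalignment of image edges via the angle between local gradients. The edge parameter `eps` and the test images below are illustrative choices, not the paper's settings:

```python
import numpy as np

def ngf_distance(ref, tmp, eps=0.1):
    """Normalized gradient fields distance between two 2D images:
    mean of 1 - cos^2(angle between regularized gradients)."""
    gr = np.gradient(ref.astype(float))
    gt = np.gradient(tmp.astype(float))
    nr = np.sqrt(gr[0]**2 + gr[1]**2 + eps**2)
    nt = np.sqrt(gt[0]**2 + gt[1]**2 + eps**2)
    cos2 = ((gr[0] * gt[0] + gr[1] * gt[1]) / (nr * nt)) ** 2
    return float(np.mean(1.0 - cos2))

ramp = np.add.outer(np.arange(8.0), np.arange(8.0))       # gradients (1, 1)
anti = np.subtract.outer(np.arange(8.0), np.arange(8.0))  # gradients (1, -1)
print(round(ngf_distance(ramp, ramp), 4))  # near 0: gradients aligned
print(round(ngf_distance(ramp, anti), 4))  # near 1: gradients orthogonal
```

Because only gradient directions matter, NGF tolerates the intensity changes between inhale and exhale CT that would defeat a plain sum-of-squared-differences measure.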

  15. Early identification of potentially salvageable tissue with MRI-based predictive algorithms after experimental ischemic stroke

    PubMed Central

    Bouts, Mark J R J; Tiebosch, Ivo A C W; van der Toorn, Annette; Viergever, Max A; Wu, Ona; Dijkhuizen, Rick M

    2013-01-01

    Individualized stroke treatment decisions can be improved by accurate identification of the extent of salvageable tissue. Magnetic resonance imaging (MRI)-based approaches, including measurement of a 'perfusion-diffusion mismatch' and calculation of infarction probability, allow assessment of tissue-at-risk; however, the ability to explicitly depict potentially salvageable tissue remains uncertain. In this study, five predictive algorithms (generalized linear model (GLM), generalized additive model, support vector machine, adaptive boosting, and random forest) were tested for their potency to depict acute cerebral ischemic tissue that can recover after reperfusion. Acute T2-, diffusion-, and perfusion-weighted MRI, and follow-up T2 maps were collected from rats subjected to right-sided middle cerebral artery occlusion without subsequent reperfusion, for training of the algorithms (Group I), and with spontaneous (Group II) or thrombolysis-induced reperfusion (Group III), to determine infarction probability-based viability thresholds and prediction accuracies. The infarction probability difference between irreversible (i.e., infarcted after reperfusion) and salvageable (i.e., noninfarcted after reperfusion) tissue injury was largest for GLM (20±7%), with the highest accuracy of risk-based identification of acutely ischemic tissue that could recover on subsequent reperfusion (Dice's similarity index=0.79±0.14). Our study shows that assessment of the heterogeneity of infarction probability with MRI-based algorithms enables estimation of the extent of potentially salvageable tissue after acute ischemic stroke. PMID:23571283

  16. Investigation of different sparsity transforms for the PICCS algorithm in small-animal respiratory gated CT.

    PubMed

    Abascal, Juan F P J; Abella, Monica; Sisniega, Alejandro; Vaquero, Juan Jose; Desco, Manuel

    2015-01-01

    Respiratory gating helps to overcome the problem of breathing motion in cardiothoracic small-animal imaging by acquiring multiple images for each projection angle and then assigning projections to different phases. When this approach is used with a dose similar to that of a static acquisition, a low number of noisy projections are available for the reconstruction of each respiratory phase, thus leading to streak artifacts in the reconstructed images. This problem can be alleviated using a prior image constrained compressed sensing (PICCS) algorithm, which enables accurate reconstruction of highly undersampled data when a prior image is available. We compared variants of the PICCS algorithm with different transforms in the prior penalty function: gradient, unitary, and wavelet transform. In all cases the problem was solved using the Split Bregman approach, which is efficient for convex constrained optimization. The algorithms were evaluated using simulations generated from data previously acquired on a micro-CT scanner following a high-dose protocol (four times the dose of a standard static protocol). The resulting data were used to simulate scenarios with different dose levels and numbers of projections. All compressed sensing methods performed very similarly in terms of noise, spatiotemporal resolution, and streak reduction, and filtered back-projection was greatly improved. Nevertheless, the wavelet domain was found to be less prone to patchy cartoon-like artifacts than the commonly used gradient domain. PMID:25836670
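Within each Split Bregman iteration, the l1 term contributed by the sparsity prior has a closed-form solution: elementwise soft-thresholding (shrinkage) of the transform coefficients. The step is identical whether the transform is the gradient, the unitary, or the wavelet transform compared above; the coefficients and threshold here are illustrative:

```python
import numpy as np

def shrink(x, threshold):
    """Soft-thresholding: elementwise solution of
    argmin_d  threshold*|d| + 0.5*(d - x)^2."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

coeffs = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
# Large coefficients are shrunk toward zero by the threshold;
# coefficients below the threshold are set to zero.
print(shrink(coeffs, 0.5))
```

This is why the choice of transform matters: shrinkage zeroes small coefficients, so artifacts concentrate in whatever structures the transform represents poorly (the patchy look of gradient-domain priors versus the smoother wavelet-domain results noted above).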

  17. An overview of the CATS level 1 processing algorithms and data products

    NASA Astrophysics Data System (ADS)

    Yorks, J. E.; McGill, M. J.; Palm, S. P.; Hlavka, D. L.; Selmer, P. A.; Nowottnick, E. P.; Vaughan, M. A.; Rodier, S. D.; Hart, W. D.

    2016-05-01

    The Cloud-Aerosol Transport System (CATS) is an elastic backscatter lidar that was launched on 10 January 2015 to the International Space Station (ISS). CATS provides both space-based technology demonstrations for future Earth Science missions and operational science measurements. This paper outlines the CATS Level 1 data products and processing algorithms. Initial results and validation data demonstrate the ability to accurately detect optically thin atmospheric layers with 1064 nm nighttime backscatter as low as 5.0 × 10^-5 km^-1 sr^-1. This sensitivity, along with the orbital characteristics of the ISS, enables the use of CATS data for cloud and aerosol climate studies. The near-real-time downlinking and processing of CATS data are unprecedented capabilities and provide data that have applications such as forecasting of volcanic plume transport for aviation safety and aerosol vertical structure that will improve air quality health alerts globally.

  18. Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection

    PubMed Central

    Offman, Marc N; Tournier, Alexander L; Bates, Paul A

    2008-01-01

    Background: Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results: In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends, and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion: This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets, compared to 25% for a normal GA. PMID:18673557

  19. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  20. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  1. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  2. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  3. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core-scale heterogeneity is applied to investigate the factors responsible for the flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core-scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, the flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, even for geologically complex rocks, when accurate three-dimensional models are used.
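
    The steady-state interpretation step mentioned above reduces to Darcy's law applied to each phase. A minimal sketch, assuming SI units; the function name and the hypothetical coreflood numbers are illustrative, not values from the paper:

```python
def steady_state_kr(q, mu, L, k_abs, A, dP):
    """Effective relative permeability of one phase from steady-state
    coreflood data via Darcy's law: kr = q*mu*L / (k_abs*A*dP).
    q: phase flow rate [m^3/s], mu: viscosity [Pa.s], L: core length [m],
    k_abs: absolute permeability [m^2], A: cross-section [m^2], dP: pressure drop [Pa].
    """
    return (q * mu * L) / (k_abs * A * dP)

# Hypothetical single-phase check for a 10 cm core: should recover kr = 1.
kr_w = steady_state_kr(q=1e-7, mu=1e-3, L=0.1, k_abs=1e-13, A=1e-3, dP=1e5)
```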

  4. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which the limits of human eye tracking are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
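
    The objective named above, l1-regularized least squares, can be illustrated with plain subgradient descent. This is a generic sketch of the objective only, not the paper's subgradient projection method or its LCD blur operator; the matrix A, the sparse signal, and all constants are invented for illustration:

```python
import numpy as np

def l1_least_squares(A, b, lam=0.01, step=0.01, iters=3000):
    """Minimize ||A x - b||_2^2 + lam * ||x||_1 by subgradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b) + lam * np.sign(x)  # a valid subgradient
        x -= step * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10)) / np.sqrt(50)   # stand-in for a blur operator
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -1.0              # sparse "sharp" signal
b = A @ x_true
x_hat = l1_least_squares(A, b)
```

    The l1 penalty biases the estimate slightly toward zero, which is why a small regularization weight is used here.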

  5. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
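
    The 2σ configuration can be caricatured as a running significance test on the flash-rate trend: flag a jump when the latest change in flash rate exceeds the recent mean change by more than two standard deviations. A simplified sketch under assumed inputs (a list of per-minute flash rates); the operational algorithm adds fixed averaging periods and activity thresholds not modeled here:

```python
import statistics

def lightning_jump_2sigma(flash_rates, window=5):
    """Return True when the most recent flash-rate change exceeds the
    mean of the preceding `window` changes by more than 2 sigma."""
    changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    if len(changes) < window + 1:
        return False                      # not enough history yet
    history, latest = changes[-(window + 1):-1], changes[-1]
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    return latest > mu + 2 * sigma

steady = [10, 11, 10, 12, 11, 12, 11]     # fluctuating but trendless
jump   = [10, 11, 10, 12, 11, 12, 40]     # sudden total-lightning surge
```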

  6. An Enabling Technology for New Planning and Scheduling Paradigms

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth

    2004-01-01

    The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that are to be scheduled. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next, a cadre with knowledge of the vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, the models are modified to be compatible with the scheduling engine. Then the models are submitted to the scheduling engine for automatic scheduling or, when requirements are expressed in notes, the timeline is built manually. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components: (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphical methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the

  7. Protective jacket enabling decision support for workers in cold climate.

    PubMed

    Seeberg, Trine M; Vardoy, Astrid-Sofie B; Austad, Hanne O; Wiggen, Oystein; Stenersen, Henning S; Liverud, Anders E; Storholmen, Tore Christian B; Faerevik, Hilde

    2013-01-01

    The cold and harsh climate in the High North represents a threat to safety and work performance. The aim of this study was to show that sensors integrated in clothing can provide information that can improve decision support for workers in cold climates without disturbing the user. Here, a wireless demonstrator consisting of a working jacket with integrated temperature, humidity and activity sensors has been developed. Preliminary results indicate that the demonstrator can provide easily accessible information about the thermal conditions at the site of the worker and local cooling effects on the extremities. The demonstrator has the ability to distinguish between activity and rest, and enables implementation of more sophisticated sensor fusion algorithms to assess work load and pre-defined activities. This information can be used from an enhanced safety perspective as an improved tool to advise on outdoor work control for workers in cold climates. PMID:24111230

  8. Development of an Accurate Feed-Forward Temperature Control Tankless Water Heater

    SciTech Connect

    David Yuill

    2008-06-30

    The following document is the final report for DE-FC26-05NT42327: Development of an Accurate Feed-Forward Temperature Control Tankless Water Heater. This work was carried out under a cooperative agreement from the Department of Energy's National Energy Technology Laboratory, with additional funding from Keltech, Inc. The objective of the project was to improve the temperature control performance of an electric tankless water heater (TWH). The reason for doing this is to minimize or eliminate one of the barriers to wider adoption of the TWH. TWHs use less energy than typical (storage) water heaters because of the elimination of standby losses, so wider adoption will lead to reduced energy consumption. The project was carried out by Building Solutions, Inc. (BSI), a small business based in Omaha, Nebraska. BSI partnered with Keltech, Inc., a manufacturer of electric tankless water heaters based in Delton, Michigan. Additional work was carried out by the University of Nebraska and Mike Coward. A background study revealed several advantages and disadvantages of TWHs. Besides using less energy than storage heaters, TWHs provide an endless supply of hot water, have a longer life, use less floor space, can be used at point of use, and are suitable as boosters to enable alternative water heating technologies, such as solar or heat-pump water heaters. Their disadvantages are their higher cost, large instantaneous power requirement, and poor temperature control. A test method was developed to quantify performance under a representative range of disturbances to flow rate and inlet temperature. A device capable of conducting this test was designed and built. Some heaters currently on the market were tested and were found to perform quite poorly. A new controller was designed using model predictive control (MPC). This control method required an accurate dynamic model to be created and required significant tuning to the controller before good control was achieved. The MPC design

  9. Achieving target voriconazole concentrations more accurately in children and adolescents.

    PubMed

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.) PMID:25779580

  10. Achieving Target Voriconazole Concentrations More Accurately in Children and Adolescents

    PubMed Central

    Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, −17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was −0.7% (interquartile range, −7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, −1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was −6.7% (interquartile range, −18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, −13.1 to 18%; P = 0.32) and a dose bias of −3.5% (interquartile range, −18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.) PMID:25779580

  11. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
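
    In the single-site limit, the multi-site proton-binding equilibria such methods solve reduce to the Henderson-Hasselbalch relation. A minimal illustration of that limit only; it stands in for, and is much simpler than, the Generalized Born and mobile-clustering machinery described above:

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch occupancy of a single titratable acid site:
    f = 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

half = protonated_fraction(4.0, 4.0)     # at pH = pKa the site is half occupied
acidic = protonated_fraction(2.0, 4.0)   # well below pKa: almost fully protonated
basic_ = protonated_fraction(7.0, 4.0)   # well above pKa: almost fully deprotonated
```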

  12. New Attitude Sensor Alignment Calibration Algorithms

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Sedlak, Joseph E.; Harman, Richard (Technical Monitor)

    2002-01-01

    Accurate spacecraft attitudes may only be obtained if the primary attitude sensors are well calibrated. Launch shock, relaxation of gravitational stresses and similar effects often produce large enough alignment shifts so that on-orbit alignment calibration is necessary if attitude accuracy requirements are to be met. A variety of attitude sensor alignment algorithms have been developed to meet the need for on-orbit calibration. Two new algorithms are presented here: ALICAL and ALIQUEST. Each of these has advantages in particular circumstances. ALICAL is an attitude independent algorithm that uses near simultaneous measurements from two or more sensors to produce accurate sensor alignments. For each set of simultaneous observations the attitude is overdetermined. The information content of the extra degrees of freedom can be combined over numerous sets to provide the sensor alignments. ALIQUEST is an attitude dependent algorithm that combines sensor and attitude data into a loss function that has the same mathematical form as the Wahba problem. Alignments can then be determined using any of the algorithms (such as the QUEST quaternion estimator) that have been developed to solve the Wahba problem for attitude. Results from the use of these methods on active missions are presented.
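
    The Wahba loss function referred to above has a standard closed-form solution via the singular value decomposition (Markley's SVD method). The sketch below is that textbook solution, shown to make the loss concrete; it is not the ALICAL or ALIQUEST code, and the vectors are synthetic:

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights=None):
    """Find the rotation R minimizing sum_i w_i * ||b_i - R r_i||^2."""
    w = weights if weights is not None else [1.0] * len(body_vecs)
    B = np.zeros((3, 3))
    for wi, b, r in zip(w, body_vecs, ref_vecs):
        B += wi * np.outer(b, r)                      # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # enforce det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Recover a known rotation from two observed directions (noiseless case).
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
bodies = [R_true @ r for r in refs]
R_est = wahba_svd(bodies, refs)
```

    Two linearly independent vector pairs suffice to determine the attitude uniquely, which is why the noiseless example recovers the exact rotation.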

  13. An algorithm to calculate a collapsed arc dose matrix in volumetric modulated arc therapy

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael; Holloway, Lois

    2013-07-15

    Purpose: The delivery of volumetric modulated arc therapy (VMAT) is more complex than other conformal radiotherapy techniques. In this work, the authors present the feasibility of performing routine verification of VMAT delivery using a dose matrix measured by a gantry-mounted 2D ion chamber array and the corresponding dose matrix calculated by an in-house-developed algorithm. Methods: The Pinnacle v9.0 treatment planning system (TPS) was used in this study to generate VMAT plans for a 6 MV photon beam from an Elekta-Synergy linear accelerator. An algorithm was developed and implemented with in-house computer code to calculate the dose matrix resulting from a VMAT arc in a plane perpendicular to the beam at isocenter. The algorithm was validated using measurement of standard patterns and clinical VMAT plans with a 2D ion chamber array. The clinical VMAT plans were also validated using ArcCHECK measurements. The measured and calculated dose matrices were compared using gamma (γ) analysis with 3%/3 mm criteria and a γ tolerance of 1. Results: The dose matrix comparison of standard patterns showed excellent agreement, with a mean γ pass rate of 97.7% (σ = 0.4%). The validation of clinical VMAT plans using the dose matrix predicted by the algorithm and the corresponding measured dose matrices also showed good agreement, with a mean γ pass rate of 97.6% (σ = 1.6%). The validation of clinical VMAT plans using ArcCHECK measurements showed a mean pass rate of 95.6% (σ = 1.8%). Conclusions: The developed algorithm was shown to accurately predict the dose matrix, in a plane perpendicular to the beam, by considering all possible leaf trajectories in a VMAT delivery. This enables the verification of VMAT delivery using a 2D array detector mounted on the treatment head.
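
    The γ comparison with 3%/3 mm criteria can be sketched as a brute-force search over the calculated dose grid. Clinical tools add interpolation and low-dose thresholds omitted here, and the Gaussian dose grid below is synthetic, so this is an illustration of the pass-rate metric rather than the authors' verification code:

```python
import numpy as np

def gamma_pass_rate(measured, calculated, spacing=1.0, dd=0.03, dta=3.0):
    """Global gamma analysis on a 2D dose grid: for each measured point,
    search all calculated points for the best combined dose-difference /
    distance-to-agreement score; report the % of points with gamma <= 1."""
    ny, nx = measured.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    norm = dd * calculated.max()                  # global dose criterion
    gammas = np.empty_like(measured, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing ** 2
            dose2 = (calculated - measured[iy, ix]) ** 2
            gammas[iy, ix] = np.sqrt(np.min(dist2 / dta ** 2 + dose2 / norm ** 2))
    return np.mean(gammas <= 1.0) * 100.0

# Synthetic Gaussian dose blob; a uniform 1% dose error passes 3%/3 mm everywhere.
dose = np.fromfunction(lambda y, x: np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 8.0), (11, 11))
rate = gamma_pass_rate(dose, dose * 1.01)
```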

  14. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  15. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. Our current priority is the algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.

  16. Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction

    NASA Astrophysics Data System (ADS)

    Junkin, Gary

    2013-04-01

    With a view to using Phase Retrieval to accurately predict the Terahertz antenna far field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates, including defocus and astigmatic phase errors, based only on first- and second-moment calculations. The second is a powerful noise-tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) method with Fienup's Hybrid Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real-time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.
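
    The alternating-projection core that PTP/HIO-style phase retrieval builds on is the classic Gerchberg-Saxton iteration between two magnitude-constrained planes. A toy sketch on a synthetic defocused aperture, with the two planes linked by a plain FFT; the paper's algorithm instead propagates between closely spaced near-field planes and adds the HIO and SOR refinements:

```python
import numpy as np

def gerchberg_saxton(mag_near, mag_far, iters=200, seed=0):
    """Alternately enforce measured magnitudes in two Fourier-conjugate
    planes, keeping the current phase estimate each time."""
    rng = np.random.default_rng(seed)
    g = mag_near * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, mag_near.shape))
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = mag_far * np.exp(1j * np.angle(G))    # impose far-plane magnitude
        g = np.fft.ifft2(G)
        g = mag_near * np.exp(1j * np.angle(g))   # impose near-plane magnitude
    return g

# Synthetic Gaussian aperture with a quadratic (defocus-like) phase.
n = 32
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
field = np.exp(-(x ** 2 + y ** 2) / 50.0) * np.exp(1j * 0.02 * (x ** 2 + y ** 2))
near = np.abs(field)
far = np.abs(np.fft.fft2(field))
rec = gerchberg_saxton(near, far)
err = np.linalg.norm(np.abs(np.fft.fft2(rec)) - far) / np.linalg.norm(far)
```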

  17. Improved local linearization algorithm for solving the quaternion equations

    NASA Technical Reports Server (NTRS)

    Yen, K.; Cook, G.

    1980-01-01

    The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
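
    For context, the quaternion rate equations admit a closed-form propagation step when the angular velocity is held constant over each sample interval, which is the kind of step that local linearization methods refine. A generic sketch (scalar-first quaternion convention assumed; not the authors' algorithm), contrasted with forward Euler's well-known norm drift:

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix such that q_dot = 0.5 * Omega(w) @ q (scalar-first q)."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def quat_step(q, w, dt):
    """Exact step for constant w over dt, using Omega^2 = -|w|^2 I:
    exp(0.5*Omega*dt) = cos(a)*I + (sin(a)/|w|)*Omega, a = |w|*dt/2."""
    wn = np.linalg.norm(w)
    if wn == 0.0:
        return q
    a = 0.5 * wn * dt
    return (np.cos(a) * np.eye(4) + (np.sin(a) / wn) * omega_matrix(w)) @ q

w = np.array([0.0, 0.0, 1.0])            # 1 rad/s about the z axis
q = np.array([1.0, 0.0, 0.0, 0.0])
qe = q.copy()
for _ in range(100):                     # integrate over 1 s with dt = 0.01
    q = quat_step(q, w, 0.01)
    qe = qe + 0.5 * 0.01 * omega_matrix(w) @ qe   # forward Euler: norm drifts
```

    After one second the closed-form steps give the exact attitude quaternion (cos 0.5, 0, 0, sin 0.5) with unit norm, while the Euler quaternion's norm has grown and would need renormalization.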

  18. Accurate molecular classification of cancer using simple rules

    PubMed Central

    Wang, Xiaosheng; Gotoh, Osamu

    2009-01-01

    Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction. PMID:19874631
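
    A toy version of a one-gene decision rule with leave-one-out cross-validation shows how little machinery such classifiers need. The midpoint threshold search below is a simple stand-in for the rough-set "depended degree" selection, and the expression values are invented:

```python
def learn_rule(xs, ys):
    """Pick the midpoint threshold and direction with fewest training errors
    for the rule: predict class 1 when direction*(x - t) > 0."""
    pts = sorted(xs)
    best = None
    for t in [(a + b) / 2 for a, b in zip(pts, pts[1:])]:
        for direction in (1, -1):
            errs = sum(1 for x, y in zip(xs, ys)
                       if (1 if direction * (x - t) > 0 else 0) != y)
            if best is None or errs < best[0]:
                best = (errs, t, direction)
    return best[1], best[2]

def loocv_accuracy(xs, ys):
    """Leave-one-out cross-validation of the single-gene rule."""
    hits = 0
    for i in range(len(xs)):
        t, d = learn_rule(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        pred = 1 if d * (xs[i] - t) > 0 else 0
        hits += (pred == ys[i])
    return hits / len(xs)

# Toy "marker gene" expression data, cleanly separated between classes.
expr = [0.2, 0.4, 0.3, 0.9, 1.1, 1.0]
cls = [0, 0, 0, 1, 1, 1]
acc = loocv_accuracy(expr, cls)
```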

  19. New Labour and the enabling state.

    PubMed

    Taylor, Ian

    2000-11-01

    The notion of the 'enabling state' gained currency in the UK during the 1990s as an alternative to the 'providing' or the welfare state. It reflected the process of contracting out in the NHS and compulsory competitive tendering (CCT) in local government during the 1980s, but was also associated with developments during the 1990s in health, social care and education in particular. The creation of an internal market in the NHS and the associated purchaser-provider split appeared to transfer 'ownership' of services increasingly to the providers - hospitals, General Practitioners (GPs) and schools. The mixed economy of care that was stimulated by the 1990 NHS and Community Care Act appeared to offer local authorities the opportunity to enable non state providers to offer care services in the community. The new service charters were part of the enablement process because they offered users more opportunity to influence provision. This article examines how far service providers were enabled and assesses the extent to which new Labour's policies enhance or reject the 'enabling state' in favour of more direct provision. PMID:11560707

  20. On enabling secure applications through off-line biometric identification

    SciTech Connect

    Davida, G.I.; Frankel, Y.; Matt, B.J.

    1998-04-01

    In developing secure applications and systems, the designers often must incorporate secure user identification in the design specification. In this paper, the authors study secure off-line authenticated user identification schemes based on a biometric system that can measure a user's biometric accurately (up to some Hamming distance). The schemes presented here enhance identification and authorization in secure applications by binding a biometric template with authorization information on a token such as a magnetic strip. Also developed here are schemes specifically designed to minimize the compromise of a user's private biometrics data, encapsulated in the authorization information, without requiring secure hardware tokens. In this paper the authors furthermore study the feasibility of biometrics performing as an enabling technology for secure system and application design. The authors investigate a new technology which allows a user's biometrics to facilitate cryptographic mechanisms.
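
    The schemes' working assumption, that a fresh biometric reading matches its enrolled template up to some Hamming distance, can be stated in a few lines. The bit strings and tolerance below are illustrative only:

```python
def hamming(a, b):
    """Number of differing positions between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def authenticate(template, reading, tolerance):
    """Accept when the reading is within the Hamming tolerance of the
    enrolled template, modeling a sensor that is accurate 'up to some
    Hamming distance'."""
    return hamming(template, reading) <= tolerance

enrolled = "1011001110001011"
good     = "1011001110001111"   # one bit flipped by sensor noise
impostor = "0100110001110100"
```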

  1. Simple, fast and accurate eight points amplitude estimation method of sinusoidal signals for DSP based instrumentation

    NASA Astrophysics Data System (ADS)

    Vizireanu, D. N.; Halunga, S. V.

    2012-04-01

    A simple, fast and accurate amplitude estimation algorithm of sinusoidal signals for DSP based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed when the sampled signal is affected by white Gaussian noise and when the samples are quantized on a given number of bits.
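
    For a sinusoid sampled uniformly over an integer number of periods, eight samples already determine the amplitude exactly through the DFT-bin magnitude, even in the presence of a DC offset. This is a textbook estimator shown for context; it is not the paper's two-step eight-point formula, and the signal parameters are invented:

```python
import math

def amplitude_dft(samples, k=1):
    """Amplitude of a sinusoid with exactly k cycles across the N samples,
    via the magnitude of DFT bin k: A = (2/N) * |X[k]|."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
    return 2.0 / n * math.hypot(c, s)

# Eight samples of a 3.7-amplitude sinusoid with arbitrary phase and a DC offset.
samples = [3.7 * math.sin(2 * math.pi * i / 8 + 0.4) + 1.0 for i in range(8)]
A = amplitude_dft(samples)
```

    The DC offset falls in bin 0 and therefore does not bias the bin-1 estimate, which is why the recovered amplitude is exact for noiseless samples.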

  2. Bicluster Sampled Coherence Metric (BSCM) provides an accurate environmental context for phenotype predictions

    PubMed Central

    2015-01-01

    Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition and cluster size specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257
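
    The idea of a sampled coherence p-value can be sketched as a permutation test on the mean pairwise correlation of a proposed gene set. This is a toy analogue only, since BSCM's actual statistic is condition- and cluster-size specific; the data below are synthetic:

```python
import numpy as np

def coherence(rows):
    """Mean pairwise Pearson correlation of the rows of a candidate bicluster."""
    c = np.corrcoef(rows)
    n = len(rows)
    return (c.sum() - n) / (n * (n - 1))

def coherence_p_value(data, member_idx, n_perm=500, seed=1):
    """Empirical p-value: how often a random gene set of the same size
    looks at least as coherent as the proposed bicluster."""
    rng = np.random.default_rng(seed)
    obs = coherence(data[member_idx])
    hits = sum(
        coherence(data[rng.choice(len(data), size=len(member_idx), replace=False)]) >= obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 10))          # 20 genes x 10 conditions of noise
signal = rng.normal(size=10)
members = [0, 3, 7, 11, 15]
for g in members:
    data[g] = signal + 0.1 * rng.normal(size=10)   # five co-expressed genes
p = coherence_p_value(data, members)
```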

  3. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
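
    The architecture described above can be sketched minimally: a hidden layer that receives its own one-step-delayed activations, iterated on its own predictions for multi-step forecasting. This is a sketch under assumed shapes and tanh activations; `rmlp_step` and `predict_ahead` are illustrative names, not code from the work.

```python
import numpy as np

def rmlp_step(x, h_prev, W_in, W_rec, W_out, b_h, b_o):
    """One time step of a minimal recurrent multilayer perceptron:
    the hidden layer receives the current input plus its own
    activations delayed by one step (self-feedback and cross-talk)."""
    h = np.tanh(W_in @ x + W_rec @ h_prev + b_h)  # delayed hidden feedback
    y = W_out @ h + b_o                           # linear output layer
    return y, h

def predict_ahead(x0, h0, steps, params):
    """Iterate the network on its own predictions, as in multi-step
    forecasting; backpropagating through this chain gives the
    recursive gradient rule sketched in the abstract."""
    W_in, W_rec, W_out, b_h, b_o = params
    x, h, outs = x0, h0, []
    for _ in range(steps):
        y, h = rmlp_step(x, h, W_in, W_rec, W_out, b_h, b_o)
        outs.append(y)
        x = y  # prediction fed back as the next input
    return outs
```

    Because each prediction depends on the previous one, the gradient of the loss at step t includes terms flowing back through all earlier predictions, which is the recursion the training algorithm exploits.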

  4. Accurate, fully-automated NMR spectral profiling for metabolomics.

    PubMed

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C; Mandal, Rupasri; Grant, Jason R; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving >50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic, publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR in

  5. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  6. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environmental information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain in understanding system-level behaviors from molecular-level knowledge of biology and in unraveling possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent work on understanding the chemical sensory system of E. coli, combining theoretical approaches with quantitative experiments. We focus on how cells process chemical information and adapt to varying environments, and on the thermodynamic limits of key regulatory functions, such as adaptation.

  7. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam

    2014-12-01

    The objective of this paper is to analyze the vibration of a conservative nonlinear oscillator of the form u'' + lambda*u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results from the literature show that it gives accurate results.
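
    As a reference for the oscillator form quoted above, a standard fourth-order Runge-Kutta integration can be used. This is a generic numerical check, not the algebraic-equation method the paper proposes; the parameter values are arbitrary.

```python
import numpy as np

def rk4_oscillator(lam=1.0, eps=0.1, n=1, m=1, u0=1.0, v0=0.0,
                   dt=1e-3, steps=5000):
    """Classical RK4 integration of the conservative oscillator
    u'' + lam*u + u**(2n-1) + sqrt(1 + eps**2 * u**(4m)) = 0,
    written exactly as quoted in the abstract above. Returns the
    trajectory of (u, u') at each step."""
    def acc(u):
        return -(lam * u + u ** (2 * n - 1)
                 + np.sqrt(1.0 + eps ** 2 * u ** (4 * m)))

    def f(state):
        u, v = state
        return np.array([v, acc(u)])

    s = np.array([u0, v0])
    traj = [s.copy()]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s.copy())
    return np.array(traj)
```

    Comparing such a brute-force trajectory against the paper's algebraic solution is the usual way to verify accuracy claims for a new oscillator method.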

  8. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  9. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  10. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  11. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  12. Extremely Accurate On-Orbit Position Accuracy using TDRSS

    NASA Technical Reports Server (NTRS)

    Stocklin, Frank; Toral, Marco; Bar-Sever, Yoaz; Rush, John

    2006-01-01

    NASA is planning to launch a new service for Earth satellites providing them with precise GPS differential corrections and other ancillary information enabling decimeter level orbit determination accuracy and nanosecond time-transfer accuracy, onboard, in real-time. The TDRSS Augmentation Service for Satellites (TASS) will broadcast its message on the S-band multiple access forward channel of NASA's Tracking and Data Relay Satellite System (TDRSS). The satellite's phased array antenna has been configured to provide a wide beam, extending coverage up to 1000 km altitude over the poles. Global coverage will be ensured with broadcast from three or more TDRSS satellites. The GPS differential corrections are provided by the NASA Global Differential GPS (GDGPS) System, developed and operated by JPL. The GDGPS System employs a global ground network of more than 70 GPS receivers to monitor the GPS constellation in real time. The system provides real-time estimates of the GPS satellite states, as well as many other real-time products such as differential corrections, global ionospheric maps, and integrity monitoring. The unique multiply redundant architecture of the GDGPS System ensures very high reliability, with 99.999% demonstrated since the inception of the system in early 2000. The estimated real time GPS orbit and clock states provided by the GDGPS system are accurate to better than 20 cm 3D RMS, and have been demonstrated to support sub-decimeter real time positioning and orbit determination for a variety of terrestrial, airborne, and spaceborne applications. In addition to the GPS differential corrections, TASS will provide real-time Earth orientation and solar flux information that enable precise onboard knowledge of the Earth-fixed position of the spacecraft, and precise orbit prediction and planning capabilities. TASS will also provide alarms within 5 seconds for GPS integrity failures, based on the unique GPS integrity monitoring service of the GDGPS System.

  13. Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements

    SciTech Connect

    Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel

    2010-11-10

    In this work, we have developed an algorithm that differs from the classical ones used in phase-shifting interferometry. Classical algorithms typically use constant or homogeneous phase displacements, and they can be made quite accurate and insensitive to detuning by taking appropriate weight factors in the formula used to recover the wrapped phase. However, these algorithms have not been formulated for variable or inhomogeneous displacements. We have generalized these formulas, obtaining expressions for an implementation with variable displacements and ways to construct algorithms that are partially insensitive to these arbitrary error shifts.
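
    The handling of nonconstant phase shifts can be sketched with the generalized least-squares approach: writing I_i = a + b*cos(phi + delta_i) and expanding the cosine makes the model linear in the unknowns, so arbitrary known shifts delta_i are handled directly. This is the textbook generalized scheme, not the authors' specific weighted formula.

```python
import numpy as np

def phase_from_shifts(I, deltas):
    """Least-squares wrapped-phase recovery from interferogram samples
    with arbitrary (possibly inhomogeneous) phase shifts deltas[i]:
        I_i = a + b*cos(phi + delta_i)
          = a + (b*cos(phi))*cos(delta_i) - (b*sin(phi))*sin(delta_i),
    which is linear in the unknowns (a, b*cos(phi), b*sin(phi))."""
    I = np.asarray(I, float)
    d = np.asarray(deltas, float)
    A = np.column_stack([np.ones_like(d), np.cos(d), -np.sin(d)])
    a, x, y = np.linalg.lstsq(A, I, rcond=None)[0]
    return np.arctan2(y, x)  # wrapped phase in (-pi, pi]
```

    At least three distinct shifts are required for the system to be solvable; with more, the least-squares fit averages down shift errors, which is where weight-factor designs like the one above come in.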

  14. Implementation and analysis of a fast backprojection algorithm

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Majumder, Uttam K.; Buxa, Peter; Backues, Mark J.; Lindgren, Andrew C.

    2006-05-01

    The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational costs. Therefore, significant research has been conducted for a fast backprojection algorithm, which surrenders some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.
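
    For orientation, the standard (slow) backprojection loop that the fast algorithm approximates looks like this in outline: for every image pixel, each range-compressed pulse is interpolated at the pixel's range to the platform and accumulated. This is a 2D toy with invented array layouts, and it omits the carrier phase compensation a real SAR implementation requires.

```python
import numpy as np

def backproject(pulses, platform_pos, ranges_axis, grid_x, grid_y):
    """Naive backprojection sketch: O(N_pulses * N_pixels), which is
    the cost the fast algorithm reduces via subaperture
    approximations at some expense in image quality."""
    img = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for p, pos in zip(pulses, platform_pos):
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                # Range from this pixel to the platform position
                r = np.hypot(x - pos[0], y - pos[1])
                # Interpolate the range-compressed pulse at that range
                img[iy, ix] += (np.interp(r, ranges_axis, p.real)
                                + 1j * np.interp(r, ranges_axis, p.imag))
    return img
```

    Each pulse smears its energy along a circle of constant range; summing over many platform positions makes the circles intersect, and hence focus, only at true scatterer locations.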

  15. Dynamic gate algorithm for multimode fiber Bragg grating sensor systems.

    PubMed

    Ganziy, D; Jespersen, O; Woyessa, G; Rose, B; Bang, O

    2015-06-20

    We propose a novel dynamic gate algorithm (DGA) for precise and accurate peak detection. The algorithm uses a threshold-determined detection window and center of gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of the signal-to-noise ratio and different peak shapes. Our simulations and experiments demonstrate that the DGA method is fast and robust with better stability and accuracy than conventional algorithms. This makes it very attractive for future implementation in sensing systems, especially based on multimode fiber Bragg gratings. PMID:26193010
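
    The threshold-determined window plus center-of-gravity step can be sketched as follows. The bias compensation mentioned in the abstract is omitted here, and the function name and the 50% relative threshold are assumptions for illustration.

```python
import numpy as np

def dga_peak(wavelengths, intensities, rel_threshold=0.5):
    """Sketch of a dynamic-gate peak fit: the detection window (gate)
    is the set of samples exceeding a threshold relative to the peak
    maximum, and the peak wavelength is the intensity-weighted center
    of gravity inside that window."""
    intensities = np.asarray(intensities, float)
    wavelengths = np.asarray(wavelengths, float)
    peak = intensities.max()
    gate = intensities >= rel_threshold * peak   # threshold-determined window
    w = intensities[gate] - rel_threshold * peak  # baseline-shifted weights
    return np.sum(wavelengths[gate] * w) / np.sum(w)
```

    Because the gate follows the peak rather than a fixed wavelength band, the estimate stays robust when the Bragg peak drifts or changes shape, which is the property the DGA exploits for multimode gratings.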

  16. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  17. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  18. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
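
    The radiative transfer step described above, which converts per-layer absorption into a total opacity and a sky brightness contribution to Tsys, has the following textbook form. This is a plane-parallel, zenith-looking sketch under assumed layer ordering, not the actual GBT pipeline.

```python
import numpy as np

def sky_brightness(layer_temps, layer_opacities):
    """Discrete-layer radiative transfer, layers ordered from the
    observer upward: each layer emits T_i * (1 - exp(-tau_i)) and is
    attenuated by the total opacity of the layers between it and the
    observer. Returns (total_opacity, brightness_temperature)."""
    T = np.asarray(layer_temps, float)
    tau = np.asarray(layer_opacities, float)
    # Cumulative opacity below each layer (zero below the first layer)
    atten = np.exp(-np.concatenate([[0.0], np.cumsum(tau[:-1])]))
    T_b = np.sum(T * (1.0 - np.exp(-tau)) * atten)
    return tau.sum(), T_b
```

    For an isothermal atmosphere this reduces to the familiar T_b = T_atm * (1 - exp(-tau)), which is why forecast errors of ~0.01 nepers in tau translate directly into a few kelvin of Tsys uncertainty at the water lines.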

  19. Algorithm For Control Of Large Antenna

    NASA Technical Reports Server (NTRS)

    Hill, Robert E.

    1990-01-01

    Alternative position-error feedback modes provided. Modern control theory basis for computer algorithm used to control two-axis positioning of large antenna. Algorithm - incorporated into software of real-time control computer - enables rapid intertarget positioning as well as precise tracking (using one of two optional position-feedback modes) without need of human operator intervention. Control system for one axis of two-axis azimuth/elevation control system embodied mostly in software based on advanced control theory. System has linear properties of classical linear feedback controller. Performance described by bandwidth and linear error coefficients.
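
    A linear position-error feedback loop of the kind described above can be illustrated with a toy discrete-time sketch. The gains, time step, and the rate-integrator plant model are all invented for illustration and stand in for the antenna's actual dynamics and control law.

```python
def position_loop(target, theta0=0.0, kp=4.0, ki=2.0, dt=0.01, steps=2000):
    """Toy discrete-time position loop with proportional-integral
    feedback on the pointing error. The 'plant' is a pure rate
    integrator: the commanded rate directly drives the axis angle."""
    theta, integ = theta0, 0.0
    for _ in range(steps):
        err = target - theta          # position error
        integ += err * dt             # integral of the error
        rate_cmd = kp * err + ki * integ  # linear feedback law
        theta += rate_cmd * dt        # rate-integrator plant
    return theta
```

    The closed-loop bandwidth is set by the gains (here the error dynamics satisfy e'' + kp*e' + ki*e = 0), and the integral term drives the steady-state error coefficient to zero, matching the classical-controller properties the abstract mentions.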

  20. Energy Savings Performance Contract (ESPC) ENABLE Program

    SciTech Connect

    2012-06-01

    The Energy Savings Performance Contract (ESPC) ENABLE program, a new project funding approach, allows small Federal facilities to realize energy and water savings in six months or less. ESPC ENABLE provides a standardized and streamlined process to install targeted energy conservation measures (ECMs) such as lighting, water, and controls with measurement and verification (M&V) appropriate for the size and scope of the project. This allows Federal facilities smaller than 200,000 square feet to make progress towards important energy efficiency and water conservation requirements.