Sample records for centered MFXDMA algorithms

  1. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems and exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on detrended fluctuation analysis (MFXDFA). We develop in this work a class of MFDCCA algorithms based on detrending moving-average analysis, called MFXDMA. The performance of the proposed MFXDMA algorithms is compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, whose multifractal nature has known theoretical expressions. In all cases, the scaling exponents hxy extracted by the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performance, outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performance, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0 and underperforms when q > 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q), since its hxy(2) is closest to 0.5 as expected, and the MFXDFA algorithm has the second-best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and MFXDFA algorithms fail to extract a rational multifractal nature.
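    The detrending moving-average step at the heart of the centered MFXDMA can be sketched as follows. This is a minimal illustration in plain Python, not the authors' code: it computes the cross-fluctuation F_xy^2 at a single scale n, using a centered moving average for detrending, and omits the segmentation and q-spectrum estimation of the full method.

```python
# Illustrative sketch (not the authors' implementation): one scale of a
# centered detrending-moving-average cross-correlation fluctuation.

def profile(series):
    """Cumulative sum of the mean-removed series."""
    mean = sum(series) / len(series)
    out, acc = [], 0.0
    for v in series:
        acc += v - mean
        out.append(acc)
    return out

def centered_moving_average(y, n):
    """Centered moving average with window n (n odd); truncated at the edges."""
    half = n // 2
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

def dma_cross_fluctuation(x, y, n):
    """F_xy^2(n): mean product of the moving-average-detrended profiles."""
    px, py = profile(x), profile(y)
    mx, my = centered_moving_average(px, n), centered_moving_average(py, n)
    rx = [a - b for a, b in zip(px, mx)]
    ry = [a - b for a, b in zip(py, my)]
    return sum(a * b for a, b in zip(rx, ry)) / len(rx)
```

    In the full algorithm, the scaling exponent hxy is estimated from the slope of log F_xy versus log n over a range of scales.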

  2. Finding topological center of a geographic space via road network

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You

    2015-02-01

    Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including the study of population distribution, urban growth modeling, and scaling properties of urban systems. But how to well define and how to efficiently extract the center of a geographic space are still largely unknown. Recently, Jiang et al. presented a definition of the topological center by their block detection (BD) algorithm. Although they first introduced the definition and discovered the 'true center' in human minds, their algorithm leaves several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road-cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: the ArcPy implementation (arcRCD) and the Python implementation (pyRCD). After experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all the ten typical road networks.

  3. A Modified Artificial Bee Colony Algorithm for p-Center Problems

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The objective of the p-center problem is to locate p centers on a network such that the maximum of the distances from each node to its nearest center is minimized. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic algorithm that mimics the foraging behavior of honey bee colonies. This study proposes a modified ABC algorithm that benefits from a variety of search strategies to balance exploration and exploitation. Moreover, random key-based coding schemes are used to solve the p-center problem effectively. The proposed algorithm is compared to state-of-the-art techniques using different benchmark problems, and computational results reveal that the proposed approach is very efficient. PMID:24616648
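    For context, the p-center objective stated above is commonly attacked with the classic greedy farthest-first heuristic, which the paper's ABC metaheuristic aims to improve upon. A minimal sketch of that baseline (not the paper's algorithm), for points in the plane:

```python
# Baseline sketch: the classic greedy (farthest-first) heuristic for the
# p-center objective. It is a 2-approximation for metric instances; the
# paper's modified ABC algorithm is a metaheuristic alternative.
import math

def farthest_first_centers(points, p):
    """Pick p centers; each new center is the point farthest from the
    current center set. Returns the centers and the covering radius."""
    centers = [points[0]]                 # arbitrary first center
    while len(centers) < p:
        # next center: the point maximizing distance to its nearest center
        nxt = max(points, key=lambda q: min(math.dist(q, c) for c in centers))
        centers.append(nxt)
    radius = max(min(math.dist(q, c) for c in centers) for q in points)
    return centers, radius
```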

  4. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is to optimize network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structural objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
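    The Karmarkar-Karp reference is to the classic largest differencing method for number partitioning. A minimal sketch of that base heuristic (the authors' generalization to placement in a virtual data center is not reproduced here):

```python
# Sketch of the two-way Karmarkar-Karp (largest differencing) heuristic:
# repeatedly commit the two largest weights to opposite sets by replacing
# them with their difference; the last remaining value is the imbalance.
import heapq

def karmarkar_karp_difference(weights):
    """Return the partition imbalance produced by the differencing method."""
    heap = [-w for w in weights]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)          # largest remaining weight
        b = -heapq.heappop(heap)          # second largest
        heapq.heappush(heap, -(a - b))    # place them in opposite sets
    return -heap[0] if heap else 0
```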

  5. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.

  6. Eye center localization and gaze gesture recognition for human-computer interaction.

    PubMed

    Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul

    2016-03-01

    This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, following a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database. It outperforms all the other algorithms in comparison in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has manifested outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.

  7. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

    In this paper, the elementary moves of the BFACF-algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  8. Center Finding Algorithm on slit mask point source for IGRINS (Immersion GRating INfrared Spectrograph)

    NASA Astrophysics Data System (ADS)

    Lee, Hye-In; Pak, Soojong; Lee, Jae-Joon; Mace, Gregory N.; Jaffe, Daniel Thomas

    2017-06-01

    We developed observation control software for the IGRINS (Immersion Grating Infrared Spectrograph) slit-viewing camera module, which points the astronomical target onto the spectroscopy slit and sends tracking feedback to the telescope control system (TCS). The point spread function (PSF) image does not follow a symmetric Gaussian profile. In addition, bright targets are easily saturated and appear as a donut shape. It is not trivial to define and find the center of an asymmetric PSF, especially when most of the stellar PSF falls inside the slit. We made a center balancing algorithm (CBA) which derives the expected center position along the slit-width axis by referencing the stray-flux ratios of the upper and lower sides of the slit. We compared the accuracy of the CBA with that of a two-dimensional Gaussian fitting (2DGA) through simulations in order to evaluate the center finding algorithms. These methods were then verified with observational data. In this poster, we present the results of our tests and suggest a new algorithm for centering targets in the slit image of a spectrograph.
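    The balancing idea can be sketched very simply. This is an illustration of the principle only, not the IGRINS CBA itself: a star sitting off-center across the slit spills more stray flux on one side, and the normalized imbalance maps to an offset estimate via a calibration factor (the `scale` parameter below is hypothetical).

```python
# Simplified sketch of the center-balancing principle (not the IGRINS CBA):
# the normalized stray-flux imbalance across the slit gives an offset estimate.

def balance_offset(flux_upper, flux_lower, scale=1.0):
    """Estimated offset along the slit-width axis. `scale` is a hypothetical
    calibration factor converting the dimensionless imbalance to pixels."""
    total = flux_upper + flux_lower
    if total == 0:
        return 0.0
    return scale * (flux_upper - flux_lower) / total
```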

  9. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering is used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser facula image is carried out to extract the target from the background. Then, morphological filtering is performed to eliminate noise points inside and outside the spot. Finally, the edge of the pretreated facula image is extracted and the laser spot center is obtained by the circle fitting method. On the foundation of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method can effectively filter background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
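    The final circle-fitting step of such a pipeline is often done with the standard algebraic (Kasa) least-squares fit; a self-contained sketch of that fit is shown below (an assumption about the specific fit used, since the abstract does not name one). It solves x² + y² = A·x + B·y + C for the edge points and recovers the center and radius.

```python
# Sketch of an algebraic (Kasa) least-squares circle fit, a common choice
# for the "circle fitting method" step; the paper does not specify its fit.
import math

def fit_circle(points):
    """Fit x^2 + y^2 = A*x + B*y + C by least squares;
    return (cx, cy, r) with center (A/2, B/2), r = sqrt(C + A^2/4 + B^2/4)."""
    # Build the 3x3 normal equations of the overdetermined linear system.
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            rhs[i] += z * row[i]
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(M[r][j] * sol[j] for j in range(r + 1, 3))) / M[r][r]
    A, B, C = sol
    return (A / 2, B / 2, math.sqrt(C + A * A / 4 + B * B / 4))
```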

  10. High-precision positioning system of four-quadrant detector based on the database query

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang

    2015-02-01

    The fine pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on FPGA and DSP is designed in this paper, which realizes A/D sampling, the positioning algorithm and the control of the fast swing mirror. Starting from the working principle of the QD, we analyze the positioning error of the facula center calculated by the universal algorithm when the facula energy obeys a Gaussian distribution. A database is built by calculation and simulation with MATLAB software, in which the facula center calculated by the universal algorithm corresponds to the facula center of the Gaussian beam, and the database is stored in two pieces of E2PROM as the external memory of the DSP. The facula center of the Gaussian beam is queried from the database on the basis of the facula center calculated by the universal algorithm in the DSP. The experimental results show that the positioning accuracy of the high-precision positioning system is much better than that calculated by the universal algorithm.
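    The "universal algorithm" for a four-quadrant detector is the standard normalized-difference formula, sketched below (the quadrant naming convention is an assumption; conventions vary between devices). It is this estimate that the database lookup corrects.

```python
# Sketch of the standard ("universal") four-quadrant center estimate:
# normalized differences of the quadrant intensities give the spot
# displacement from the detector center (quadrant layout assumed:
# A upper-right, B upper-left, C lower-left, D lower-right).

def qd_center(qa, qb, qc, qd):
    """Return the normalized (x, y) displacement of the spot."""
    total = qa + qb + qc + qd
    x = ((qa + qd) - (qb + qc)) / total
    y = ((qa + qb) - (qc + qd)) / total
    return x, y
```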

  11. Halftoning Algorithms and Systems.

    DTIC Science & Technology

    1996-08-01

    [Report documentation page; only fragments recoverable.] Subject terms: halftoning algorithms; error diffusion; color printing; topographic maps. The recoverable fragments mention a calibration procedure assigning graylevels to each screen level, a novel centering concept for overlapping correction on paper/transparency (patent applied 5/94), and applications to error diffusion and dithering.

  12. Airborne Wind Profiling Algorithms for the Pulsed 2-Micron Coherent Doppler Lidar at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.

    2013-01-01

    Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the differing coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
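    The two-LOS idea can be illustrated with a toy calculation (a simplified sketch, not APOLO itself): each line of sight measures the projection of the wind onto the beam, so with two azimuths, and vertical wind assumed zero, the horizontal components follow from a 2x2 linear solve.

```python
# Toy two-LOS wind retrieval (assumes zero vertical wind and a shared
# elevation angle; APOLO additionally handles aircraft-to-Earth coordinate
# transformations, which are omitted here).
import math

def wind_from_two_los(vr1, az1, vr2, az2, elevation):
    """Solve vr = (u*sin(az) + v*cos(az)) * cos(el) for east (u), north (v)."""
    ce = math.cos(elevation)
    a11, a12 = math.sin(az1) * ce, math.cos(az1) * ce
    a21, a22 = math.sin(az2) * ce, math.cos(az2) * ce
    det = a11 * a22 - a12 * a21          # nonzero when azimuths differ
    u = (vr1 * a22 - vr2 * a12) / det
    v = (a11 * vr2 - a21 * vr1) / det
    return u, v
```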

  13. A fast parallel clustering algorithm for molecular simulation trajectories.

    PubMed

    Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui

    2013-01-15

    We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. © 2012 Wiley Periodicals, Inc.
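    The triangle-inequality trick mentioned above can be illustrated on the CPU (a sketch of the idea only, not the authors' GPU code): when a new center c is added, a point x currently assigned to center a cannot move to c if dist(a, c) >= 2*dist(x, a), since then dist(x, c) >= dist(a, c) - dist(x, a) >= dist(x, a), so the distance computation is skipped.

```python
# Sketch of greedy k-centers with triangle-inequality pruning (serial CPU
# illustration of the speedup idea; the paper's implementation is on GPU).
import math

def k_centers_pruned(points, k):
    """Greedy k-centers; skips dist(x, c) whenever the triangle inequality
    proves the new center c cannot beat x's current assignment."""
    centers = [0]                                    # index of first center
    d = [math.dist(p, points[0]) for p in points]    # dist to nearest center
    assign = [0] * len(points)                       # index into `centers`
    for _ in range(k - 1):
        c = max(range(len(points)), key=lambda i: d[i])   # farthest point
        centers.append(c)
        new_idx = len(centers) - 1
        # distance from each existing center to the new one, computed once
        gap = {a: math.dist(points[centers[a]], points[c]) for a in set(assign)}
        for i, p in enumerate(points):
            if gap[assign[i]] < 2 * d[i]:            # pruning test
                dc = math.dist(p, points[c])
                if dc < d[i]:
                    d[i], assign[i] = dc, new_idx
    return [points[i] for i in centers], assign
```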

  14. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    PubMed

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding design and the procedure of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.

  15. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated with prediction. Aiming at the large random drift error of a MEMS (Micro-Electro-Mechanical System) gyroscope, an improved learning algorithm for a Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then generates candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-7 °/s and a prediction time of 2.4169e-6 s.

  16. Scheduling logic for Miles-In-Trail traffic management

    NASA Technical Reports Server (NTRS)

    Synnestvedt, Robert G.; Swenson, Harry; Erzberger, Heinz

    1995-01-01

    This paper presents an algorithm which can be used for scheduling arrival air traffic in an Air Route Traffic Control Center (ARTCC or Center) entering a Terminal Radar Approach Control (TRACON) facility. The algorithm aids a Traffic Management Coordinator (TMC) in deciding how to restrict traffic while the traffic expected to arrive in the TRACON exceeds the TRACON capacity. The restrictions employed fall under the category of Miles-in-Trail, one of two principal traffic separation techniques used in scheduling arrival traffic. The algorithm calculates aircraft separations for each stream of aircraft destined to the TRACON. The calculations depend upon TRACON characteristics, TMC preferences, and other parameters adapted to the specific needs of scheduling traffic in a Center. Some preliminary results of traffic simulations scheduled by this algorithm are presented, and conclusions are drawn as to the effectiveness of using this algorithm in different traffic scenarios.

  17. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
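    The third-order polynomial gain described above can be sketched in a few lines. The coefficient values below are hypothetical illustrations, not the tuned NASA coefficient sets; with a negative cubic term, small inputs pass through nearly linearly while large inputs are attenuated toward the motion-system limits.

```python
# Sketch of a per-degree-of-freedom third-order polynomial gain of the kind
# described in the abstract (coefficients here are hypothetical examples).

def cubic_gain(x, c1, c3):
    """Scale one degree-of-freedom input by a third-order polynomial."""
    return c1 * x + c3 * x ** 3

def apply_gains(inputs, coeffs):
    """Apply a (c1, c3) coefficient pair to each degree-of-freedom input."""
    return {dof: cubic_gain(val, *coeffs[dof]) for dof, val in inputs.items()}
```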

  18. Logistics Distribution Center Location Evaluation Based on Genetic Algorithm and Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Shao, Yuxiang; Chen, Qing; Wei, Zhenhua

    Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate the distribution center location by traditional analysis methods. The paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system's self-learning and self-adaptation abilities. Finally, the sampled data are trained and tested with MATLAB software. The simulation results indicate that the proposed identification model has very small errors.

  19. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed.
Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is to store the data points in a kd-tree data structure. The assignment of points to nearest neighbors is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster, because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.

  20. An improved initialization center k-means clustering algorithm based on distance and density

    NASA Astrophysics Data System (ADS)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    Aiming at the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across multiple clustering runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent a sample's density, and data samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. Then, a clustering evaluation method based on distance and density is designed to verify the feasibility and practicality of the algorithm. The experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
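    A minimal sketch of the initialization idea described above, under my reading of the abstract (the exact weighting scheme of the paper is not specified, so an unweighted mean distance stands in for the weighted average): density is the reciprocal of a point's mean distance to all others, the densest point seeds the center set, and each further center maximizes density times distance to the already-chosen centers.

```python
# Sketch of a distance-and-density k-means initialization (an interpretation
# of the abstract; the paper's precise weighting is not reproduced).
import math

def init_centers(points, k):
    """density(i) = 1 / mean distance to all other points; first center is
    the densest point; each next center maximizes
    (min distance to chosen centers) * density."""
    n = len(points)
    dens = []
    for i, p in enumerate(points):
        avg = sum(math.dist(p, q) for j, q in enumerate(points) if j != i) / (n - 1)
        dens.append(1.0 / avg)
    centers = [max(range(n), key=lambda i: dens[i])]
    while len(centers) < k:
        nxt = max((i for i in range(n) if i not in centers),
                  key=lambda i: dens[i] * min(math.dist(points[i], points[c])
                                              for c in centers))
        centers.append(nxt)
    return [points[i] for i in centers]
```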

  1. LPA-CBD an improved label propagation algorithm based on community belonging degree for community detection

    NASA Astrophysics Data System (ADS)

    Gui, Chun; Zhang, Ruisheng; Zhao, Zhili; Wei, Jiaxuan; Hu, Rongjing

    In order to deal with the stochasticity of center node selection and the instability of community detection in the label propagation algorithm, this paper proposes an improved label propagation algorithm, named label propagation algorithm based on community belonging degree (LPA-CBD), that employs community belonging degree to determine the number and centers of communities. The general process of LPA-CBD is that an initial community is identified by the nodes with the maximum degree, and then it is optimized or expanded by community belonging degree. After obtaining the rough structure of the network's communities, the remaining nodes are labeled by using the label propagation algorithm. The experimental results on 10 real-world networks and three synthetic networks show that LPA-CBD achieves a reasonable community number, better accuracy and higher modularity compared with four other prominent algorithms. Moreover, the proposed algorithm not only has lower complexity and higher community detection quality, but also improves the stability of the original label propagation algorithm.

  2. A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering

    ERIC Educational Resources Information Center

    Chahine, Firas Safwan

    2012-01-01

    Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithm, but has a major…

  3. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H₂ and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  4. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine

    2009-03-05

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max technique.

  5. Model-based sphere localization (MBSL) in x-ray projections

    NASA Astrophysics Data System (ADS)

    Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc

    2017-08-01

    The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center on the detector is of particular interest, since the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are used instead, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method matching an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties, besides the fact that the object is of spherical shape, need to be known to find the center of the bead. It is shown by simulations that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size in case of realistic noise levels, and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate than these algorithms. The proposed method is an accurate algorithm to estimate the center of spherical markers in CT projections in the presence of polychromatic effects and noise.

  6. Soil water balance calculation using a two source energy balance model and wireless sensor arrays aboard a center pivot

    USDA-ARS?s Scientific Manuscript database

    Recent developments in wireless sensor technology and remote sensing algorithms, coupled with increased use of center pivot irrigation systems, have removed several long-standing barriers to adoption of remote sensing for real-time irrigation management. One remote sensing-based algorithm is a two s...

  7. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Conners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  8. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Internet applications have become so complex that a mobile device needs more computing resources to achieve short execution times, yet it is constrained by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resources of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments using an offloading scheme. Deciding which tasks should be offloaded, and how to offload them efficiently, is vital to MCC. In this paper, we formulate the offloading problem between mobile devices and cloud data centers and propose two application-oriented algorithms for minimizing execution time: the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
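    The core trade-off, offload only when transfer plus remote execution beats local execution, can be illustrated with a toy model. The names, link parameters, and cost model below are illustrative assumptions, not the paper's MOTM/METC formulations:

```python
def best_offload(task_cycles, data_bits, local_speed, links):
    """Pick the option minimizing completion time for one task.

    local time  = task_cycles / local_speed
    remote time = data_bits / bandwidth + task_cycles / cloud_speed
    `links` maps link name -> (bandwidth_bps, cloud_speed_cps).
    """
    best = ("local", task_cycles / local_speed)
    for name, (bw, cloud_speed) in links.items():
        t = data_bits / bw + task_cycles / cloud_speed
        if t < best[1]:
            best = (name, t)
    return best
```

    With a fast link, offloading wins easily; with a slow link, the transfer cost dominates and the task stays local, which is why link selection per application category matters.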

  9. An AK-LDMeans algorithm based on image clustering

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for handling unlabeled data in value mining; its ultimate goal is to label unclassified data quickly and correctly. We use a road map from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the K value by designing a Kcost fold line and then uses a long-distance high-density method to select the clustering centers, replacing the traditional initial-center selection and further improving the efficiency and accuracy of the traditional K-means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision, and data mining.
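    The "Kcost fold line" is not specified in this record; a common stand-in for locking K automatically is knee detection on the cost-versus-K curve. A sketch under that assumption:

```python
def elbow_k(costs):
    """Pick K at the 'knee' of a decreasing cost-vs-K curve.

    `costs[i]` is the clustering cost for K = i + 1. The knee is taken
    where the marginal improvement falls off most sharply (a stand-in
    for the paper's Kcost fold-line rule, whose exact form isn't given).
    """
    # Drop in cost when going from K to K+1.
    drops = [costs[i] - costs[i + 1] for i in range(len(costs) - 1)]
    # Change in that drop: large values mark the fold in the curve.
    kinks = [drops[i] - drops[i + 1] for i in range(len(drops) - 1)]
    return kinks.index(max(kinks)) + 2  # +2: K is 1-based, kink follows K
```

    For a cost curve that falls steeply and then flattens, the index of the sharpest kink marks the smallest K that captures most of the structure.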

  10. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    NASA Astrophysics Data System (ADS)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    To remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. First, large-scale outliers are removed using statistics of the neighboring points within radius r. Then, the algorithm estimates the curvature of the point cloud data by conicoid (paraboloid) fitting and computes a curvature feature value. Finally, the proposed clustering algorithm is applied to compute the weighted cluster centers, which are taken as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
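    A weighted fuzzy c-means update of the kind the abstract describes can be sketched as follows; the per-point weights would come from the curvature estimate, but here they are arbitrary inputs:

```python
import numpy as np

def wfcm_step(X, V, w, m=2.0, eps=1e-9):
    """One weighted fuzzy c-means iteration.

    X: (n, d) points, V: (c, d) current centers, w: (n,) per-point
    weights (the paper derives them from estimated curvature; any
    positive weights work in this sketch).
    """
    # Squared distances point-to-center, regularized to avoid div-by-0.
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps
    # Standard FCM membership update.
    U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    U /= U.sum(axis=1, keepdims=True)
    # Weighted center update: the feature weight multiplies u^m.
    Um = w[:, None] * U ** m
    V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return V_new, U
```

    Points on sharp features would receive larger weights, pulling the new centers (the denoised points) toward them instead of smearing edges away.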

  11. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    PubMed

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
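    The validation statistics above reduce to two standard ratios. A sketch with illustrative counts (the abstract reports only percentages, not the raw tallies):

```python
def ppv_sensitivity(tp, fp, fn):
    """Positive predictive value and sensitivity from validation counts.

    PPV         = TP / (TP + FP): fraction of algorithm hits that are real.
    Sensitivity = TP / (TP + FN): fraction of real cases the algorithm finds.
    """
    return tp / (tp + fp), tp / (tp + fn)
```

    Restricting to a primary diagnosis (Algorithm 2) trades a few true positives (slightly lower sensitivity) for fewer false positives (higher PPV), which is the pattern the study reports.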

  12. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  13. Impact of data fragmentation across healthcare centers on the accuracy of a high-throughput clinical phenotyping algorithm for specifying subjects with type 2 diabetes mellitus

    PubMed Central

    Wei, Wei-Qi; Leibson, Cynthia L; Ransom, Jeanine E; Kho, Abel N; Caraballo, Pedro J; Chai, High Seng; Yawn, Barbara P; Pacheco, Jennifer A

    2012-01-01

    Objective To evaluate data fragmentation across healthcare centers with regard to the accuracy of a high-throughput clinical phenotyping (HTCP) algorithm developed to differentiate (1) patients with type 2 diabetes mellitus (T2DM) and (2) patients with no diabetes. Materials and methods This population-based study identified all Olmsted County, Minnesota residents in 2007. We used provider-linked electronic medical record data from the two healthcare centers that provide >95% of all care to County residents (ie, Olmsted Medical Center and Mayo Clinic in Rochester, Minnesota, USA). Subjects were limited to residents with one or more encounters from January 1, 2006 through December 31, 2007 at both healthcare centers. DM-relevant data on diagnoses, laboratory results, and medication from both centers were obtained during this period. The algorithm was first executed using data from both centers (ie, the gold standard) and then from Mayo Clinic alone. Positive predictive values and false-negative rates were calculated, and the McNemar test was used to compare categorization based on Mayo Clinic data alone with the gold standard. Age and sex were compared between true-positive and false-negative subjects with T2DM. Statistical significance was accepted as p<0.05. Results With data from both medical centers, 765 subjects with T2DM (4256 non-DM subjects) were identified. When single-center data were used, 252 T2DM subjects (1573 non-DM subjects) were missed, and an additional 27 false-positive T2DM subjects (215 non-DM subjects) were identified. The positive predictive values and false-negative rates were 95.0% (513/540) and 32.9% (252/765), respectively, for T2DM subjects and 92.6% (2683/2898) and 37.0% (1573/4256), respectively, for non-DM subjects. Age and sex distribution differed between true-positive (mean age 62.1; 45.0% female) and false-negative (mean age 65.0; 56.0% female) T2DM subjects.
Conclusion The findings show that application of an HTCP algorithm using data from a single medical center contributes to misclassification. These findings should be considered carefully by researchers when developing and executing HTCP algorithms. PMID:22249968

  14. Information Clustering Based on Fuzzy Multisets.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…

  15. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to user requirements by sharing resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population. The solution computed by the second-stage genetic algorithm is the final result of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment ensures QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872

  16. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to user requirements by sharing resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population. The solution computed by the second-stage genetic algorithm is the final result of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment ensures QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
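    A much-simplified, single-stage GA for VM placement can be sketched as follows. The fitness (minimize hosts switched on, penalize capacity violations), operators, and parameters are illustrative; the paper's DPGA is two-stage and distributed:

```python
import random

def ga_place(n_vms, n_hosts, demand, capacity, pop=30, gens=60, seed=0):
    """Toy GA: assign each VM to a host, minimizing powered-on hosts
    subject to a per-host capacity, via selection/crossover/mutation.
    """
    rng = random.Random(seed)

    def cost(chrom):
        load = [0] * n_hosts
        for vm, h in enumerate(chrom):
            load[h] += demand[vm]
        over = sum(max(0, l - capacity) for l in load)
        return sum(1 for l in load if l > 0) + 10 * over  # penalty term

    popn = [[rng.randrange(n_hosts) for _ in range(n_vms)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        popn = popn[: pop // 2]                  # selection: keep best half
        while len(popn) < pop:
            a, b = rng.sample(popn[: pop // 4], 2)
            cut = rng.randrange(1, n_vms)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:               # mutation
                child[rng.randrange(n_vms)] = rng.randrange(n_hosts)
            popn.append(child)
    best = min(popn, key=cost)
    return best, cost(best)
```

    Seeding the second stage with first-stage winners, as the paper does, amounts to replacing the random initial population above with already-good chromosomes.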

  17. Flight Evaluation of an Aircraft with Side and Center Stick Controllers and Rate-Limited Ailerons

    NASA Technical Reports Server (NTRS)

    Deppe, P. R.; Chalk, C. R.; Shafer, M. F.

    1996-01-01

    As part of an ongoing government and industry effort to study the flying qualities of aircraft with rate-limited control surface actuators, two studies were previously flown to examine an algorithm developed to reduce the tendency for pilot-induced oscillation when rate limiting occurs. This algorithm, when working properly, greatly improved the performance of the aircraft in the first study. In the second study, however, the algorithm did not initially offer as much improvement. The differences between the two studies caused concern. The study detailed in this paper was performed to determine whether the performance of the algorithm was affected by the characteristics of the cockpit controllers. Time delay and flight control system noise were also briefly evaluated. An in-flight simulator, the Calspan Learjet 25, was programmed with a low roll actuator rate limit, and the algorithm was programmed into the flight control system. Side- and center-stick controllers, force and position command signals, a rate-limited feel system, a low-frequency feel system, and a feel system damper were evaluated. The flight program consisted of four flights and 38 evaluations of test configurations. Performance of the algorithm was determined to be unaffected by using side- or center-stick controllers or force or position command signals. The rate-limited feel system performed as well as the rate-limiting algorithm but was disliked by the pilots. The low-frequency feel system and the feel system damper were ineffective. Time delay and noise were determined to degrade the performance of the algorithm.
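    The actuator rate limiting at the heart of these studies is easy to state; the PIO-suppression algorithm itself is not described in this record, so only the limiter is sketched here:

```python
def rate_limit(commands, max_rate, dt):
    """Apply an actuator rate limit: the output may change by at most
    max_rate * dt per step, so it lags behind fast-moving commands
    (the phase lag that drives pilot-induced-oscillation tendencies).
    """
    out = [commands[0]]
    for c in commands[1:]:
        step = max(-max_rate * dt, min(max_rate * dt, c - out[-1]))
        out.append(out[-1] + step)
    return out
```

    When the pilot reverses a large command mid-slew, the limited surface is still catching up, which is the effective time delay the tested algorithm and the rate-limited feel system both try to keep out of the pilot's loop.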

  18. Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    NASA Astrophysics Data System (ADS)

    Bae, Eun-Hyon; Lee, Kyun-Kyung

    A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source with a uniform circular array (UCA) that includes a center sensor. Exploiting the centrosymmetry of the UCA and the noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e. the azimuth and elevation angles, and then estimates the range of the source. Despite its low computational complexity, the proposed algorithm provides estimation performance close to that of the benchmark 3-D MUSIC estimator.

  19. A View from Above Without Leaving the Ground

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In order to deliver accurate geospatial data and imagery to the remote sensing community, NASA is constantly developing new image-processing algorithms while refining existing ones for technical improvement. For 8 years, the NASA Regional Applications Center at Florida International University has served as a test bed for implementing and validating many of these algorithms, helping the Space Program to fulfill its strategic and educational goals in the area of remote sensing. The algorithms in return have helped the NASA Regional Applications Center develop comprehensive semantic database systems for data management, as well as new tools for disseminating geospatial information via the Internet.

  20. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    PubMed

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in PSIR images (n = 49). 
    The EWA algorithm was validated experimentally and in patient data, showing low bias in both IR and PSIR LGE images. Thus, the use of EM and a weighted intensity, as in the EWA algorithm, may serve as a clinical standard for quantification of myocardial infarction in LGE CMR images. Trial registration: CHILL-MI, NCT01379261; NCT01374321.

  1. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the nonlinear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the nonlinear algorithm performed filtering of motion cues in all degrees of freedom except pitch and roll. This manuscript describes the development and implementation of the nonlinear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The results presented indicate improved cues in the specified channels compared with the original design. To further advance motion cueing in general, this manuscript also describes modifications to the existing algorithm that allow filtering at the location of the pilot's head rather than at the centroid of the motion platform. The rationale for this modification is that the location of the pilot's vestibular system must be taken into account, not just the offset of the cockpit centroid relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  2. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    The center's research covered quantum algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations. Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction. Representative output includes "Algebraic results on quantum automata" by A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, and D. Thérien, Theory of Computing Systems 39 (2006).

  3. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the wheel slip and stabilize the robot's motion using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot. The robot is based on a mobile platform with two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the platform. A state-space model of the robot was derived using standard black-box system identification; the input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with a quality assessment of the algorithm, comparing the trajectories it estimates with data from the motion capture system.
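    Once the flow field has been back-projected to ground-plane point velocities, the instantaneous center of rotation follows from a rigid-motion least-squares fit. A generic sketch of that fit (not the paper's Horn-Schunck pipeline):

```python
import numpy as np

def estimate_icr(P, V):
    """Least-squares instantaneous center of rotation from planar
    point velocities.

    Rigid planar motion: vx = -w*(py - cy), vy = w*(px - cx).
    Solve the linear system for x = [w, w*cy, w*cx], then recover c.
    """
    P, V = np.asarray(P, float), np.asarray(V, float)
    n = len(P)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0] = -P[:, 1]; A[0::2, 1] = 1.0   # vx = -w*py + (w*cy)
    A[1::2, 0] = P[:, 0];  A[1::2, 2] = -1.0  # vy =  w*px - (w*cx)
    b[0::2] = V[:, 0]; b[1::2] = V[:, 1]
    w, wcy, wcx = np.linalg.lstsq(A, b, rcond=None)[0]
    return w, (wcx / w, wcy / w)
```

    In practice the flow field is noisy, so the overdetermined least-squares form (many ground points, three unknowns) is exactly what makes the estimate usable.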

  4. Analysing the Effects of Different Land Cover Types on Land Surface Temperature Using Satellite Data

    NASA Astrophysics Data System (ADS)

    Şekertekin, A.; Kutoglu, Ş. H.; Kaya, S.; Marangoz, A. M.

    2015-12-01

    Monitoring land surface temperature (LST) via remote sensing images is one of the most important contributions to climatology. LST is an important parameter governing the energy balance on the Earth, and it also helps us understand the behavior of urban heat islands. Several algorithms exist for obtaining LST by remote sensing techniques; the most commonly used are the split-window algorithm, the temperature/emissivity separation method, the mono-window algorithm, and the single-channel method. In this research, the mono-window algorithm was applied to a Landsat 5 TM image acquired on 28.08.2011, together with meteorological data such as humidity and air temperature. Moreover, high-resolution GeoEye-1 and WorldView-2 images acquired on 29.08.2011 and 12.07.2013, respectively, were used to investigate the relationships between LST and land cover type. The analyses show that vegetated areas are approximately 5 °C cooler than the city center and arid land, and that LST varies by about 10 °C within the city center because of differing surface properties such as reinforced concrete construction, green zones, and sandbanks. The temperature around the thermal power plants (ÇATES and ZETES) in Çatalağzı is about 5 °C higher than in the city center. Sandbanks and agricultural areas have the highest temperatures due to their land cover structure.
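    The mono-window algorithm is commonly implemented with the Qin et al. linearization; a sketch under that assumption (the coefficients a and b are the values usually quoted for Landsat TM band 6; inputs are at-sensor brightness temperature, surface emissivity, atmospheric transmittance, and effective mean atmospheric temperature, all temperatures in kelvin):

```python
def mono_window_lst(t_sensor, emissivity, tau, t_a,
                    a=-67.355351, b=0.458606):
    """Mono-window LST (Qin et al. form):

        C  = emissivity * tau
        D  = (1 - tau) * (1 + (1 - emissivity) * tau)
        Ts = [a*(1-C-D) + (b*(1-C-D) + C + D)*T_sensor - D*T_a] / C
    """
    C = emissivity * tau
    D = (1.0 - tau) * (1.0 + (1.0 - emissivity) * tau)
    return (a * (1.0 - C - D)
            + (b * (1.0 - C - D) + C + D) * t_sensor
            - D * t_a) / C
```

    As a sanity check, with a perfectly transparent atmosphere and unit emissivity (tau = 1, emissivity = 1) the formula collapses to Ts = T_sensor.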

  5. High spatial resolution technique for SPECT using a fan-beam collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichihar, T.; Nambu, K.; Motomura, N.

    1993-08-01

    The physical characteristics of the collimator cause degradation of resolution with increasing distance from the collimator surface. A new convolution-backprojection algorithm has been derived for fan-beam SPECT data that does not require rebinning into parallel-beam geometry. The projections are filtered and then backprojected into the area within an isosceles triangle whose vertex is the focal point of the fan beam and whose base is the fan-beam collimator face, excluding the circle whose center is located midway between the focal point and the center of rotation and whose diameter is the distance between them. Consequently, the backprojected area is close to the collimator surface. This algorithm has been implemented on a GCA-9300A SPECT system, showing good results in both phantom and patient studies. The SPECT transaxial resolution was 4.6 mm FWHM (reconstructed image matrix size 256x256) at the center of the SPECT field of view using ultra-high-resolution (UHR) fan-beam collimators for brain studies. Clinically, Tc-99m HMPAO and Tc-99m ECD brain data were reconstructed using this algorithm. The reconstructions were compared with MRI images at the same slice positions and were significantly improved over results obtained with standard reconstruction algorithms.
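    The backprojection region described (inside the isosceles triangle, outside the mid-circle) is a simple geometric membership test. A 2-D sketch of that check, with the geometry taken from the abstract:

```python
import numpy as np

def in_backprojection_region(p, focal, det_left, det_right, cor):
    """True if point p lies inside the triangle (apex at the focal
    spot, base from det_left to det_right along the collimator face)
    AND outside the circle centered midway between the focal spot and
    the center of rotation, with diameter equal to their distance.
    """
    p = np.asarray(p, float)
    f = np.asarray(focal, float)
    dl = np.asarray(det_left, float)
    dr = np.asarray(det_right, float)
    o = np.asarray(cor, float)

    def side(a, b, q):
        # z-component of the cross product (b - a) x (q - a)
        u, v = b - a, q - a
        return u[0] * v[1] - u[1] * v[0]

    s = (side(f, dl, p), side(dl, dr, p), side(dr, f, p))
    inside_tri = all(x >= 0 for x in s) or all(x <= 0 for x in s)
    m = 0.5 * (f + o)  # circle center midway between focal spot and COR
    outside_circle = np.linalg.norm(p - m) > 0.5 * np.linalg.norm(f - o)
    return bool(inside_tri and outside_circle)
```

    Restricting the backprojection to this region is what keeps the reconstruction confined to the zone near the collimator face where resolution is best.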

  6. Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect.

    PubMed

    Yang, Hui; Zhang, Jie; Zhao, Yongli; Ji, Yuefeng; Wu, Jialin; Lin, Yi; Han, Jianrui; Lee, Young

    2015-05-18

    Inter-data-center interconnect with IP over elastic optical networks (EON) is a promising scenario for meeting the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resources integration among IP networks, optical networks, and application-stratum resources to accommodate data center services. This study extends that work to consider service resilience in case of edge optical node failure. We propose a novel multi-stratum resources integrated resilience (MSRIR) architecture for services in software-defined inter-data-center interconnect based on IP over EON, and introduce a global resources integrated resilience (GRIR) algorithm based on the proposed architecture. MSRIR enables cross-stratum optimization, provides resilience using resources from multiple stratums, and enhances the responsiveness of data center service resilience to dynamic end-to-end service demands. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN (eSDN) testbed. The performance of the GRIR algorithm under a heavy-traffic-load scenario is also quantitatively evaluated on the MSRIR architecture in terms of path blocking probability, resilience latency, and resource utilization, and compared with other resilience algorithms.

  7. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery centers, and recycling centers. In this supply chain, the centers span multiple levels, a price-increase factor is applied to operational costs at the centers, inventory and shortages (including lost sales and backlogs) are allowed at production centers, and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departure times, are modeled, such that both the sum of system costs and the sum of the maximum times at each level are minimized. The problem is formulated as a bi-objective nonlinear integer programming model. Due to its NP-hard nature, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large instances. In addition, a Taguchi method is used to tune the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small instances are compared with those of the ɛ-constraint method. Finally, four metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
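    Of the four comparison metrics, the spacing metric has a compact closed form. A sketch using Schott's common definition (the paper may use a variant):

```python
import numpy as np

def spacing(front):
    """Schott's spacing metric for a Pareto front: the standard
    deviation of each solution's nearest-neighbor city-block distance.
    Lower values mean a more evenly spread front; 0 means perfectly
    even spacing.
    """
    F = np.asarray(front, float)
    n = len(F)
    # Pairwise city-block (L1) distances between front members.
    d = np.abs(F[:, None, :] - F[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)      # ignore self-distance
    di = d.min(axis=1)               # nearest-neighbor distance each
    return float(np.sqrt(((di - di.mean()) ** 2).sum() / (n - 1)))
```

    A front whose points are equally spaced along the trade-off curve scores 0, so the metric rewards diversity of solutions rather than closeness to the true front (which mean ideal distance captures instead).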

  8. Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Ellison, Charles Leland

    Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators --- known as variational integration --- to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate - the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. 
The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.

  9. Momentum Advection on a Staggered Mesh

    NASA Astrophysics Data System (ADS)

    Benson, David J.

    1992-05-01

    Eulerian and ALE (arbitrary Lagrangian-Eulerian) hydrodynamics programs usually split a timestep into two parts. The first part is a Lagrangian step, which calculates the incremental motion of the material. The second part is referred to as the Eulerian step, the advection step, or the remap step, and it accounts for the transport of material between cells. In most finite difference and finite element formulations, all the solution variables except the velocities are cell-centered while the velocities are edge- or vertex-centered. As a result, the advection algorithm for the momentum is, by necessity, different than the algorithm used for the other variables. This paper reviews three momentum advection methods and proposes a new one. One method, pioneered in YAQUI, creates a new staggered mesh, while the other two, used in SALE and SHALE, are cell-centered. The new method is cell-centered and its relationship to the other methods is discussed. Both pure advection and strong shock calculations are presented to substantiate the mathematical analysis. From the standpoint of numerical accuracy, both the staggered mesh and the cell-centered algorithms can give good results, while the computational costs are highly dependent on the overall architecture of a code.
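The split described above can be sketched in one dimension. The code below is a hedged illustration, not the paper's formulation: donor-cell (first-order upwind) advection of a cell-centered quantity, plus the node-to-cell averaging step that cell-centered momentum advection methods rely on; all names and the CFL choice are illustrative.

```python
# Hedged 1D sketch of the advection (remap) step: donor-cell (first-order
# upwind) transport of a cell-centered quantity, plus the node-to-cell
# averaging that cell-centered momentum advection methods rely on.

def donor_cell_advect(q, u, dx, dt):
    """Advect cell-centered q with face velocities u (len(u) == len(q) + 1)."""
    n = len(q)
    flux = [0.0] * (n + 1)             # boundary fluxes left at zero
    for i in range(1, n):              # interior faces: take the upwind cell value
        flux[i] = u[i] * (q[i - 1] if u[i] > 0 else q[i])
    return [q[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

def node_momentum_to_cells(v_node, m_node):
    """Average node-centered momentum (mass * velocity) to cell centers."""
    return [0.5 * (m_node[i] * v_node[i] + m_node[i + 1] * v_node[i + 1])
            for i in range(len(v_node) - 1)]

# usage: a step profile advected to the right at CFL = 0.5
q = [1.0, 1.0, 0.0, 0.0]
u = [0.5] * 5
q_new = donor_cell_advect(q, u, dx=1.0, dt=1.0)
```

Because the interior fluxes telescope and the boundary fluxes are zero, the scheme conserves the total of `q`, which is the property the advection step must preserve.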

  10. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
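A minimal sketch of the boundary-curvature idea, with assumptions stated up front: a discrete turning angle serves as the curvature value, points whose curvature exceeds a threshold are discarded as occlusion artifacts, and the center is estimated from the remaining boundary points. The centroid here stands in for the least-squares ellipse fit, and the threshold value is illustrative.

```python
import math

def turning_angles(pts):
    """Discrete curvature proxy: exterior (turning) angle at each boundary point."""
    n = len(pts)
    ang = []
    for i in range(n):
        x0, y0 = pts[i - 1]
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        ang.append(abs(d))
    return ang

def pupil_center(pts, thresh):
    """Drop high-curvature (occluded) points, estimate center from the rest."""
    ang = turning_angles(pts)
    kept = [p for p, a in zip(pts, ang) if a < thresh]
    cx = sum(p[0] for p in kept) / len(kept)
    cy = sum(p[1] for p in kept) / len(kept)
    return cx, cy

# usage: a synthetic circular boundary with one occluding spike
pts = [(10 * math.cos(2 * math.pi * i / 36), 10 * math.sin(2 * math.pi * i / 36))
       for i in range(36)]
pts[0] = (15.0, 0.0)          # simulated occlusion artifact
cx, cy = pupil_center(pts, thresh=0.3)
```

The spike induces the characteristic curvature peaks the abstract describes, so the affected points are excluded and the center estimate stays near the true circle center.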

  11. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators, and digital signal processing chains are dynamically loaded and unloaded to provide wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day demand efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.

  12. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It provides an overview of CFD activities at NASA Lewis Research Center, where the main thrust of computational work is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are discussed, along with examples of results obtained with the most recent algorithm developments.

  13. Efficient evaluation of three-center Coulomb integrals

    PubMed Central

    Samu, Gyula

    2017-01-01

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara–Saika, McMurchie–Davidson, Gill–Head-Gordon–Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara–Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill–Head-Gordon–Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout. PMID:28571354

  14. Efficient evaluation of three-center Coulomb integrals.

    PubMed

    Samu, Gyula; Kállay, Mihály

    2017-05-28

    In this study we pursue the most efficient paths for the evaluation of three-center electron repulsion integrals (ERIs) over solid harmonic Gaussian functions of various angular momenta. First, the adaptation of the well-established techniques developed for four-center ERIs, such as the Obara-Saika, McMurchie-Davidson, Gill-Head-Gordon-Pople, and Rys quadrature schemes, and the combinations thereof for three-center ERIs is discussed. Several algorithmic aspects, such as the order of the various operations and primitive loops as well as prescreening strategies, are analyzed. Second, the number of floating point operations (FLOPs) is estimated for the various algorithms derived, and based on these results the most promising ones are selected. We report the efficient implementation of the latter algorithms invoking automated programming techniques and also evaluate their practical performance. We conclude that the simplified Obara-Saika scheme of Ahlrichs is the most cost-effective one in the majority of cases, but the modified Gill-Head-Gordon-Pople and Rys algorithms proposed herein are preferred for particular shell triplets. Our numerical experiments also show that even though the solid harmonic transformation and the horizontal recurrence require significantly fewer FLOPs if performed at the contracted level, this approach does not improve the efficiency in practical cases. Instead, it is more advantageous to carry out these operations at the primitive level, which allows for more efficient integral prescreening and memory layout.

  15. Real-Gas Flow Properties for NASA Langley Research Center Aerothermodynamic Facilities Complex Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamic Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamic Facilities Complex wind tunnels.

  16. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
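The solution criterion described above can be written down directly. The sketch below evaluates it for a candidate partition (a helper for checking solutions, not the 2-approximation algorithm itself); points are assumed to be Euclidean tuples.

```python
def criterion(c1, c2, given_center):
    """Cardinality-weighted sum of squared distances to cluster centers:
    c1's center is fixed (given as input); c2's center is its mean."""
    def sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    m2 = tuple(sum(col) / len(c2) for col in zip(*c2))  # mean of cluster 2
    return (len(c1) * sum(sq(p, given_center) for p in c1)
            + len(c2) * sum(sq(p, m2) for p in c2))
```

For example, with c1 = {(1,0), (-1,0)} around the given center (0,0) and c2 = {(4,0), (6,0)} around its mean (5,0), each cluster contributes 2 * 2 = 4, so the criterion value is 8.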

  17. EV Charging Algorithm Implementation with User Price Preference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Hu, Boyang; Qiu, Charlie

    2015-02-17

    In this paper, we propose and implement a smart Electric Vehicle (EV) charging algorithm to control EV charging infrastructure according to users' price preferences. EVSEs (Electric Vehicle Supply Equipment), equipped with bidirectional communication devices and smart meters, can be remotely monitored by the proposed charging algorithm applied to the EV control center and a mobile app. On the server side, an ARIMA model is utilized to fit historical charging load data and perform day-ahead prediction. A pricing strategy with an energy bidding policy is proposed and implemented to generate a charging price list that is broadcast to EV users through the mobile app. On the user side, EV drivers can submit their price preferences and daily travel schedules to negotiate with the control center to consume the expected energy and minimize charging cost simultaneously. The proposed algorithm is tested and validated through experimental implementations in UCLA parking lots.
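As a hedged stand-in for the server-side ARIMA day-ahead load prediction, the sketch below fits a first-order autoregressive model by least squares and rolls it forward; the real system's model order, exogenous terms, and data are not specified here, so this is only the shape of the idea.

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t] ~ a + b * x[t-1].
    A deliberate simplification of the ARIMA load model described."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    b = num / den
    a = my - b * mx
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted model forward `steps` periods (day-ahead prediction)."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

# usage: synthetic load history generated by x[t+1] = 1 + 0.5 * x[t]
history = [0.0, 1.0, 1.5, 1.75, 1.875]
a, b = fit_ar1(history)
day_ahead = forecast(history, 3, a, b)
```

On data that exactly follows an AR(1) law, the fit recovers the coefficients, which makes the sketch easy to sanity-check before swapping in a real ARIMA library.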

  18. Computations on the massively parallel processor at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
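The bitonic sort mentioned as the fourth algorithm is a fixed compare-exchange network, which is why it maps well onto arrays with nearest-neighbor interconnection constraints like the MPP: every stage exchanges elements at a fixed distance. A standard power-of-two sketch (sequential, for illustration only):

```python
def bitonic_sort(a, ascending=True):
    """Bitonic sort for power-of-two-length sequences. Each _merge stage
    compares elements a fixed distance apart, the pattern that maps to
    regular inter-processor communication on a parallel array."""
    n = len(a)
    if n <= 1:
        return list(a)
    first = bitonic_sort(a[:n // 2], True)    # ascending half
    second = bitonic_sort(a[n // 2:], False)  # descending half -> bitonic
    return _merge(first + second, ascending)

def _merge(a, ascending):
    n = len(a)
    if n <= 1:
        return list(a)
    a = list(a)
    half = n // 2
    for i in range(half):                     # fixed-distance compare-exchange
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _merge(a[:half], ascending) + _merge(a[half:], ascending)
```

On a parallel machine the two recursive `_merge` halves run concurrently; the sequential version above preserves only the comparison pattern.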

  19. Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.

    PubMed

    Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho

    2011-01-01

    The internal pressures of medication reservoirs of gas-compensated intrathecal medication infusion pumps decrease when medication is discharged, and these discharge-induced pressure drops can decrease the volume of medication discharged. To prevent these reductions, the volumes discharged must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm for an intrathecal infusion pump developed by the Korean National Cancer Center that regulates single-dose volumes. The proposed algorithm estimates the amount of medication remaining and adjusts control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes with a variation of <3% and estimate the remaining medication volume with an accuracy of >98%. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  20. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  1. Swarm Intelligence for Urban Dynamics Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gerard H. E.

    2009-04-16

    In this paper, we propose swarm intelligence algorithms to deal with dynamical and spatial organization emergence. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with a multi-center and adaptive process. Typically, this model is suitable for analyzing and simulating urban dynamics such as gentrification or the dynamics of cultural facilities in urban areas.

  2. Decision algorithm for data center vortex beam receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2017-12-01

    We present a new scheme for a vortex beam communications system which exploits the radial component p of Laguerre-Gauss modes in addition to the azimuthal component l generally used. We derive a new encoding algorithm which makes use of the spatial distribution of intensity to create an alphabet dictionary for communication. We suggest an application of the scheme as part of an optical wireless link for intra data center communication. We investigate the probability of error in decoding, for several detector options.
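The radial structure that the encoding exploits comes from the Laguerre-Gauss mode profile. Below is a sketch of the unnormalized radial intensity for mode indices (p, l), using the standard three-term recurrence for the generalized Laguerre polynomial; normalization constants and the Gouy/curvature phase factors are omitted, since only the intensity distribution matters for the dictionary idea.

```python
import math

def genlaguerre(p, alpha, x):
    """Generalized Laguerre polynomial L_p^alpha(x) via the three-term
    recurrence (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}."""
    if p == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, p):
        prev, cur = cur, ((2 * k + 1 + alpha - x) * cur - (k + alpha) * prev) / (k + 1)
    return cur

def lg_radial_intensity(p, l, r, w):
    """Unnormalized radial intensity of an LG_{p,l} mode with beam waist w.
    For p = 0, l = 0 this reduces to a plain Gaussian."""
    s = 2 * r * r / (w * w)
    return s ** abs(l) * genlaguerre(p, abs(l), s) ** 2 * math.exp(-s)
```

The p index introduces radial intensity rings (zeros of the Laguerre polynomial), which is what allows the radial component to carry symbols in addition to the azimuthal index l.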

  3. Swarm Intelligence for Urban Dynamics Modelling

    NASA Astrophysics Data System (ADS)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gérard H. E.

    2009-04-01

    In this paper, we propose swarm intelligence algorithms to deal with dynamical and spatial organization emergence. The goal is to model and simulate the developement of spatial centers using multi-criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive process. Typically, this model is suitable to analyse and simulate urban dynamics like gentrification or the dynamics of the cultural equipment in urban area.

  4. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    NASA Astrophysics Data System (ADS)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way of our life. However, the existing layout of distribution centers can't fulfill the storage and picking demands of e-commerce sufficiently. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce in logistics. Meanwhile, a matching problem, concerning with the improvement of picking efficiency in new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, minimizing the transportations of items which can not arrive in the destination picking stations just through direct conveyors. The experimental results based on the cases generated by computers show that the average reduced rate of indirect transport times can reach 14.36% with the application of multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the whole performance of e-commerce distribution center.

  5. Application of the SP algorithm to the INTERMAGNET magnetograms of the disturbed geomagnetic field

    NASA Astrophysics Data System (ADS)

    Sidorov, R. V.; Soloviev, A. A.; Bogoutdinov, Sh. R.

    2012-05-01

    The algorithmic system developed in the Laboratory of Geoinformatics at the Geophysical Center, Russian Academy of Sciences, for recognizing spikes in magnetograms from the global INTERMAGNET network makes it possible to carry out retrospective analysis of magnetograms from the World Data Centers. Applying this system to magnetogram analysis automates the work of expert interpreters in identifying artificial spikes in INTERMAGNET data. The present paper focuses on the SP algorithm (short for SPIKE), which recognizes artificial spikes in records of the geomagnetic field. Initially, the algorithm was trained on magnetograms of 2007 and 2008, which recorded the quiet geomagnetic field. The training and testing results showed that the algorithm is quite efficient. Applying the method to spike recognition during periods of enhanced geomagnetic activity is a separate task. In this short communication, we present the results of applying the SP algorithm, trained on the data of 2007, to one-minute INTERMAGNET magnetograms for 2003 and 2005. The analysis shows that the SP algorithm performs no worse when applied to records of a disturbed geomagnetic field.
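The SP algorithm itself is not reproduced here. As a generic illustration of spike recognition on a magnetogram-like record, the sketch below flags samples that deviate from a running median by more than a robust (MAD-based) threshold; the window size and threshold factor are assumptions, not the trained parameters of SP.

```python
import math
import statistics

def flag_spikes(x, window=5, k=6.0):
    """Flag indices whose value deviates from the median of its neighbors
    by more than k robust sigmas (1.4826 * MAD). Generic detector only."""
    half = window // 2
    spikes = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        neigh = x[lo:i] + x[i + 1:hi]          # neighbors, excluding x[i]
        med = statistics.median(neigh)
        mad = statistics.median([abs(v - med) for v in neigh]) or 1e-9
        if abs(x[i] - med) > k * 1.4826 * mad:
            spikes.append(i)
    return spikes

# usage: a smooth record with one artificial spike injected
data = [math.sin(0.1 * i) for i in range(100)]
data[50] += 5.0
found = flag_spikes(data)
```

A median/MAD detector is robust to the spike contaminating its own neighborhood, which is the property any such screening step needs on disturbed-field records.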

  6. A system for automatic aorta sections measurements on chest CT

    NASA Astrophysics Data System (ADS)

    Pfeffer, Yitzchak; Mayer, Arnaldo; Zholkover, Adi; Konen, Eli

    2016-03-01

    A new method is proposed for caliber measurement of the ascending aorta (AA) and descending aorta (DA). A key component of the method is the automatic detection of the carina, as an anatomical landmark around which an axial volume of interest (VOI) can be defined to observe the aortic caliber. For each slice in the VOI, a linear profile line connecting the AA with the DA is found by pattern matching on the underlying intensity profile. Next, the aortic center position is found using a Hough transform on the best linear segment candidate. Finally, region growing around the center provides an accurate segmentation and caliber measurement. We evaluated the algorithm on 113 sequential chest CT scans, with slice thicknesses of 0.75-3.75 mm, 90 of them with injected contrast agent. The algorithm success rate was computed as the percentage of scans in which the center of the AA was found. Automated measurements of AA caliber were compared with independent measurements by two experienced chest radiologists, comparing the absolute difference between the two radiologists with the absolute difference between the algorithm and each radiologist. Measurement stability was demonstrated by computing the STD of the absolute difference between the radiologists, and between the algorithm and the radiologists. Results: success rates of 93% and 74% were achieved for contrast-injected and non-contrast cases, respectively. These results indicate that the algorithm can be robust to the large variability in image quality encountered in a real-world clinical setting. The average absolute difference between the algorithm and the radiologists was 1.85 mm, lower than the average absolute difference between the radiologists, which was 2.1 mm. The STD of the absolute difference between the algorithm and the radiologists was 1.5 mm vs. 1.6 mm between the two radiologists. These results demonstrate the clinical relevance of the algorithm's measurements.

  7. Decision Aids for Naval Air ASW

    DTIC Science & Technology

    1980-03-15

    Algorithm for Zone Optimization Investigation (AZOI), NADC; developing sonobuoy patterns for air ASW search; DAISY (Decision Aiding Information System), Wharton ... decision making behavior. Artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions. Display presenting the uncertainty area of the target. 3.1.5 Algorithm for Zone Optimization Investigation (AZOI) -- Naval Air Development Center.

  8. An extended affinity propagation clustering method based on different data density types.

    PubMed

    Zhao, XiuLi; Xu, WeiXiang

    2015-01-01

    The affinity propagation (AP) algorithm, a novel clustering method, does not require users to specify initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and groups clusters entirely by the degree of similarity among the data points. In many cases, however, a data set contains areas of different density, meaning the data are not distributed homogeneously, and in such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. Our method has two steps: first, the data set is partitioned into several data density types according to the nearest-neighbor distance of each data point; then the AP clustering method is applied separately to group the data points into clusters within each density type. Two experiments were carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The results show that our algorithm obtains groups more accurately than OPTICS and the AP clustering algorithm itself.
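The first step of the extended method can be sketched directly: partition the data set into density types by nearest-neighbor distance, then run AP within each group. The equal-quantile split below is an assumption for illustration; the paper's exact partition rule may differ.

```python
import math

def density_types(points, n_types=2):
    """Partition point indices into density types by nearest-neighbor
    distance (smaller NN distance = denser region). Sketch of the
    pre-step proposed before running AP within each type."""
    def nn_dist(i):
        return min(math.dist(points[i], points[j])
                   for j in range(len(points)) if j != i)
    d = [nn_dist(i) for i in range(len(points))]
    order = sorted(range(len(points)), key=lambda i: d[i])
    size = math.ceil(len(points) / n_types)
    groups = [[] for _ in range(n_types)]
    for rank, i in enumerate(order):
        groups[rank // size].append(i)     # equal-quantile split (assumed)
    return groups

# usage: a dense triple (spacing 0.1) and a sparse triple (spacing 5)
points = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0),
          (10.0, 0.0), (15.0, 0.0), (20.0, 0.0)]
groups = density_types(points, n_types=2)
```

Each resulting group is then clustered independently, so a single AP preference value no longer has to compromise between the dense and sparse regions.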

  9. [Automatic Sleep Stage Classification Based on an Improved K-means Clustering Algorithm].

    PubMed

    Xiao, Shuyuan; Wang, Bei; Zhang, Jian; Zhang, Qunfeng; Zou, Junzhong

    2016-10-01

    Sleep stage scoring is a hotspot in the fields of medicine and neuroscience. Visual inspection of sleep is laborious, and the results may vary between clinicians. An automatic sleep stage classification algorithm can reduce this manual workload, but limitations remain when it encounters complicated and changeable clinical cases. The purpose of this paper is to develop an automatic sleep staging algorithm based on the characteristics of actual sleep data. In the proposed improved K-means clustering algorithm, the initial centers are selected using a concept of density, avoiding the randomness of the original K-means algorithm. Meanwhile, the cluster centers are updated according to the 'Three-Sigma Rule' during iteration to abate the influence of outliers. The proposed method was tested and analyzed on overnight sleep data from healthy persons and from patients with sleep disorders after continuous positive airway pressure (CPAP) treatment. The automatic sleep stage classification results were compared with visual inspection by qualified clinicians, and the average accuracy reached 76%. Analysis of the morphological diversity of the sleep data showed that the proposed improved K-means algorithm is feasible and valid for clinical practice.
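Two ingredients of the improved K-means scheme can be sketched in isolation: density-based selection of initial centers, and a center update that excludes outliers by the three-sigma rule. The density radius and the one-dimensional update below are illustrative simplifications of whatever the paper uses on real sleep features.

```python
import math
import statistics

def density_seeds(points, k, radius):
    """Pick k initial centers as high-density points (neighbor count within
    `radius`), spaced at least `radius` apart, instead of random seeding."""
    dens = [sum(1 for q in points if math.dist(p, q) <= radius) for p in points]
    ranked = sorted(range(len(points)), key=lambda i: -dens[i])
    seeds = []
    for i in ranked:
        if all(math.dist(points[i], points[s]) > radius for s in seeds):
            seeds.append(i)
        if len(seeds) == k:
            break
    return [points[i] for i in seeds]

def robust_mean(values):
    """Cluster-center update: mean after excluding samples outside
    mean +/- 3 sigma (the 'Three-Sigma Rule')."""
    if len(values) < 2:
        return values[0]
    m, s = statistics.fmean(values), statistics.pstdev(values)
    kept = [v for v in values if abs(v - m) <= 3 * s] or values
    return statistics.fmean(kept)

# usage: two well-separated blobs; seeds land one per blob
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers = density_seeds(pts, k=2, radius=1.0)
```

Within a K-means iteration, `robust_mean` replaces the plain mean so a single aberrant epoch cannot drag a stage center toward it.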

  10. Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode

    PubMed Central

    He, Shiwei; Song, Rui

    2016-01-01

    Service route optimization (SRO) for a pallet service center must first meet customers' demands and then, through reasonable route organization, minimize total vehicle travel distance. The route optimization of a pallet service center resembles the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Building on relevant research results, constraints on the number of vehicles, one-way routes, loading, and time windows are fully considered, and a chance-constrained programming model with stochastic constraints is constructed, taking the shortest path of all vehicles for a delivering (recycling) operation as the objective. Given the characteristics of the model, a hybrid intelligent algorithm combining stochastic simulation, a neural network, and an immune clonal algorithm is designed to solve it. Finally, the validity and rationality of the optimization model and algorithm are verified by a case study. PMID:27528865

  11. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve low makespan and high throughput in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for the work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the travel distance of AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods from the literature.

  12. Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Hawkins, Albin; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    NASA's first autonomous formation flying mission completed its primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center (GSFC) implemented a universal 3-axis formation flying algorithm in an autonomous executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard flight design and presents the validation results of this unique system. Results from functionality assessment through fully autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a standalone algorithm.

  13. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national, and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two parts, for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze the performance of the model and algorithm. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.

  14. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2018-07-01

    Nowadays, organizations have to compete with different competitors at regional, national, and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two parts, for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze the performance of the model and algorithm. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.

  15. [Design of longitudinal auto-tracking of the detector on X-ray in digital radiography].

    PubMed

    Yu, Xiaomin; Jiang, Tianhao; Liu, Zhihong; Zhao, Xu

    2018-04-01

    An algorithm is designed to implement longitudinal auto-tracking of the detector on the X-ray in a digital radiography (DR) system with a manual collimator. In this study, when the longitudinal length of the field of view (LFOV) on the detector coincides with the longitudinal effective imaging size of the detector, the collimator half open angle (Ψ), the maximum centric distance (e_max) between the center of the X-ray field of view and the projection center of the focal spot, and the detector moving distance for auto-tracking can be calculated automatically. When the LFOV is made smaller than the longitudinal effective imaging size of the detector by reducing Ψ, e_max can still be used to calculate the detector moving distance. Using this auto-tracking algorithm in a DR system with a manual collimator, test results show that the X-ray projection is totally covered by the effective imaging area of the detector, even though the center of the field of view is not aligned with the center of the effective imaging area. As a simple and low-cost design, the algorithm can be used for longitudinal auto-tracking of the detector on the X-ray in manual-collimator DR systems.
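    The abstract does not give formulas, but the quantities involved can be sketched under an assumed point-source geometry (my assumption, not the paper's derivation): a focal spot at source-to-detector distance SID projects a longitudinal field of length 2·SID·tan(Ψ), and the field center may sit up to e_max off the detector center before coverage is lost.

```python
import math

# Assumed geometry (not taken from the paper): a point focal spot at
# source-to-detector distance SID with collimator half open angle psi gives a
# longitudinal field length on the detector plane of 2 * SID * tan(psi).
def field_length(sid_mm, psi_rad):
    return 2.0 * sid_mm * math.tan(psi_rad)

def max_center_offset(l_det_mm, l_fov_mm):
    """Largest field-center offset (an e_max analog) at which the field still
    fits inside the detector's effective imaging length."""
    return max(0.0, (l_det_mm - l_fov_mm) / 2.0)

def tracking_move(e_mm, e_max_mm):
    """Detector move needed so the field is covered; 0 if already within tolerance."""
    return 0.0 if abs(e_mm) <= e_max_mm else e_mm - math.copysign(e_max_mm, e_mm)
```

    The sketch shows why reducing Ψ (shrinking the field) enlarges the usable offset and hence relaxes the required tracking move.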

  16. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in our country's radiotherapy centers, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TECDOC-1583). A thorax anthropomorphic phantom (CIRS 002LFC) was used to perform 7 tests that simulate the whole chain of external beam TPS use. Doses were measured with ion chambers and the deviations between measured and TPS-calculated doses were reported. This methodology, which employs the same phantom and the same set-up test cases, was applied in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with the more advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen with some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. The use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  17. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    PubMed

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on forms CMS-2728. We hypothesized that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is therefore more reflective of underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical record systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms reported more comorbidities than forms completed by physicians. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, algorithm use significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
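    The per-comorbidity comparison described above amounts to tallying, for each condition, how often each source reports it when the other does not. A minimal sketch (hypothetical data structures, not the study's actual pipeline):

```python
# Hypothetical comparison: for each patient, the CMS-2728 form gives a set of
# checked comorbidities and the computer algorithms give a second set; we count
# how often each source reports a condition the other missed, and how often
# they agree.
def compare_sources(form_checked, algorithm_flagged, conditions):
    counts = {c: {"form_only": 0, "algo_only": 0, "both": 0} for c in conditions}
    for form, algo in zip(form_checked, algorithm_flagged):
        for c in conditions:
            if c in form and c in algo:
                counts[c]["both"] += 1
            elif c in form:
                counts[c]["form_only"] += 1
            elif c in algo:
                counts[c]["algo_only"] += 1
    return counts
```

    A consistently larger "algo_only" column than "form_only" column is the pattern the study reports as under-documentation on the forms.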

  18. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    PubMed

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms.
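    The core CSO mechanism, in contrast to PSO's pbest/gbest updates, is a pairwise competition in which only the loser of each pair learns, from the winner and the swarm mean. A minimal sketch on a toy 1-D minimization problem (the paper's adaptive opposition-based extensions are omitted, and the parameter phi is a placeholder):

```python
import random

# Minimal competitive swarm optimizer (CSO) iteration: particles are randomly
# paired, each pair's loser updates its velocity toward the winner and the
# swarm mean, and the winner survives unchanged.
def cso_step(positions, velocities, fitness, phi=0.1, rng=random):
    n = len(positions)
    order = list(range(n))
    rng.shuffle(order)
    mean = sum(positions) / n
    for i, j in zip(order[::2], order[1::2]):
        win, lose = (i, j) if fitness(positions[i]) <= fitness(positions[j]) else (j, i)
        r1, r2, r3 = rng.random(), rng.random(), rng.random()
        velocities[lose] = (r1 * velocities[lose]
                            + r2 * (positions[win] - positions[lose])
                            + phi * r3 * (mean - positions[lose]))
        positions[lose] += velocities[lose]
    return positions, velocities
```

    Because winners are never moved, the swarm's best solution is never lost, while the loser updates keep exploration alive without a gbest attractor to collapse onto.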

  19. Sustainable IT and IT for Sustainability

    NASA Astrophysics Data System (ADS)

    Liu, Zhenhua

    Energy and sustainability are among the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information. The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is among the fastest growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but the efficiency improvements do not necessarily lead to reductions in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is however very challenging because of limited information, non-smooth objective functions and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. 
    Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center. The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive, and the potential of such an approach is huge. To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and society to improve social welfare.
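    The "follow the renewables" idea can be illustrated with a toy allocation rule (my illustration only; the thesis develops distributed online algorithms with provable guarantees, which this greedy sketch does not capture): each time slot, route load to data centers in proportion to available renewable supply, drawing on grid power only for the shortfall.

```python
# Toy "follow the renewables" routing: split the request load across
# geographically diverse data centers in proportion to renewable supply,
# falling back to evenly split grid power only for the remainder.
def route_load(load, renewable_by_dc):
    total = sum(renewable_by_dc.values())
    if total == 0:
        n = len(renewable_by_dc)
        return {dc: load / n for dc in renewable_by_dc}
    if total >= load:
        return {dc: load * r / total for dc, r in renewable_by_dc.items()}
    # Renewables are insufficient: use them all, split the grid-powered
    # remainder evenly across sites.
    short = (load - total) / len(renewable_by_dc)
    return {dc: r + short for dc, r in renewable_by_dc.items()}
```

    Even this crude rule shows the key property: as solar or wind output shifts between regions, the request routing shifts with it, without any site exceeding the offered load.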

  20. Solvent-assisted multistage nonequilibrium electron transfer in rigid supramolecular systems: Diabatic free energy surfaces and algorithms for numerical simulations

    NASA Astrophysics Data System (ADS)

    Feskov, Serguei V.; Ivanov, Anatoly I.

    2018-03-01

    An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. Dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of the molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, relation between the solvent reorganization energy and the distance between the two FES minima. In this space, the ET reaction coordinate zn n' associated with electron transfer between the nth and n'th centers is calculated through the projection to the direction, connecting the FES minima. The energy-gap reaction coordinates zn n' corresponding to different ET processes are not in general orthogonal so that ET between two molecular centers can create nonequilibrium distribution, not only along its own reaction coordinate but along other reaction coordinates too. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET. It is important for the ensuing reaction to be ultrafast to proceed in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with the recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories. 
The efficiency of the proposed approach is demonstrated for a model supramolecular system involving four redox centers.

  1. Development a heuristic method to locate and allocate the medical centers to minimize the earthquake relief operation time.

    PubMed

    Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan

    2013-01-01

    Location-allocation is a combinatorial optimization problem and is classified as non-deterministic polynomial-time hard (NP-hard). Therefore, the solution of such a problem should shift from exact to heuristic or meta-heuristic approaches due to its complexity. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so developing a proper method will reduce the time of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of a Geographic Information System (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm optimizes the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.
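    The nested structure can be sketched as follows. This is a structural illustration only: the outer loop is a plain GA over which candidate sites open, and the paper's inner GA is replaced here by a greedy capacitated allocation to keep the example short.

```python
import random

# Outer search: GA over a bitmask of open candidate sites.
# Inner search: greedy stand-in for the inner GA (send each demand to the
# nearest open site that still has capacity).
def allocate(demands, sites, capacity, dist):
    load = {s: 0 for s in sites}
    cost = 0.0
    for d, qty in demands:
        for s in sorted(sites, key=lambda s: dist(d, s)):
            if load[s] + qty <= capacity:
                load[s] += qty
                cost += dist(d, s) * qty
                break
        else:
            return float("inf")  # infeasible assignment
    return cost

def outer_ga(candidates, demands, capacity, dist, pop=12, gens=30, rng=random):
    def fitness(mask):
        sites = [c for c, bit in zip(candidates, mask) if bit]
        return allocate(demands, sites, capacity, dist) if sites else float("inf")
    population = [[rng.randint(0, 1) for _ in candidates] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] ^= 1  # bit-flip mutation
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```

    The point of the nesting is that the outer fitness of a candidate location set is defined by the best allocation the inner search can find for it.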

  2. Use of a quality improvement tool, the prioritization matrix, to identify and prioritize triage software algorithm enhancement.

    PubMed

    North, Frederick; Varkey, Prathiba; Caraballo, Pedro; Vsetecka, Darlene; Bartel, Greg

    2007-10-11

    Complex decision support software can require significant effort in maintenance and enhancement. A quality improvement tool, the prioritization matrix, was successfully used to guide software enhancement of algorithms in a symptom assessment call center.

  3. Application of ant colony algorithm in path planning of the data center room robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Ma, Jianming; Wang, Ying

    2017-05-01

    Taking the Internet Data Center (IDC) room patrol robot as the background, this work addresses the robot's autonomous obstacle avoidance and path planning along its search path, for patrol missions worked out in advance. Simulation results show that the improved ant colony algorithm, applied to obstacle avoidance planning for the IDC room patrol robot, guides the robot along an optimal or suboptimal, safe obstacle-avoiding path to the target point to complete its task, demonstrating the feasibility of the method.
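    The core ant colony rules behind such path planners can be sketched generically (this is standard ACO on a grid, not the paper's specific improvements): ants pick the next cell with probability proportional to pheromone^alpha times a distance-to-goal heuristic^beta, and pheromone evaporates before the best path deposits onto its edges.

```python
import random

# Generic ACO building blocks for grid path planning.
def choose_next(current, neighbors, pheromone, goal, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection over free neighbor cells."""
    def heuristic(cell):
        # Inverse Manhattan distance to the goal.
        return 1.0 / (1e-9 + abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))
    weights = [(pheromone.get((current, n), 1.0) ** alpha) * (heuristic(n) ** beta)
               for n in neighbors]
    r = rng.random() * sum(weights)
    for n, w in zip(neighbors, weights):
        r -= w
        if r <= 0:
            return n
    return neighbors[-1]

def update_pheromone(pheromone, best_path, rho=0.5, deposit=1.0):
    for edge in list(pheromone):
        pheromone[edge] *= (1.0 - rho)          # evaporation
    for a, b in zip(best_path, best_path[1:]):
        pheromone[(a, b)] = pheromone.get((a, b), 1.0) + deposit / len(best_path)
```

    Repeating choose/evaluate/update over many ants concentrates pheromone on short, obstacle-free routes, which is the mechanism the patrol robot exploits.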

  4. Observations of Ion Diffusion Regions in the Geomagnetic Tail

    NASA Astrophysics Data System (ADS)

    Rogers, A. J.; Farrugia, C. J.; Torbert, R. B.; Argall, M. R.; Strangeway, R. J.; Ergun, R.

    2017-12-01

    We present an analysis of two Ion Diffusion Regions (IDRs) in the geomagnetic tail, as observed by the Magnetospheric Multiscale (MMS) mission. Analysis of each event is centered around a discussion of parameters commonly associated with IDRs, such as enhanced electric field magnitude, the guiding center expansion parameter, and ion velocity. Characteristic values for these parameters, as well as other common attributes of IDRs, are determined and used to develop a search algorithm that automates the identification of possible IDRs for closer inspection. Preliminary results of applying this algorithm to in situ MMS observations are also presented.
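    A search algorithm of this kind reduces to scanning the time series for intervals where several IDR-associated quantities simultaneously exceed their characteristic values. A minimal sketch (the thresholds and field names here are placeholders, not the mission's calibrated criteria):

```python
# Hypothetical threshold scan: flag time steps where the electric field
# magnitude (mV/m) and ion speed (km/s) both exceed characteristic values.
def flag_idr_candidates(samples, e_field_min=10.0, ion_speed_min=400.0):
    """samples: list of dicts with keys "E" and "v_ion" per time step."""
    flags = []
    for i, s in enumerate(samples):
        if s["E"] >= e_field_min and s["v_ion"] >= ion_speed_min:
            flags.append(i)
    return flags
```

    Flagged intervals are then handed to a human for the closer inspection the abstract describes, so false positives are cheap and the thresholds can be set permissively.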

  5. The evaluation of the individual impact factor of researchers and research centers using the RC algorithm.

    PubMed

    Cordero-Villafáfila, Amelia; Ramos-Brieva, Jesus A

    2015-01-01

    The RC algorithm quantitatively evaluates the personal impact factor of the scientific production of isolated researchers. The authors propose an adaptation of RC to evaluate the personal impact factor of research centers, hospitals and other research groups. Thus, these could be classified according to the accredited impact of the results of their scientific work between researchers of the same scientific area. This could be useful for channelling budgets and grants for research. Copyright © 2013 SEP y SEPB. Published by Elsevier España. All rights reserved.

  6. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1988-01-01

    Expert systems that require access to databases, complex simulations, and real-time instrumentation have both symbolic and algorithmic needs. Both of these needs could be met using a general-purpose workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system developed by the NASA Ames Research Center in conjunction with the Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation and will be linked to a microVAX computer that will control a thermal test bed. The integration options and several possible solutions are presented.

  7. Preliminary Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)

    NASA Technical Reports Server (NTRS)

    Folta, David; Hawkins, Albin

    2001-01-01

    NASA's first autonomous formation flying mission is completing a primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center has implemented an autonomous universal three-axis formation flying algorithm in executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard design and presents the preliminary validation results of this unique system. Results from functionality assessment and autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a stand-alone algorithm.

  8. The center for causal discovery of biomedical knowledge from big data

    PubMed Central

    Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard

    2015-01-01

    The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. PMID:26138794

  9. An Approach to Data Center-Based KDD of Remote Sensing Datasets

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Mack, Robert; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The data explosion in remote sensing is straining the ability of data centers to deliver the data to the user community, yet many large-volume users actually seek a relatively small information component within the data, which they extract at their sites using Knowledge Discovery in Databases (KDD) techniques. To improve the efficiency of this process, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has implemented a KDD subsystem that supports execution of the user's KDD algorithm at the data center, dramatically reducing the volume that is sent to the user. The data are extracted from the archive in a planned, organized "campaign"; the algorithms are executed, and the output products sent to the users over the network. The first campaign, now complete, has resulted in overall reductions in shipped volume from 3.3 TB to 0.4 TB.

  10. Using ant colony optimization on the quadratic assignment problem to achieve low energy cost in geo-distributed data centers

    NASA Astrophysics Data System (ADS)

    Osei, Richard

    There are many problems associated with operating a data center, including data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location and at most locations fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer-lasting servers/equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages. This thesis presents another method by which energy cost at data centers could be reduced: the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this approach, front portals, which handle users' requests, are used as ants to find cost-effective ways to assign user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance increases as the input data grows. 
    In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm was able to reduce energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement for reducing energy cost in geo-distributed data centers.
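    The quantity the ants minimize in such a formulation can be sketched as an assignment cost (an assumed form: per-site energy price times the load placed there, plus a portal-to-site transfer cost; the thesis's exact cost model is not given here). A greedy per-request baseline gives something to compare an ACO result against.

```python
# Sketch of an energy-cost objective for assigning user requests to
# geo-distributed data centers (assumed cost model, for illustration).
def assignment_cost(assignment, load, price, transfer):
    """assignment[i] = data center index chosen for request i."""
    cost = 0.0
    for req, dc in enumerate(assignment):
        cost += price[dc] * load[req] + transfer[req][dc]
    return cost

def greedy_baseline(load, price, transfer):
    """Cheapest-per-request baseline an ACO solution can be compared against."""
    return [min(range(len(price)), key=lambda dc: price[dc] * load[i] + transfer[i][dc])
            for i in range(len(load))]
```

    The QAP character enters once assignments interact (e.g., through server activation and capacity), which is what makes the problem hard and motivates the ACO search over whole assignments rather than per-request greedy choices.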

  11. MPLNET V3 Cloud and Planetary Boundary Layer Detection

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.

    2016-01-01

    The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described, and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD, covering diurnal and seasonal trends, is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement over the previous version.
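    As an illustration of the kind of detection involved, boundary layer height is often taken from the strongest negative vertical gradient of the lidar backscatter profile. This generic gradient method is a sketch only; the MPLNET V3 algorithm itself is more elaborate.

```python
# Generic gradient method for boundary layer height from a lidar backscatter
# profile: the PBL top is taken where backscatter drops fastest with altitude.
def pbl_height(altitudes_km, backscatter):
    """Return the altitude of the strongest negative vertical gradient."""
    best_i, best_grad = 0, 0.0
    for i in range(len(backscatter) - 1):
        grad = (backscatter[i + 1] - backscatter[i]) / (altitudes_km[i + 1] - altitudes_km[i])
        if grad < best_grad:
            best_i, best_grad = i, grad
    return altitudes_km[best_i]
```

    Real algorithms add cloud screening, temporal continuity, and diurnal constraints on top of this basic gradient criterion, which is where the V2-to-V3 improvements lie.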

  12. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.

    PubMed

    Schäfer, Dirk; Grass, Michael; van de Haar, Peter

    2011-07-01

    Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared to the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. Image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) relative to the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. 
The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.

  13. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    PubMed Central

    Huang, Shuqiang; Tao, Ming

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms. PMID:28117735

  14. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  15. Imaging of downward-looking linear array SAR using three-dimensional spatial smoothing MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Siqian; Kuang, Gangyao

    2014-10-01

    In this paper, a novel three-dimensional imaging algorithm for downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm is used. However, since the scattering centers are always correlated in real SAR systems, the estimated covariance matrix becomes singular. To address this problem, a three-dimensional spatial smoothing method is proposed to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces, and the main idea of the method is to form the array correlation matrix as the average of the correlation matrices of all the subspaces. In addition, because the spectral height of the MUSIC peaks carries no information about the scattering intensity of the different scattering centers, it is difficult to reconstruct the backscattering information; a least-squares strategy is therefore used to estimate the amplitude of each scattering center. The theoretical analysis is verified by 3-D scene simulations and experiments on real data.
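The rank-restoring idea can be illustrated in one dimension: averaging the covariances of overlapping subarrays makes fully coherent sources, which otherwise yield a rank-one covariance, resolvable again. A plain-NumPy sketch (not the paper's 3-D implementation):

```python
import numpy as np

def spatial_smoothing(R, m):
    """Forward spatial smoothing: average the covariances of all
    overlapping size-m subarrays of an n-element array, restoring
    a coherent-source covariance toward full rank."""
    n = R.shape[0]
    k = n - m + 1                      # number of overlapping subarrays
    Rs = np.zeros((m, m), dtype=complex)
    for i in range(k):
        Rs += R[i:i + m, i:i + m]
    return Rs / k
```

With two fully coherent sources on an 8-element array, the raw single-snapshot covariance has rank 1; after smoothing with m = 5 subarrays the rank rises to 2, as MUSIC requires.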

  16. Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1977-01-01

    An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.

  17. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions can be poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
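A minimal sketch of the incremental global k-means search described above, in plain NumPy; the function names and the small Lloyd's-iteration helper are illustrative, not taken from the paper:

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """Plain Lloyd's iterations from the given centers; returns the
    final centers and the clustering error (sum of squared distances)."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, ((X - centers[labels]) ** 2).sum()

def global_kmeans(X, K):
    """Add one center at a time: for each k, try every data point as the
    new center, run k-means, and keep the configuration with least error."""
    centers = X.mean(axis=0, keepdims=True)        # k = 1: the centroid
    for _ in range(2, K + 1):
        best = None
        for x in X:                                # deterministic global search
            c, err = kmeans(X, np.vstack([centers, x]))
            if best is None or err < best[1]:
                best = (c, err)
        centers = best[0]
    return centers
```

On two well-separated Gaussian blobs, this recovers the two blob means regardless of ordering.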

  18. A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.

    2002-01-01

    This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF) with a series of pilot controlled maneuvers. 
A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
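For illustration only: the steady-state solution of the matrix Riccati equation that the paper must solve in real time can be approximated offline by integrating the Riccati differential equation as a flow. This sketch shows the underlying equation, not the paper's neurocomputing solver:

```python
import numpy as np

def riccati_flow(A, B, Q, R, dt=0.005, steps=4000):
    """Euler integration of dP/dt = A'P + PA - P B R^-1 B' P + Q from
    P = 0; under standard stabilizability/detectability conditions the
    flow settles at the stabilizing solution of the algebraic Riccati
    equation."""
    P = np.zeros_like(A, dtype=float)
    Rinv = np.linalg.inv(R)
    for _ in range(steps):
        P = P + dt * (A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
    return P
```

For the double integrator A = [[0,1],[0,0]], B = [[0],[1]] with Q = I, R = [1], the flow converges to the known analytic solution P = [[√3, 1], [1, √3]].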

  19. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, an effective motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method when the specified tagging parameters are unknown, on account of two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of the CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
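The mean-shift step that RACE relies on is easy to sketch in one dimension: the center-frequency estimate is repeatedly moved to the power-weighted mean of the spectrum inside a window. The windowing details and parameter names below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def mean_shift_peak(freqs, power, start, bandwidth, iters=50, tol=1e-6):
    """Locate a spectral peak by 1-D mean shift: move the estimate to the
    power-weighted mean of the frequencies within the current window
    until it stops moving."""
    f = start
    for _ in range(iters):
        mask = np.abs(freqs - f) <= bandwidth
        if not mask.any():
            break
        new = np.average(freqs[mask], weights=power[mask])
        if abs(new - f) < tol:
            break
        f = new
    return f
```

Started anywhere on the flank of a spectral bump, the estimate climbs to the bump's center.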

  20. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm, and also included the effects of transport delays and their compensation. The delay compensation algorithm employed was developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the data-analysis methodology in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each maneuver, the two motion conditions were combined with four delay conditions (0, 50, 100 and 200 ms), with and without compensation.

  1. Methods for mapping and monitoring global glaciovolcanism

    NASA Astrophysics Data System (ADS)

    Curtis, Aaron; Kyle, Philip

    2017-03-01

    The most deadly (Nevado del Ruiz, 1985) and the most costly (Eyjafjallajökull, 2010) eruptions of the last 100 years were both glaciovolcanic. Considering its great importance to studies of volcanic hazards, global climate, and even astrobiology, the global distribution of glaciovolcanism is insufficiently understood. We present and assess three algorithms for mapping, monitoring, and predicting likely centers of glaciovolcanic activity worldwide. Each algorithm intersects buffer zones representing known Holocene-active volcanic centers with existing datasets of snow, ice, and permafrost. Two detection algorithms, RGGA and PZGA, are simple spatial join operations computed from the Randolph Glacier Inventory and the Permafrost Zonation Index, respectively. The third, MDGA, is an algorithm run on all 15 available years of the MOD10A2 weekly snow cover product from the Terra MODIS satellite radiometer. Shortcomings and advantages of the three methods are discussed, including previously unreported blunders in the MOD10A2 dataset. Comparison of the results leads to an effective approach for integrating the three methods. We show that 20.4% of known Holocene volcanic centers host glaciers or areas of permanent snow. A further 10.9% potentially interact with permafrost. MDGA and PZGA do not rely on any human input, rendering them useful for investigations of change over time. An intermediate step in MDGA involves estimating the snow-covered area at every Holocene volcanic center. These estimations can be updated weekly with no human intervention. To investigate the feasibility of an automatic ice-loss alert system, we consider three examples of glaciovolcanism in the MDGA weekly dataset. We also discuss the potential use of PZGA to model past and future glaciovolcanism based on global circulation model outputs. 
Combined, the three algorithms provide an automated system for understanding the geographic and temporal patterns of global glaciovolcanism which should be of use for hazard assessment, the search for extreme microbiomes, climate models, and implementation of ice-cover-based volcano monitoring systems.
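A toy version of the buffer-intersection (spatial join) step shared by the three algorithms, assuming point representations of volcanic centers and ice cover and a hypothetical buffer radius; the real algorithms operate on polygon datasets:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def glaciated_centers(volcanoes, ice_points, buffer_km=5.0):
    """Spatial join: volcano centers with any ice/snow point inside the buffer."""
    return [name for name, (vlat, vlon) in volcanoes.items()
            if any(haversine_km(vlat, vlon, ilat, ilon) <= buffer_km
                   for ilat, ilon in ice_points)]
```

Dividing the number of joined centers by the total then gives a percentage like the 20.4% figure quoted above.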

  2. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    PubMed

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem of traditional K-means clustering, in which the initial clustering centers are selected randomly, we proposed a new K-means segmentation algorithm based on robustly selecting the 'peaks' standing for White Matter, Gray Matter and Cerebrospinal Fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm overcomes shortcomings of the traditional K-means method such as low efficiency, poor accuracy, weak robustness and long running time. The histogram-peak selection idea of the proposed segmentation method is also more widely applicable.
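A rough sketch of the histogram-peak initialization idea, assuming 8-bit gray levels, a simple box-filter smoothing, and a greedy minimum-separation rule for picking the three peaks; these are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def histogram_peak_centers(image, n_peaks=3, smooth=5):
    """Initial 1-D k-means centers from the largest local maxima of the
    smoothed gray-level histogram; peaks closer than 2*smooth bins to an
    already-chosen peak are skipped so one tissue mode is not picked twice."""
    hist, edges = np.histogram(np.ravel(image), bins=256, range=(0, 256))
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    peaks = [i for i in range(1, 255) if h[i] > h[i - 1] and h[i] >= h[i + 1]]
    peaks.sort(key=lambda i: h[i], reverse=True)
    chosen = []
    for i in peaks:
        if all(abs(i - j) > 2 * smooth for j in chosen):
            chosen.append(i)
        if len(chosen) == n_peaks:
            break
    return sorted(float(edges[i]) for i in chosen)
```

On a synthetic trimodal intensity distribution the returned centers land on the three modes, which is exactly the deterministic initialization the abstract argues for.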

  3. Single-scale center-surround Retinex based restoration of low-illumination images with edge enhancement

    NASA Astrophysics Data System (ADS)

    Kwok, Ngaiming; Shi, Haiyan; Peng, Yeping; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur

    2018-04-01

    Restoring images captured under low illumination is an essential front-end process for most image-based applications. The center-surround Retinex algorithm is a popular approach for improving image brightness; however, in its basic form it is known to produce color degradation. To mitigate this problem, the Single-Scale Retinex algorithm is here modified into an edge extractor, while illumination is recovered through a non-linear intensity mapping stage. The derived edges are then integrated with the mapped image to produce the enhanced output. Furthermore, to reduce color distortion, the process is conducted in the magnitude-sorted domain instead of the conventional Red-Green-Blue (RGB) color channels. Experimental results have shown improvements in mean brightness, colorfulness, saturation, and information content.
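The classic center-surround Single-Scale Retinex that the paper builds on can be sketched as the difference between the log image and the log of its Gaussian surround. This plain-NumPy sketch does not reproduce the paper's edge-extraction modification or intensity mapping:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding (the 'surround')."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def single_scale_retinex(img, sigma=30.0):
    """Center-surround SSR: log(I) - log(Gaussian surround of I)."""
    img = img.astype(float) + 1.0          # avoid log(0)
    return np.log(img) - np.log(gaussian_blur(img, sigma))
```

A uniformly lit image yields a zero Retinex response, which is why the raw SSR output discards global illumination and doubles as an edge/detail signal.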

  4. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Li, Weizhong

    2018-02-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  5. A Strategic Approach to Joint Officer Management: Analysis and Modeling Results

    DTIC Science & Technology

    2009-01-01

    rules. 5 Johnson and Wichern, 2002, p. 643. 6 Sullivan and Perry, 2004, p. 370. 7 Francesco Mola and Raffaele Miele, “Evolutionary Algorithms for...in Military Affairs, Newport, R.I.: Center for Naval Warfare Studies, 2003. Mola , Francesco, and Raffaele Miele, “Evolutionary Algorithms for

  6. NASA Acting Deputy Chief Technologist Briefed on Operation of Sonic Boom Prediction Algorithms

    NASA Image and Video Library

    2017-08-29

    NASA Acting Deputy Chief Technologist Vicki Crips being briefed by Tim Cox, Controls Engineer at NASA’s Armstrong Flight Research Center at Edwards, California, on the operation of the sonic boom prediction algorithms being used in engineering simulation for the NASA Supersonic Quest program.

  7. PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-05-01

    PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.
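The equivalent width PACCE computes follows the standard definition W = ∫ (1 − F/Fc) dλ, with F the observed flux and Fc the continuum. A minimal numerical version (not PACCE's Perl code):

```python
import numpy as np

def equivalent_width(wave, flux, continuum):
    """W = integral of (1 - F/Fc) over wavelength, via the trapezoidal
    rule; positive for absorption lines."""
    y = 1.0 - np.asarray(flux, dtype=float) / np.asarray(continuum, dtype=float)
    w = np.asarray(wave, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(w)))
```

For a Gaussian absorption line of depth d and width sigma on a flat continuum, W = d·sigma·√(2π), a convenient sanity check.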

  8. A Survey of Parallel Sorting Algorithms.

    DTIC Science & Technology

    1981-12-01

    see that, in this algorithm, each Processor i, for 1 ≤ i ≤ p−2, interacts directly only with Processors i+1 and i−1. Processor 0 only interacts with...[Chan76] Chandra, A.K., "Maximal Parallelism in Matrix Multiplication," IBM Report RC 6193, Watson Research Center, Yorktown Heights, N.Y., October 1976

  9. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Steincamp, James; Taylor, Jaime

    2003-01-01

    A reduced surrogate, one point crossover genetic algorithm with random rank-based selection was used successfully to estimate the multiple phases of a segmented optical system modeled on the seven-mirror Systematic Image-Based Optical Alignment testbed located at NASA's Marshall Space Flight Center.

  10. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model for virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics describing the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we have developed an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which reduces the infrastructure load and the response time in the virtual data center network. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  11. Initial estimates of the temperature and fractional areas of fires at the World Trade Center Disaster from AVIRIS

    NASA Technical Reports Server (NTRS)

    Green, R. O.; Clark, R. N.; Boardman, J.; Pavri, B.; Sarture, C.

    2003-01-01

    This paper reports the measurements, algorithms, analyses, and results of the fire temperature and fractional area determinations with AVIRIS calibrated spectra at the World Trade Center site in September 2001.

  12. Experiments with Tropical Cyclone Wave and Intensity Forecasts

    DTIC Science & Technology

    2008-09-30

    algorithm In collaboration with Paul Wittmann (Fleet Numerical Meteorology and Oceanography Center) and Hendrik Tolman (National Centers for...Wittmann, P.A., C Sampson and H. Tolman: 2006: Wave Analysis Guidance for Tropical Cyclone Forecast Advisories. 9th International Workshop on Wave

  13. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  14. Gauge properties of the guiding center variational symplectic integrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Squire, J.; Tang, W. M.; Qin, H.

    Variational symplectic algorithms have recently been developed for carrying out long-time simulation of charged particles in magnetic fields [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. Tang, Phys. Plasmas (2009); J. Li, H. Qin, Z. Pu, L. Xie, and S. Fu, Phys. Plasmas 18, 052902 (2011)]. As a direct consequence of their derivation from a discrete variational principle, these algorithms have very good long-time energy conservation, as well as exactly preserving discrete momenta. We present stability results for these algorithms, focusing on understanding how explicit variational integrators can be designed for this type of system. It is found that for explicit algorithms, an instability arises because the discrete symplectic structure does not become the continuous structure in the t→0 limit. We examine how a generalized gauge transformation can be used to put the Lagrangian in the 'antisymmetric discretization gauge,' in which the discrete symplectic structure has the correct form, thus eliminating the numerical instability. Finally, it is noted that the variational guiding center algorithms are not electromagnetically gauge invariant. By designing a model discrete Lagrangian, we show that the algorithms are approximately gauge invariant as long as A and φ are relatively smooth. A gauge invariant discrete Lagrangian is very important in a variational particle-in-cell algorithm where it ensures current continuity and preservation of Gauss's law [J. Squire, H. Qin, and W. Tang (to be published)].

  15. Voronoi Based Nanocrystalline Generation Algorithm for Atomistic Simulations

    DTIC Science & Technology

    2016-12-22

    ...taken when generating nanocrystals (left to right): populating cell with grain centers, sphere of atoms with defined crystal structure centered at each grain center, identifying atoms

  16. Reducing the time requirement of k-means algorithm.

    PubMed

    Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou

    2012-01-01

    Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have large datasets with large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R(d) and an integer k. The problem is to determine a set of k points in R(d), called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and the k-means clustering. We provided the correctness proof for this algorithm. Results obtained from testing the algorithm on three biological data and six non-biological data (three of these data are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm clusters against the clusters of a known structure using the Hubert-Arabie Adjusted Rand index (ARI(HA)). We found that when k is close to d, the quality is good (ARI(HA)>0.8) and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI(HA)>0.9). In this paper, emphases are on the reduction of the time requirement of the k-means algorithm and its application to microarray data due to the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological data.
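One simplified reading of the PCA/k-means relationship invoked above: the k optimal centroids lie in the span of the top k−1 principal components, so distance computations can be carried out in that reduced subspace. The sketch below shows only this projection step and is not the paper's algorithm:

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered data onto the top k-1 principal components, the
    subspace where (by the PCA/k-means relaxation result) the k cluster
    centroids lie; clustering can then run on far fewer dimensions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt = PCs
    return Xc @ Vt[:k - 1].T
```

For k = 2 the data collapse to a single coordinate, and two well-separated clusters in a 10-dimensional space remain well separated after the projection.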

  17. Reducing the Time Requirement of k-Means Algorithm

    PubMed Central

    Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou

    2012-01-01

    Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have large datasets with large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space Rd and an integer k. The problem is to determine a set of k points in Rd, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and the k-means clustering. We provided the correctness proof for this algorithm. Results obtained from testing the algorithm on three biological data and six non-biological data (three of these data are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm clusters against the clusters of a known structure using the Hubert-Arabie Adjusted Rand index (ARIHA). We found that when k is close to d, the quality is good (ARIHA>0.8) and when k is not close to d, the quality of our new k-means algorithm is excellent (ARIHA>0.9). In this paper, emphases are on the reduction of the time requirement of the k-means algorithm and its application to microarray data due to the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological data. PMID:23239974

  18. Motor Control and Regulation for a Flywheel Energy Storage System

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara; Lyons, Valerie

    2003-01-01

    This talk will focus on the motor control algorithms used to regulate the flywheel system at the NASA Glenn Research Center. First a discussion of the inner loop torque control technique will be given. It is based on the principle of field orientation and is implemented without a position or speed sensor (sensorless control). Then the outer loop charge and discharge algorithm will be presented. This algorithm controls the acceleration of the flywheel during charging and the deceleration while discharging. The algorithm also allows the flywheel system to regulate the DC bus voltage during the discharge cycle.

  19. Flattening maps for the visualization of multibranched vessels.

    PubMed

    Zhu, Lei; Haker, Steven; Tannenbaum, Allen

    2005-02-01

    In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
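The Dirichlet functional minimized by the first algorithm has the standard form (notation assumed here; the paper's exact discretization is not reproduced):

```latex
% Dirichlet energy of a coordinate function u on the vessel surface \Sigma
D(u) \;=\; \frac{1}{2}\int_{\Sigma} \lVert \nabla u \rVert^{2}\, dA
```

The flattening map (u, v) is obtained by minimizing one such functional per coordinate function, subject to boundary conditions; in a finite element implementation on a triangulated surface each functional becomes a sparse quadratic form, so the minimization reduces to a linear solve.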

  20. Flattening Maps for the Visualization of Multibranched Vessels

    PubMed Central

    Zhu, Lei; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided. PMID:15707245

1. ASSURED CLOUD COMPUTING UNIVERSITY CENTER OF EXCELLENCE (ACC UCOE)

    DTIC Science & Technology

    2018-01-18

    ...infrastructure security -Design of algorithms and techniques for real-time assuredness in cloud computing -Map-reduce task assignment with data locality...46 DESIGN OF ALGORITHMS AND TECHNIQUES FOR REAL-TIME ASSUREDNESS IN CLOUD COMPUTING

  2. Adaptive density trajectory cluster based on time and space distance

    NASA Astrophysics Data System (ADS)

    Liu, Fagui; Zhang, Zhijie

    2017-10-01

    Several open problems remain in trajectory clustering for discovering mobility patterns, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. Based on time and space, this paper defines a method for calculating the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. Cluster centers and their number are selected automatically by a defined strategy, and the uncertainty/boundary problem of the data set is solved by a designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtains optimal cluster centers and rich clustering information for mining the features of mobile behavior in mobile and social networks.
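One plausible form of a combined space-time distance between trajectory samples; the abstract does not give the paper's exact formula, so the additive weighting below is an illustrative assumption:

```python
import math

def st_distance(p, q, w_space=1.0, w_time=1.0):
    """Combined space-time distance between two trajectory samples
    (x, y, t): a weighted sum of spatial separation and time offset."""
    ds = math.hypot(p[0] - q[0], p[1] - q[1])
    dt = abs(p[2] - q[2])
    return w_space * ds + w_time * dt

def trajectory_distance(A, B, **kw):
    """Mean pointwise space-time distance between two equal-length
    sub-trajectories given as lists of (x, y, t) samples."""
    assert len(A) == len(B)
    return sum(st_distance(p, q, **kw) for p, q in zip(A, B)) / len(A)
```

Two parallel tracks one unit apart in space and synchronized in time get distance 1.0; raising `w_time` penalizes tracks that visit the same places at different times.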

  3. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Traditional expert systems, such as diagnostic and training systems, interact with users only through a keyboard and screen, and are usually symbolic in nature. Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic as well as algorithmic computing needs. These needs could both be met using a general purpose workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by NASA Ames Research Center in conjunction with Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. This paper will explore the integration options, and present several possible solutions.

  4. Offshore Wind Measurements Using Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.

    2014-01-01

    The latest flight demonstration of the Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center (LaRC) is presented. The goal of the campaign was to demonstrate the improvement of the DAWN system since the previous flight campaign in 2012 and the capabilities of DAWN and the latest airborne wind profiling algorithm, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), developed at LaRC. Comparisons of APOLO and another algorithm, utilizing two and five lines of sight (LOS) respectively, are discussed. Wind parameters from DAWN were compared with ground-based radar measurements for validation purposes. The campaign period was June-July 2013, and the flight altitude was 8 km, inland toward Charlotte, NC, and offshore at Virginia Beach, VA and Ocean City, MD. The DAWN system was integrated into a UC12B aircraft with two operators onboard during the campaign.
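The basic multi-LOS retrieval behind any such wind-profiling comparison: each line of sight measures the projection of the wind vector onto its pointing direction, so three or more directions determine the wind by least squares. This is a textbook sketch, not the APOLO algorithm:

```python
import numpy as np

def wind_from_los(directions, v_radial):
    """Least-squares wind vector from line-of-sight Doppler velocities:
    each measurement is the projection v_r = d . V of the wind V onto a
    unit pointing vector d, giving an overdetermined linear system."""
    D = np.array(directions, dtype=float)          # copy, then normalize rows
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    V, *_ = np.linalg.lstsq(D, np.asarray(v_radial, dtype=float), rcond=None)
    return V
```

With five downward-slanted lines of sight, the solve recovers the full (u, v, w) wind vector from the five radial velocities; using more than the minimum three directions averages down measurement noise.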

  5. Offshore wind measurements using Doppler aerosol wind lidar (DAWN) at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.

    2014-06-01

    The latest flight demonstration of the Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center (LaRC) is presented. The goal of the campaign was to demonstrate the improvements made to the DAWN system since the previous flight campaign in 2012, along with the capabilities of DAWN and the latest airborne wind profiling algorithm, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), developed at LaRC. Comparisons between APOLO and another algorithm, utilizing two and five lines of sight (LOSs) respectively, are discussed. Wind parameters from DAWN were compared with ground-based radar measurements for validation purposes. The campaign ran from June to July 2013 at a flight altitude of 8 km, inland toward Charlotte, NC, and offshore near Virginia Beach, VA, and Ocean City, MD. The DAWN system was integrated into a UC-12B aircraft with two operators onboard during the campaign.

  6. The Kepler Science Operations Center Pipeline Framework Extensions

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.; et al.

    2010-01-01

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.

  7. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.

  8. A 2D eye gaze estimation system with low-resolution webcam images

    NASA Astrophysics Data System (ADS)

    Ince, Ibrahim Furkan; Kim, Jin Woo

    2011-12-01

    In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for making stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts from this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements relative to eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is demonstrated through experimental results.

  9. Approximation algorithm for the problem of partitioning a sequence into clusters

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.

    2017-08-01

    We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intercluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.
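The partition criterion can be evaluated directly; a minimal sketch (clusters passed as index lists, with `clusters[0]` the origin-centered cluster, and the structural index constraints ignored):

```python
import numpy as np

def partition_cost(points, clusters):
    """Criterion value for a partition of a point sequence.

    clusters[0] is the cluster whose center is fixed at the origin;
    every other cluster's center is the mean of its members."""
    cost = 0.0
    for k, idx in enumerate(clusters):
        members = points[idx]
        center = np.zeros(points.shape[1]) if k == 0 else members.mean(axis=0)
        cost += ((members - center) ** 2).sum()
    return cost
```

An exact solver would minimize this cost over all partitions satisfying the index constraints; the abstract's 2-approximation guarantees a cost within twice the optimum.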

  10. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugimoto, S; Inoue, T; Kurokawa, C

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed as combinations of translation and rotation matrices. The coordinate system with the radiation source at the origin and the beam axis along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for finite gimbal swing was implemented. The calculation time was moderate.
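The matrix chain described in Methods can be sketched with homogeneous 4×4 transforms; the axis assignments, helper names, and parameterization here are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def translation(t):
    m = np.eye(4)
    m[:3, 3] = t
    return m

def rotation_x(a):          # one gimbal swing, about the x axis
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[1, 1], m[1, 2], m[2, 1], m[2, 2] = c, -s, s, c
    return m

def rotation_y(a):          # the other swing, about the y axis
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def gimbal_transform(pan, tilt, d):
    """Source at the origin, gimbal rotation center at distance d along z:
    translate the rotation center to the origin, apply the two swing
    rotations, translate back."""
    return (translation([0, 0, d]) @ rotation_x(tilt)
            @ rotation_y(pan) @ translation([0, 0, -d]))
```

Applying the resulting matrix to the image coordinates lets the dose engine run exactly as in the no-swing case, which is the point made in the abstract.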

  11. Research of centroiding algorithms for extended and elongated spot of sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Shao, Yayun; Zhang, Yudong; Wei, Kai

    2016-10-01

    Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. However, the spot array obtained by Shack-Hartmann wavefront sensors (WFSs) becomes extended and elongated, due to the thickness and finite size of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compare three different centroiding algorithms, the center of gravity (CoG), weighted CoG (WCoG), and intensity weighted centroid (IWC), and their accuracies for various extended and elongated spots. In addition, we compare the reconstructed image data from these three algorithms with theoretical results, and show that WCoG and IWC are the best choices for wavefront reconstruction with extended and elongated spots among the algorithms considered.
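The three estimators have simple closed forms; a minimal sketch using common textbook definitions (WCoG as a window-weighted CoG, IWC weighting each pixel by its own intensity), which may differ in detail from the paper's variants:

```python
import numpy as np

def cog(img):
    """Plain center of gravity of an intensity image; returns (x, y)."""
    y, x = np.indices(img.shape)
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

def iwc(img):
    """Intensity-weighted centroid: each pixel weighted by its own
    intensity, i.e. the CoG of the squared image."""
    return cog(img ** 2)

def wcog(img, weight):
    """Weighted CoG: the image multiplied by a fixed weighting window
    (e.g. a Gaussian centered on the expected spot position)."""
    return cog(img * weight)
```

For elongated spots the choice of weight window is what differentiates the estimators' bias and noise behavior, which is what the paper's comparison quantifies.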

  12. A fast, parallel algorithm for distant-dependent calculation of crystal properties

    NASA Astrophysics Data System (ADS)

    Stein, Matthew

    2017-12-01

    A fast, parallel algorithm for distant-dependent calculation and simulation of crystal properties is presented along with speedup results and methods of application. An illustrative example is used to compute the Lennard-Jones lattice constants up to 32 significant figures for 4 ≤ p ≤ 30 in the simple cubic, face-centered cubic, body-centered cubic, hexagonal-close-pack, and diamond lattices. In most cases, the known precision of these constants is more than doubled, and in some cases, corrected from previously published figures. The tools and strategies to make this computation possible are detailed along with application to other potentials, including those that model defects.
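As a brute-force counterpart to the paper's fast parallel algorithm, the simple-cubic lattice constant of a power-law potential is just a truncated sum over lattice vectors (convergent for p > 3); function name and truncation scheme are illustrative:

```python
from itertools import product

def lattice_sum_sc(p, nmax=20):
    """Direct partial sum of the simple-cubic lattice constant
    L_p = sum over nonzero lattice vectors n of |n|^(-p),
    truncated to a cube of half-width nmax."""
    total = 0.0
    for i, j, k in product(range(-nmax, nmax + 1), repeat=3):
        if (i, j, k) == (0, 0, 0):
            continue
        total += (i * i + j * j + k * k) ** (-p / 2)
    return total
```

Direct truncation like this converges far too slowly (and in too low precision) for the 32-significant-figure constants reported above, which is precisely why a fast, parallel method is needed.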

  13. Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang

    2016-10-01

    Considering the inevitable obstacles faced by pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI into meaningful objects. Second, a distance-reweighted mass center learning model is presented to extract representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. To handle the high correlation among hyperspectral objects, a weighting scheme is adopted to ensure that highly correlated objects are preferred in the sparse representation procedure, reducing the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, which obtained high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves clustering performance with respect to other state-of-the-art clustering methods, and it significantly reduces the computational time.

  14. Degenerate variational integrators for magnetic field line flow and guiding center trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. L.; Finn, J. M.; Burby, J. W.; Kraus, M.; Qin, H.; Tang, W. M.

    2018-05-01

    Symplectic integrators offer many benefits for numerically approximating solutions to Hamiltonian differential equations, including bounded energy error and the preservation of invariant sets. Two important Hamiltonian systems encountered in plasma physics, the flow of magnetic field lines and the guiding center motion of magnetized charged particles, resist symplectic integration by conventional means because the dynamics are most naturally formulated in non-canonical coordinates. New algorithms were recently developed using the variational integration formalism; however, those integrators were found to admit parasitic mode instabilities due to their multistep character. This work eliminates the multistep character, and therefore the parasitic mode instabilities, via an adaptation of the variational integration formalism that we term "degenerate variational integration." Both the magnetic field line and guiding center Lagrangians are degenerate in the sense that the resultant Euler-Lagrange equations are systems of first-order ordinary differential equations. We show that retaining the same degree of degeneracy when constructing discrete Lagrangians yields one-step variational integrators preserving a non-canonical symplectic structure. Numerical examples demonstrate the benefits of the new algorithms, including superior stability relative to the existing variational integrators for these systems and superior qualitative behavior relative to non-conservative algorithms.
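Both Lagrangians mentioned above fit the standard noncanonical phase-space form; schematically (generic notation, not taken from the paper):

```latex
% Phase-space Lagrangian, linear ("degenerate") in the velocities:
L(z, \dot{z}) = \vartheta_j(z)\,\dot{z}^j - H(z)
% Its Euler-Lagrange equations are first-order ODEs:
\left(\partial_i \vartheta_j - \partial_j \vartheta_i\right)\dot{z}^j = \partial_i H(z)
```

Because the Lagrangian is linear in \(\dot{z}\), the Euler-Lagrange system is first order, and the paper's point is that the discrete Lagrangian must preserve this degeneracy to yield a one-step, symplectic scheme.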

  15. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, Dirk; Grass, Michael; Haar, Peter van de

    2011-05-15

    Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that allows one to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlaps of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods.
The clinical case with a detector overlap of about 17 mm confirms these results. Conclusions: The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.

  16. Recognition of disturbances with specified morphology in time series. Part 1: Spikes on magnetograms of the worldwide INTERMAGNET network

    NASA Astrophysics Data System (ADS)

    Bogoutdinov, Sh. R.; Gvishiani, A. D.; Agayan, S. M.; Solovyev, A. A.; Kin, E.

    2010-11-01

    The International Real-time Magnetic Observatory Network (INTERMAGNET) is the world's biggest international network of ground-based observatories, providing geomagnetic data almost in real time (within 72 hours of collection) [Kerridge, 2001]. The observation data are rapidly transferred by the observatories participating in the program to regional Geomagnetic Information Nodes (GINs), which carry out a global exchange of data and process the results. The observations of the main (core) magnetic field of the Earth and its study are one of the key problems of geophysics. The INTERMAGNET system is the basis of monitoring the state of the Earth's magnetic field; therefore, the information provided by the system is required to be very reliable. Despite the rigid high-quality standard of the recording devices, they are subject to external effects that affect the quality of the records. Therefore, an objective and formalized recognition with the subsequent remedy of the anomalies (artifacts) that occur on the records is an important task. Expanding on the ideas of Agayan [Agayan et al., 2005] and Gvishiani [Gvishiani et al., 2008a; 2008b], this paper suggests a new algorithm of automatic recognition of anomalies with specified morphology, capable of identifying both physically- and anthropogenically-derived spikes on the magnetograms. The algorithm is constructed using fuzzy logic and, as such, is highly adaptive and universal. The developed algorithmic system formalizes the work of the expert-interpreter in terms of artificial intelligence. This ensures identical processing of large data arrays, almost unattainable manually. Besides the algorithm, the paper also reports on the application of the developed algorithmic system for identifying spikes at the INTERMAGNET observatories. The main achievement of the work is the creation of an algorithm permitting the almost unmanned extraction of spike-free (definitive) magnetograms from preliminary records. 
This automated system is developed for the first time with the application of fuzzy logic to geomagnetic measurements. It is important to note that the recognition of time disturbances is formalized and identical. The algorithm presented here appreciably increases the reliability of spike-free INTERMAGNET magnetograms, thus increasing the objectivity of our knowledge of the Earth's magnetic field. At the same time, the created system can accomplish identical, formalized, and retrospective analysis of large archives of digital and digitized magnetograms accumulated in the system of Worldwide Data Centers. The relevant project has already been initiated as a collaborative initiative of the Worldwide Data Center at the Geophysical Center (Russian Academy of Sciences) and the NOAA National Geophysical Data Center (United States). Thus, by improving and adding objectivity to both new and historical initial data, the developed algorithmic system may contribute appreciably to improving our understanding of the Earth's magnetic field.
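The fuzzy-logic recognizer itself is not reproduced here; as a baseline illustration of flagging spikes with the specified impulsive morphology on a 1-D record, a simple running-median/MAD rule (names and thresholds are illustrative):

```python
import numpy as np

def flag_spikes(x, window=5, thresh=6.0):
    """Baseline spike detector: flag samples that deviate from a running
    median by more than `thresh` robust standard deviations (MAD-based).
    This is NOT the paper's fuzzy-logic algorithm, just a common baseline."""
    x = np.asarray(x, float)
    med = np.array([np.median(x[max(0, i - window):i + window + 1])
                    for i in range(len(x))])
    resid = x - med
    mad = np.median(np.abs(resid)) or 1e-12   # guard against zero MAD
    return np.abs(resid) > thresh * 1.4826 * mad
```

A fixed-threshold rule like this is exactly what the fuzzy-logic approach is designed to improve on: it adapts the notion of "anomalous" to the local behavior of the record rather than a single global scale.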

  17. Theoretical Bounds of Direct Binary Search Halftoning.

    PubMed

    Liao, Jan-Ray

    2015-11-01

    Direct binary search (DBS) produces images of the best quality among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations seek the binary state for each pixel that minimizes the total squared perceived error. This error-energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not previously been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with the swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality.
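A toy version of the toggle step conveys the energy-minimization idea; this sketch recomputes the full error energy per candidate toggle (real DBS uses an efficient autocorrelation-based update, and the swap step is omitted), with a Gaussian standing in for the HVS filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_toggle(gray, sigma=1.0, max_sweeps=10):
    """Toy direct-binary-search halftoning, toggle moves only:
    start from thresholding, then accept any pixel toggle that lowers
    the total squared perceived error ||HVS * (binary - gray)||^2."""
    b = (gray >= 0.5).astype(float)
    energy = lambda bb: (gaussian_filter(bb - gray, sigma) ** 2).sum()
    e = energy(b)
    for _ in range(max_sweeps):
        changed = False
        for i in range(b.shape[0]):
            for j in range(b.shape[1]):
                b[i, j] = 1 - b[i, j]          # trial toggle
                e2 = energy(b)
                if e2 < e:
                    e, changed = e2, True      # keep the improvement
                else:
                    b[i, j] = 1 - b[i, j]      # revert
        if not changed:
            break                              # toggle has converged
    return b
```

The paper's bound concerns the filtered error that remains once no toggle (and at least one swap) can further reduce this energy.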

  18. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two stage process; a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.

  19. Aging Aircraft 2005, The Joint NASA/FAA/DOD Conference on Aging Aircraft, Decision algorithms for Electrical Wiring Interconnect Systems (EWIS) Fault Detection

    DTIC Science & Technology

    2005-02-03

    Aging Aircraft 2005, The 8th Joint NASA/FAA/DOD Conference on Aging Aircraft: Decision Algorithms for Electrical Wiring Interconnect Systems (EWIS) Fault Detection. Performing organizations: NASA Langley Research Center, 8 W. Taylor St., M/S 190, Hampton, VA 23681, and NAVAIR.

  20. Research on the precise positioning of customers in large data environment

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; He, Lili

    2018-04-01

    Customer positioning has always been a problem that enterprises focus on. In this paper, the FCM clustering algorithm is used to cluster customer groups. However, the traditional FCM clustering algorithm is susceptible to the choice of initial cluster centers and easily falls into local optima; this shortcoming of FCM is addressed with the grey wolf optimizer (GWO) to achieve efficient and accurate handling of large volumes of retailer data.
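The FCM iteration the paper builds on alternates two closed-form updates; a minimal sketch (with naive deterministic initialization standing in for the paper's GWO-chosen centers):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50):
    """Plain fuzzy c-means: alternate the membership and center updates.
    The paper seeds FCM with grey-wolf-optimized centers; here the first
    c samples serve as (naive) initial centers instead."""
    centers = X[:c].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)   # memberships sum to 1 per point
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, u
```

The sensitivity to initialization is visible here: everything downstream depends on the starting `centers`, which is the slot a global optimizer such as GWO fills.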

  1. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  2. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
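A third-order-polynomial scaling law of the kind described can be sketched as follows; the coefficient split (a linear slope `k1` plus a cubic term chosen so the largest expected input maps exactly onto the motion limit) is illustrative, not taken from the report:

```python
def nonlinear_gain(x, x_max, y_max, k1=0.2):
    """Illustrative third-order-polynomial cue scaling: small inputs pass
    through with slope k1; the cubic term is set so the largest expected
    input x_max maps exactly onto the motion-system limit y_max.
    Odd-symmetric, and inputs beyond x_max are clipped at the limit."""
    k3 = (y_max - k1 * x_max) / x_max ** 3
    s = 1 if x >= 0 else -1
    ax = min(abs(x), x_max)
    return s * (k1 * ax + k3 * ax ** 3)
```

The design intent matches the abstract: maximize the cues for small-to-moderate aircraft inputs while guaranteeing the command never exceeds the simulator's operational envelope.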

  3. Research to Operations: From Point Positions, Earthquake and Tsunami Modeling to GNSS-augmented Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    Stough, T.; Green, D. S.

    2017-12-01

    This collaborative research-to-operations demonstration brings together the data and algorithms from NASA research, technology, and applications-funded projects to deliver relevant data streams, algorithms, predictive models, and visualization tools to the NOAA National Tsunami Warning Center (NTWC) and Pacific Tsunami Warning Center (PTWC). Using real-time GNSS data and models in an operational environment, we will test and evaluate an augmented capability for tsunami early warning. Each of three research groups collects data from a selected network of real-time GNSS stations, exchanges data consisting of independently processed 1 Hz station displacements, and merges the output into a single, more accurate and reliable set. The resulting merged data stream is delivered from three redundant locations to the TWCs with a latency of 5-10 seconds. Data from a number of seismogeodetic stations with collocated GPS and accelerometer instruments are processed for displacements and seismic velocities and also delivered. Algorithms for locating and determining the magnitude of earthquakes, as well as algorithms that compute the source function of a potential tsunami using this new data stream, are included in the demonstration. The delivered data, algorithms, models, and tools are hosted on NOAA-operated machines at both warning centers, and, once tested, the results will be evaluated for utility in improving the speed and accuracy of tsunami warnings. This collaboration has the potential to dramatically improve the speed and accuracy of the TWCs' local tsunami information over the current seismometer-only methods. In the first year of this work, we have established and deployed an architecture for data movement and algorithm installation at the TWCs. We are addressing data quality issues and porting algorithms into the TWCs' operating environment.
Our initial module deliveries will focus on estimating moment magnitude (Mw) from Peak Ground Displacement (PGD), within 2-3 minutes of the event, and coseismic displacements converging to static offsets. We will also develop visualizations of module outputs tailored to the operational environment. In the context of this work, we will also discuss this research to operations approach and other opportunities within the NASA Applied Science Disaster Program.

  4. Mission-Centered Network Models: Defending Mission-Critical Tasks From Deception

    DTIC Science & Technology

    2015-09-29

    In military applications, networked operations offer an effective way to reduce the footprint of a force, but become a center of gravity for deception. A key technical challenge is developing standard representations for provenance, which trust algorithms use to assess the quality and trustworthiness of data.

  5. Rationale for the Diabetic Retinopathy Clinical Research Network Treatment Protocol for Center-involved Diabetic Macular Edema

    PubMed Central

    Aiello, Lloyd Paul; Beck, Roy W; Bressler, Neil M.; Browning, David J.; Chalam, KV; Davis, Matthew; Ferris, Frederick L; Glassman, Adam; Maturi, Raj; Stockdale, Cynthia R.; Topping, Trexler

    2011-01-01

    Objective: Describe the underlying principles used to develop a web-based algorithm that incorporated intravitreal anti-vascular endothelial growth factor (anti-VEGF) treatment for diabetic macular edema (DME) in a Diabetic Retinopathy Clinical Research Network (DRCR.net) randomized clinical trial. Design: Discussion of treatment protocol for DME. Participants: Subjects with vision loss from DME involving the center of the macula. Methods: The DRCR.net created an algorithm incorporating anti-VEGF injections in a comparative effectiveness randomized clinical trial evaluating intravitreal ranibizumab with prompt or deferred (≥24 weeks) focal/grid laser in eyes with vision loss from center-involved DME. Results confirmed that intravitreal ranibizumab with prompt or deferred laser provides superior visual acuity outcomes, compared with prompt laser alone, through at least 2 years. Duplication of this algorithm may not be practical for clinical practice. In order to share their opinion on how ophthalmologists might emulate the study protocol, participating DRCR.net investigators developed guidelines based on the algorithm's underlying rationale. Main Outcome Measures: Clinical guidelines based on a DRCR.net protocol. Results: The treatment protocol required real-time feedback from a web-based data entry system for intravitreal injections, focal/grid laser, and follow-up intervals. Guidance from this system indicated whether treatment was required or given at investigator discretion and when follow-up should be scheduled. Clinical treatment guidelines, based on the underlying clinical rationale of the DRCR.net protocol, include repeating treatment monthly as long as there is improvement in edema compared with the previous month, or until the retina is no longer thickened. If thickening recurs or worsens after discontinuing treatment, treatment is resumed.
Conclusions: Duplication of the approach used in the DRCR.net randomized clinical trial to treat DME involving the center of the macula with intravitreal ranibizumab may not be practical in clinical practice, but it likely can be emulated based on an understanding of the underlying rationale for the study protocol. Inherent differences between a web-based treatment algorithm and a clinical approach may lead to differences in outcomes that are impossible to predict. The closer the clinical approach is to the algorithm used in the study, the more likely the outcomes will be similar to those published. PMID: 22136692

  6. Rationale for the diabetic retinopathy clinical research network treatment protocol for center-involved diabetic macular edema.

    PubMed

    Aiello, Lloyd Paul; Beck, Roy W; Bressler, Neil M; Browning, David J; Chalam, K V; Davis, Matthew; Ferris, Frederick L; Glassman, Adam R; Maturi, Raj K; Stockdale, Cynthia R; Topping, Trexler M

    2011-12-01

    To describe the underlying principles used to develop a web-based algorithm that incorporated intravitreal anti-vascular endothelial growth factor (anti-VEGF) treatment for diabetic macular edema (DME) in a Diabetic Retinopathy Clinical Research Network (DRCR.net) randomized clinical trial. Discussion of treatment protocol for DME. Subjects with vision loss resulting from DME involving the center of the macula. The DRCR.net created an algorithm incorporating anti-VEGF injections in a comparative effectiveness randomized clinical trial evaluating intravitreal ranibizumab with prompt or deferred (≥24 weeks) focal/grid laser treatment in eyes with vision loss resulting from center-involved DME. Results confirmed that intravitreal ranibizumab with prompt or deferred laser provides superior visual acuity outcomes compared with prompt laser alone through at least 2 years. Duplication of this algorithm may not be practical for clinical practice. To share their opinion on how ophthalmologists might emulate the study protocol, participating DRCR.net investigators developed guidelines based on the algorithm's underlying rationale. Clinical guidelines based on a DRCR.net protocol. The treatment protocol required real-time feedback from a web-based data entry system for intravitreal injections, focal/grid laser treatment, and follow-up intervals. Guidance from this system indicated whether treatment was required or given at investigator discretion and when follow-up should be scheduled. Clinical treatment guidelines, based on the underlying clinical rationale of the DRCR.net protocol, include repeating treatment monthly as long as there is improvement in edema compared with the previous month or until the retina is no longer thickened. If thickening recurs or worsens after discontinuing treatment, treatment is resumed. 
Duplication of the approach used in the DRCR.net randomized clinical trial to treat DME involving the center of the macula with intravitreal ranibizumab may not be practical in clinical practice, but likely can be emulated based on an understanding of the underlying rationale for the study protocol. Inherent differences between a web-based treatment algorithm and a clinical approach may lead to differences in outcomes that are impossible to predict. The closer the clinical approach is to the algorithm used in the study, the more likely the outcomes will be similar to those published. Proprietary or commercial disclosure may be found after the references. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  7. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e., the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA Langley, the Visual Motion Simulator (VMS). Future developments in cueing algorithms proposed by the authors are described. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
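The coordinated adaptive washout algorithm mentioned in this record relies on high-pass ("washout") filtering of motion commands so that the platform reproduces motion onsets and then quietly returns toward neutral. As a minimal illustration only (not the paper's optimal algorithm; the time constant and sample rate are hypothetical), a first-order discrete washout filter might look like:

```python
def washout_filter(signal, dt=0.01, tau=2.0):
    """First-order high-pass ("washout") filter: passes motion onsets,
    then decays toward zero so the platform can return to neutral.
    tau (s) and dt (s) are illustrative values."""
    a = tau / (tau + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = a * (prev_y + x - prev_x)  # discrete high-pass update
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

For a sustained (step) acceleration command, the filter output initially follows the command and then washes out toward zero, which is the basic behavior such cueing schemes exploit.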

  8. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
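The inputs named in the abstract (descent rate, altitude, and wind) admit a very simple first-order sketch: compute time to ground from altitude and descent rate, then drift the current position by the wind vector for that duration. This is an illustrative simplification with made-up parameter names, not NASA Dryden's actual algorithm:

```python
def estimate_impact(alt_m, descent_rate_ms, wind_east_ms, wind_north_ms,
                    pos_east_m=0.0, pos_north_m=0.0):
    """First-order impact-point estimate: constant descent rate and
    constant wind drift until the vehicle reaches the ground."""
    t_fall = alt_m / descent_rate_ms  # seconds until impact
    return (pos_east_m + wind_east_ms * t_fall,
            pos_north_m + wind_north_ms * t_fall)
```

For example, a vehicle at 1000 m descending at 10 m/s in a 5 m/s easterly, 2 m/s southerly wind drifts 500 m east and 200 m south before impact.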

  9. Managing Returns in a Catalog Distribution Center

    ERIC Educational Resources Information Center

    Gates, Joyce; Stuart, Julie Ann; Bonawi-tan, Winston; Loehr, Sarah

    2004-01-01

A research team at Purdue University in the United States developed an algorithm that considers several factors, in addition to cost, to help catalog distribution centers process their returns more efficiently. A case study to teach students important concepts involved in developing a solution to the returns disposition problem…

  10. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  11. Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing

    NASA Astrophysics Data System (ADS)

    Suma, T.; Murugesan, R.

    2018-04-01

The current generation of the manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we used an algorithm based on the Artificial Immune system. The parameters were simulated with MATLAB and the results compared with existing algorithms. The results show better performance than the existing algorithms.

  12. High order methods for the integration of the Bateman equations and other problems of the form of y‧ = F(y,t)y

    NASA Astrophysics Data System (ADS)

    Josey, C.; Forget, B.; Smith, K.

    2017-12-01

This paper introduces two families of A-stable algorithms for the integration of y′ = F(y, t)y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of deriving the coefficients is presented. The new algorithms are then tested on a simple deterministic problem and a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations than a second-order predictor-corrector method (center extrapolation / center midpoint method) to achieve a given accuracy with regard to Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and EL4 algorithms presented are shown to be third and fourth order, respectively, on the systems-of-ODEs test. In the Monte Carlo test, these methods did not overtake the accuracy of the EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities than the reference predictor-corrector method, by up to a factor of 1.4.
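The Bateman equations are a linear system of the form y′ = F(y, t)y; for a fixed-composition step, F is constant and one exponential step y_{n+1} = exp(hM)·y_n is exact. The sketch below (not the paper's EPC/EL methods; decay constants are hypothetical) integrates a two-nuclide chain A → B this way and can be checked against the analytic Bateman solution:

```python
import math

LAM_A, LAM_B = 1.0, 0.3  # hypothetical decay constants (1/s)

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(M, terms=30):
    """Truncated Taylor series exp(M) for a small-norm 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

def step(y, h):
    """One exponential step y <- exp(h*M) y for the chain A -> B."""
    M = [[-LAM_A, 0.0], [LAM_A, -LAM_B]]
    E = mat_exp([[h * v for v in row] for row in M])
    return [E[0][0] * y[0] + E[0][1] * y[1],
            E[1][0] * y[0] + E[1][1] * y[1]]

def integrate(y0, t_end, h):
    y = list(y0)
    for _ in range(round(t_end / h)):
        y = step(y, h)
    return y

def bateman(t, n0=1.0):
    """Analytic two-nuclide Bateman solution for comparison."""
    na = n0 * math.exp(-LAM_A * t)
    nb = n0 * LAM_A / (LAM_B - LAM_A) * (math.exp(-LAM_A * t)
                                         - math.exp(-LAM_B * t))
    return [na, nb]
```

Because M is constant here, the exponential step reproduces the analytic solution to truncation/rounding error; the paper's methods address the harder case where F changes with the solution.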

  13. Global satellite composites - 20 years of evolution

    NASA Astrophysics Data System (ADS)

    Kohrs, Richard A.; Lazzara, Matthew A.; Robaidek, Jerrold O.; Santek, David A.; Knuth, Shelley L.

    2014-01-01

For two decades, the University of Wisconsin Space Science and Engineering Center (SSEC) and the Antarctic Meteorological Research Center (AMRC) have been creating global, regional and hemispheric satellite composites. These composites have proven useful in research, operational forecasting, commercial applications and educational outreach. Using the Man computer Interactive Data Access System (McIDAS) software developed at SSEC, infrared window composites were created by combining Geostationary Operational Environmental Satellite (GOES) and polar-orbiting data from the SSEC Data Center and polar data acquired at McMurdo and Palmer stations, Antarctica. Increased computer processing speed has allowed for more advanced algorithms to address the decision-making process for co-located pixels. The algorithms have evolved from a simplistic maximum brightness temperature to those that account for distance from the sub-satellite point, parallax displacement, pixel time and resolution. The composites are the state-of-the-art means for merging/mosaicking satellite imagery.
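The decision process for co-located pixels described above can be illustrated with a toy selector that prefers the observation nearest its satellite's sub-satellite point, breaking ties with the more recent pixel time. The record fields are hypothetical, and the operational McIDAS algorithms additionally weigh parallax displacement and pixel resolution:

```python
def choose_pixel(candidates):
    """Pick the co-located observation nearest its sub-satellite point
    (least limb distortion); prefer the more recent pixel on ties.
    Each candidate is a dict with hypothetical keys
    'dist_from_ssp_deg', 'time', and 'bt' (brightness temperature)."""
    return min(candidates,
               key=lambda p: (p["dist_from_ssp_deg"], -p["time"]))
```

Compared with the original maximum-brightness-temperature rule, this kind of geometry-aware choice favors the viewing angle with the least distortion rather than the warmest value.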

  14. Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.

    PubMed

    Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L

    2002-01-01

    From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.

  15. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
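Overlap criteria of the kind referenced here are commonly computed as a Jaccard (Tanimoto) index between binary masks; the abstract does not restate the study's exact formula, so the sketch below shows only the standard definition:

```python
def jaccard_overlap(seg, ref):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    inter = sum(1 for a, b in zip(seg, ref) if a and b)
    union = sum(1 for a, b in zip(seg, ref) if a or b)
    return inter / union if union else 1.0
```

An overlap of 1.0 means the segmentation matches the expert mask voxel for voxel; the ~80% figure reported above would correspond to a value of about 0.8 under such a criterion.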

  16. Multispectral autofluorescence diagnosis of non-melanoma cutaneous tumors

    NASA Astrophysics Data System (ADS)

    Borisova, Ekaterina; Dogandjiiska, Daniela; Bliznakova, Irina; Avramov, Latchezar; Pavlova, Elmira; Troyanova, Petranka

    2009-07-01

Fluorescence analysis of basal cell carcinoma (BCC), squamous cell carcinoma (SCC), keratoacanthoma and benign cutaneous lesions was carried out during the initial phase of a clinical trial at the National Oncological Center, Sofia. Excitation sources with emission maxima at 365, 380, 405, 450 and 630 nm are applied for better differentiation of the fluorescence of nonmelanoma malignant cutaneous lesions and their spectral discrimination from benign pathologies. Major spectral features are addressed, and diagnostic discrimination algorithms based on the lesions' emission properties are proposed. The diagnostic algorithms and evaluation procedures will be applied to the development of an optical biopsy clinical system for skin cancer detection within the National Oncological Center and other university hospital dermatology departments in the country.

  17. Intra-organizational Computation and Complexity

    DTIC Science & Technology

    2003-01-01

models. New methodologies, centered on understanding algorithmic complexity, are being developed that may enable us to better handle network data ...tractability of data analysis, and enable more precise theorization. A variety of measures of algorithmic complexity, e.g., Kolmogorov-Chaitin, and a...variety of proxies exist (which are often turned to for pragmatic reasons) (Lempel and Ziv, 1976). For the most part, social and organizational

  18. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which generalize the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
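A minimal one-dimensional sketch of the DMA fluctuation function and its window-position parameter θ (backward θ=0, centered θ=0.5, forward θ=1): detrend the cumulative profile by a moving average positioned by θ and take the RMS residual. The MFDMA generalization replaces this RMS with q-th order moments of the local fluctuations to extract τ(q) and f(α); the edge handling below is a simplification:

```python
def moving_average(y, n, theta=0.0):
    """Moving average of window size n; theta sets the window position:
    0 backward, 0.5 centered, 1 forward. Edges use truncated windows."""
    out = []
    for i in range(len(y)):
        lo = max(i - int((n - 1) * (1.0 - theta)), 0)
        hi = min(i + int((n - 1) * theta), len(y) - 1)
        window = y[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def dma_fluctuation(x, n, theta=0.0):
    """Root-mean-square fluctuation of the profile (cumulative sum of x)
    about its moving-average trend."""
    profile, s = [], 0.0
    for v in x:
        s += v
        profile.append(s)
    trend = moving_average(profile, n, theta)
    sq = [(a - b) ** 2 for a, b in zip(profile, trend)]
    return (sum(sq) / len(sq)) ** 0.5
```

In the full method, F(n) is computed over a range of window sizes n and the scaling exponent is read off the slope of log F(n) versus log n.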

  19. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured to a transverse offset of 50 mm from the center. FBP, 2D ordered-subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.

  20. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm.

    PubMed

    Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun

    2018-05-19

Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking. We proposed a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blinking images and predict coarse gaze position, and then a geometric model is defined for accurate human gaze tracking. Moreover, we proposed using the mean value of gazes to resolve pupil center changes caused by nystagmus in the calibration algorithm, so that an individual user only needs to calibrate once, which makes our system more convenient. The experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and Etracker can provide an average gaze accuracy of 0.53° at a rate of 30–60 Hz.

  1. A review on economic emission dispatch problems using quantum computational intelligence

    NASA Astrophysics Data System (ADS)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, limited natural resources and global warming make this topic a center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving EED problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper should encourage researchers to use more QCI-based algorithms to obtain better optimal results when solving EED problems.
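To give a flavor of the QCI methods surveyed, here is a minimal quantum-behaved PSO (QPSO) sketch minimizing a toy quadratic cost standing in for an EED objective. The contraction-expansion schedule, parameters, and function names are illustrative, not taken from the reviewed papers:

```python
import math
import random

def qpso(cost, dim=2, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    """Quantum-behaved PSO: each particle samples around an attractor
    between its personal best and the global best, with a step scaled
    by its distance from the mean best position (mbest)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters  # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n_particles
                 for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                u = max(rng.random(), 1e-12)
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                step = beta * abs(mbest[d] - xs[i][d]) * math.log(1 / u)
                xs[i][d] = (attractor + step if rng.random() < 0.5
                            else attractor - step)
            v = cost(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:
                    gbest, gval = list(xs[i]), v
    return gbest, gval
```

A real EED objective would replace the toy cost with fuel-cost and emission terms subject to generation constraints.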

  2. Solar collector parameter identification from unsteady data by a discrete-gradient optimization algorithm

    NASA Technical Reports Server (NTRS)

    Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.

    1980-01-01

    A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.

  3. Leveraging Python Interoperability Tools to Improve Sapphire's Usability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gezahegne, A; Love, N S

    2007-12-10

The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However, many users prefer higher-level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.

  4. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli B.; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.
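As a sketch of the first computation in this chain, top-of-atmosphere (TOA) reflectance is conventionally derived from at-sensor spectral radiance with the standard Landsat relation ρ = π·L·d² / (ESUN·cos θs). The function below shows only that textbook formula; the inputs are illustrative, not the product's calibration values:

```python
import math

def toa_reflectance(radiance, esun, earth_sun_dist_au, sun_elev_deg):
    """Top-of-atmosphere reflectance from at-sensor spectral radiance,
    band mean solar irradiance ESUN, Earth-Sun distance in AU, and
    sun elevation angle in degrees (theta_s = 90 deg - elevation)."""
    solar_zenith = math.radians(90.0 - sun_elev_deg)
    return (math.pi * radiance * earth_sun_dist_au ** 2 /
            (esun * math.cos(solar_zenith)))
```

Note that for the same radiance, a lower sun elevation (larger solar zenith angle) yields a higher reflectance estimate, since less solar irradiance reaches the scene.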

  5. Baseline mathematics and geodetics for tracking operations

    NASA Technical Reports Server (NTRS)

    James, R.

    1981-01-01

Various geodetic and mapping algorithms are analyzed as they apply to radar tracking systems and tested in extended BASIC computer language for real-time computer applications. Closed-form approaches for converting Earth-centered coordinates to latitude, longitude, and altitude are compared with classical approximations. A simplified approach to atmospheric refractivity, called gradient refraction, is compared with conventional ray-tracing processes. A detailed set of documentation, providing the theory, derivations, and application of the algorithms used in the programs, is included. Validation methods are also presented for testing the accuracy of the algorithms.
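The conversion from Earth-centered (ECEF) coordinates to latitude, longitude, and altitude discussed above has both closed-form and iterative treatments; a common classical approach is the fixed-point iteration sketched below on the WGS-84 ellipsoid (the report predates WGS-84, so the constants here are simply a modern stand-in), shown with the forward transform so it can be checked by a round trip:

```python
import math

# WGS-84 ellipsoid constants (illustrative choice of datum)
A = 6378137.0                 # semi-major axis (m)
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Forward transform: latitude/longitude/altitude to ECEF XYZ (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    return ((n + alt_m) * math.cos(lat) * math.cos(lon),
            (n + alt_m) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + alt_m) * math.sin(lat))

def ecef_to_geodetic(x, y, z, iters=10):
    """Classical fixed-point iteration for the inverse transform."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))  # spherical-like initial guess
    alt = 0.0
    for _ in range(iters):
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        alt = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - E2 * n / (n + alt)))
    return math.degrees(lat), math.degrees(lon), alt
```

The iteration converges to sub-millimeter accuracy in a handful of steps away from the poles, which is why closed-form alternatives are mainly valued for fixed execution time in real-time tracking loops.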

  6. A smartphone-based pain management app for adolescents with cancer: establishing system requirements and a pain care algorithm based on literature review, interviews, and consensus.

    PubMed

    Jibb, Lindsay A; Stevens, Bonnie J; Nathan, Paul C; Seto, Emily; Cafazzo, Joseph A; Stinson, Jennifer N

    2014-03-19

    Pain that occurs both within and outside of the hospital setting is a common and distressing problem for adolescents with cancer. The use of smartphone technology may facilitate rapid, in-the-moment pain support for this population. To ensure the best possible pain management advice is given, evidence-based and expert-vetted care algorithms and system design features, which are designed using user-centered methods, are required. To develop the decision algorithm and system requirements that will inform the pain management advice provided by a real-time smartphone-based pain management app for adolescents with cancer. A systematic approach to algorithm development and system design was utilized. Initially, a comprehensive literature review was undertaken to understand the current body of knowledge pertaining to pediatric cancer pain management. A user-centered approach to development was used as the results of the review were disseminated to 15 international experts (clinicians, scientists, and a consumer) in pediatric pain, pediatric oncology and mHealth design, who participated in a 2-day consensus conference. This conference used nominal group technique to develop consensus on important pain inputs, pain management advice, and system design requirements. Using data generated at the conference, a prototype algorithm was developed. Iterative qualitative testing was conducted with adolescents with cancer, as well as pediatric oncology and pain health care providers to vet and refine the developed algorithm and system requirements for the real-time smartphone app. The systematic literature review established the current state of research related to nonpharmacological pediatric cancer pain management. 
The 2-day consensus conference established which clinically important pain inputs by adolescents would require action (pain management advice) from the app, the appropriate advice the app should provide to adolescents in pain, and the functional requirements of the app. These results were used to build a detailed prototype algorithm capable of providing adolescents with pain management support based on their individual pain. Analysis of qualitative interviews with 9 multidisciplinary health care professionals and 10 adolescents resulted in 4 themes that helped to adapt the algorithm and requirements to the needs of adolescents. Specifically, themes were overall endorsement of the system, the need for a clinical expert, the need to individualize the system, and changes to the algorithm to improve potential clinical effectiveness. This study used a phased and user-centered approach to develop a pain management algorithm for adolescents with cancer and the system requirements of an associated app. The smartphone software is currently being created and subsequent work will focus on the usability, feasibility, and effectiveness testing of the app for adolescents with cancer pain.

  7. A Smartphone-Based Pain Management App for Adolescents With Cancer: Establishing System Requirements and a Pain Care Algorithm Based on Literature Review, Interviews, and Consensus

    PubMed Central

    Stevens, Bonnie J; Nathan, Paul C; Seto, Emily; Cafazzo, Joseph A; Stinson, Jennifer N

    2014-01-01

    Background Pain that occurs both within and outside of the hospital setting is a common and distressing problem for adolescents with cancer. The use of smartphone technology may facilitate rapid, in-the-moment pain support for this population. To ensure the best possible pain management advice is given, evidence-based and expert-vetted care algorithms and system design features, which are designed using user-centered methods, are required. Objective To develop the decision algorithm and system requirements that will inform the pain management advice provided by a real-time smartphone-based pain management app for adolescents with cancer. Methods A systematic approach to algorithm development and system design was utilized. Initially, a comprehensive literature review was undertaken to understand the current body of knowledge pertaining to pediatric cancer pain management. A user-centered approach to development was used as the results of the review were disseminated to 15 international experts (clinicians, scientists, and a consumer) in pediatric pain, pediatric oncology and mHealth design, who participated in a 2-day consensus conference. This conference used nominal group technique to develop consensus on important pain inputs, pain management advice, and system design requirements. Using data generated at the conference, a prototype algorithm was developed. Iterative qualitative testing was conducted with adolescents with cancer, as well as pediatric oncology and pain health care providers to vet and refine the developed algorithm and system requirements for the real-time smartphone app. Results The systematic literature review established the current state of research related to nonpharmacological pediatric cancer pain management. 
The 2-day consensus conference established which clinically important pain inputs by adolescents would require action (pain management advice) from the app, the appropriate advice the app should provide to adolescents in pain, and the functional requirements of the app. These results were used to build a detailed prototype algorithm capable of providing adolescents with pain management support based on their individual pain. Analysis of qualitative interviews with 9 multidisciplinary health care professionals and 10 adolescents resulted in 4 themes that helped to adapt the algorithm and requirements to the needs of adolescents. Specifically, themes were overall endorsement of the system, the need for a clinical expert, the need to individualize the system, and changes to the algorithm to improve potential clinical effectiveness. Conclusions This study used a phased and user-centered approach to develop a pain management algorithm for adolescents with cancer and the system requirements of an associated app. The smartphone software is currently being created and subsequent work will focus on the usability, feasibility, and effectiveness testing of the app for adolescents with cancer pain. PMID:24646454

  8. An Injury Severity-, Time Sensitivity-, and Predictability-Based Advanced Automatic Crash Notification Algorithm Improves Motor Vehicle Crash Occupant Triage.

    PubMed

    Stitzel, Joel D; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Schoell, Samantha L; Doud, Andrea N; Martin, R Shayn; Meredith, J Wayne

    2016-06-01

    Advanced Automatic Crash Notification algorithms use vehicle telemetry measurements to predict risk of serious motor vehicle crash injury. The objective of the study was to develop an Advanced Automatic Crash Notification algorithm to reduce response time, increase triage efficiency, and improve patient outcomes by minimizing undertriage (<5%) and overtriage (<50%), as recommended by the American College of Surgeons. A list of injuries associated with a patient's need for Level I/II trauma center treatment known as the Target Injury List was determined using an approach based on 3 facets of injury: severity, time sensitivity, and predictability. Multivariable logistic regression was used to predict an occupant's risk of sustaining an injury on the Target Injury List based on crash severity and restraint factors for occupants in the National Automotive Sampling System - Crashworthiness Data System 2000-2011. The Advanced Automatic Crash Notification algorithm was optimized and evaluated to minimize triage rates, per American College of Surgeons recommendations. The following rates were achieved: <50% overtriage and <5% undertriage in side impacts and 6% to 16% undertriage in other crash modes. Nationwide implementation of our algorithm is estimated to improve triage decisions for 44% of undertriaged and 38% of overtriaged occupants. Annually, this translates to more appropriate care for >2,700 seriously injured occupants and reduces unnecessary use of trauma center resources for >162,000 minimally injured occupants. The algorithm could be incorporated into vehicles to inform emergency personnel of recommended motor vehicle crash triage decisions. Lower under- and overtriage was achieved, and nationwide implementation of the algorithm would yield improved triage decision making for an estimated 165,000 occupants annually. Copyright © 2016. Published by Elsevier Inc.
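The risk model described above is a multivariable logistic regression on crash severity and restraint factors. A toy version with invented coefficients (emphatically not the study's fitted values) shows how a telemetry record maps to a triage recommendation via a risk threshold:

```python
import math

def target_injury_risk(delta_v_kph, belted, side_impact):
    """Toy logistic model of P(injury on the Target Injury List).
    All coefficients are hypothetical placeholders."""
    logit = (-6.0
             + 0.12 * delta_v_kph          # crash severity term
             + (0.8 if side_impact else 0.0)
             - (1.2 if belted else 0.0))   # restraint term
    return 1.0 / (1.0 + math.exp(-logit))

def triage(risk, threshold=0.20):
    """Recommend trauma-center transport when predicted risk exceeds a
    threshold chosen to balance under- and overtriage."""
    return ("Level I/II trauma center" if risk >= threshold
            else "standard transport")
```

In the study's framing, the threshold is tuned so that undertriage stays below 5% and overtriage below 50%, per American College of Surgeons recommendations.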

  9. Optimizing Energy Consumption in Vehicular Sensor Networks by Clustering Using Fuzzy C-Means and Fuzzy Subtractive Algorithms

    NASA Astrophysics Data System (ADS)

    Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.

    2017-09-01

    Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out based on vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors, sensing the urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. Energy consumption by the sensor nodes is a main problem in wireless sensor networks (WSNs) and the most important consideration in designing these networks. Clustering the sensor nodes is considered an effective solution to reduce the energy consumption of WSNs. Each cluster has a Cluster Head (CH) and a number of nodes located within its supervision area. The cluster head is responsible for gathering and aggregating the information of its cluster and transmitting it to the data collection center. Hence, clustering decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, the Fuzzy C-Means (FCM) and Fuzzy Subtractive algorithms are employed to cluster sensors, and their effects on the energy consumption of the sensors are investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of the vehicle sensors by up to 90.68% and 92.18%, respectively; the Fuzzy Subtractive algorithm thus yields roughly a 1.5 percentage-point improvement over FCM.
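    The cluster-head selection idea behind fuzzy subtractive clustering can be sketched as follows, assuming a Chiu-style potential function with illustrative radius and stopping parameters (this is a generic sketch, not the paper's implementation):

```python
import math

def subtractive_clustering(points, ra=1.0, stop_ratio=0.15):
    """Fuzzy subtractive clustering sketch: each point's 'potential' is
    a sum of Gaussian contributions from all points; the highest-
    potential point becomes a cluster head, and potential near each
    chosen head is then suppressed. Parameters ra and stop_ratio are
    illustrative assumptions."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2      # suppression radius rb = 1.5 * ra
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    potential = [sum(math.exp(-alpha * d2(p, q)) for q in points)
                 for p in points]
    centers = []
    first_peak = max(potential)
    while True:
        p_max = max(potential)
        if p_max < stop_ratio * first_peak:
            break
        c = points[potential.index(p_max)]
        centers.append(c)
        # suppress potential around the newly selected cluster head
        potential = [p - p_max * math.exp(-beta * d2(q, c))
                     for p, q in zip(potential, points)]
    return centers
```

On two well-separated groups of points this returns one head per group, which is the behavior that lets cluster heads aggregate traffic for their neighborhood.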

  10. Adaptive Augmenting Control Flight Characterization Experiment on an F/A-18

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Gilligan, Eric T.; Wall, John H.; Orr, Jeb S.; Miller, Christopher J.; Hanson, Curtis E.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an Adaptive Augmenting Control (AAC) algorithm for launch vehicles that improves robustness and performance by adapting an otherwise well-tuned classical control algorithm to unexpected environments or variations in vehicle dynamics. This AAC algorithm is currently part of the baseline design for the SLS Flight Control System (FCS), but prior to this series of research flights it was the only component of the autopilot design that had not been flight tested. The Space Launch System (SLS) flight software prototype, including the adaptive component, was recently tested on a piloted aircraft at Dryden Flight Research Center (DFRC) which has the capability to achieve a high level of dynamic similarity to a launch vehicle. Scenarios for the flight test campaign were designed specifically to evaluate the AAC algorithm to ensure that it is able to achieve the expected performance improvements with no adverse impacts in nominal or near-nominal scenarios. Having completed the recent series of flight characterization experiments on DFRC's F/A-18, the AAC algorithm's capability, robustness, and reproducibility have been successfully demonstrated. Thus, the entire SLS control architecture has been successfully flight tested in a relevant environment. This has increased NASA's confidence that the autopilot design is ready to fly on the SLS Block I vehicle and will exceed the performance of previous architectures.

  11. Validation of the "HAMP" mapping algorithm: a tool for long-term trauma research studies in the conversion of AIS 2005 to AIS 98.

    PubMed

    Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard

    2011-07-01

    There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm, the "Harborview AIS Mapping Program (HAMP)," to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from the AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP and manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
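    The agreement statistic used in the validation can be computed as follows (unweighted Cohen's kappa; a generic sketch, not part of HAMP itself):

```python
def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for agreement between two coders,
    e.g. manually determined AIS codes vs. algorithm-derived codes.
    Inputs are equal-length lists of categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # observed agreement: fraction of identical codes
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal label rates
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields κ = 1, while agreement no better than chance yields κ = 0; the reported κ = 0.84 sits well into the "almost perfect" range of the usual interpretation scale.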

  12. TRACON Aircraft Arrival Planning and Optimization Through Spatial Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Bergh, Christopher P.; Krzeczowski, Kenneth J.; Davis, Thomas J.; Denery, Dallas G. (Technical Monitor)

    1995-01-01

    A new aircraft arrival planning and optimization algorithm has been incorporated into the Final Approach Spacing Tool (FAST) in the Center-TRACON Automation System (CTAS) developed at NASA-Ames Research Center. FAST simulations have been conducted over three years involving full-proficiency, level five air traffic controllers from around the United States. From these simulations an algorithm, called Spatial Constraint Satisfaction, was designed, coded, and tested, and will soon begin field evaluation at the Dallas-Fort Worth and Denver International airport facilities. This new design attempts to show that the generation of efficient and conflict-free aircraft arrival plans at the runway does not guarantee an operationally acceptable arrival plan upstream from the runway; information encompassing the entire arrival airspace must be used in order to create an acceptable aircraft arrival plan. The new design retains previously available functions but additionally includes necessary representations of controller preferences and workload, operationally required amounts of extra separation, and integrated aircraft conflict resolution. As a result, the Spatial Constraint Satisfaction algorithm produces an optimized aircraft arrival plan that is more acceptable in terms of arrival procedures and air traffic controller workload. This paper discusses the current Air Traffic Control arrival planning procedures, previous work in this field, the design of the Spatial Constraint Satisfaction algorithm, and the results of recent evaluations of the algorithm.

  13. Research on large spatial coordinate automatic measuring system based on multilateral method

    NASA Astrophysics Data System (ADS)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, a manipulator-based automatic measurement system founded on the multilateral method was developed. The system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that no laser beam is broken during the measurement process, an optimization function is constructed using the vectors between the laser tracers' measuring centers and the cat's-eye reflector's measuring center, and an algorithm that automatically adjusts the reflector's orientation is proposed; with this algorithm, the laser tracers are able to track the reflector throughout the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment was conducted in a 5 m × 3 m × 3.2 m range, and the algorithm was used to automatically plan the reflector orientations corresponding to 24 given points. After improving the orientations of a minority of points with adverse angles, the final results were used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The result shows that the proposed algorithm helps the developed system measure spatial coordinates efficiently over a large range.

  14. The center for causal discovery of biomedical knowledge from big data.

    PubMed

    Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard

    2015-11-01

    The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. New descriptor for skeletons of planar shapes: the calypter

    NASA Astrophysics Data System (ADS)

    Pirard, Eric; Nivart, Jean-Francois

    1994-05-01

    The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is a nondigitizable one. The idea presented in this paper is to incorporate the skeleton information and the chain-code of the contour into a single descriptor by associating to each point of a contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called the calypter. The encoding of a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing on the distance relief from each point of the contour towards the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. The major interest of this holodisc transform is to confer 8-connectivity on the isolevels of the generated distance relief, thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure delivering high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.

  16. Quality Control Algorithms for the Kennedy Space Center 50-Megahertz Doppler Radar Wind Profiler Winds Database

    NASA Technical Reports Server (NTRS)

    Barbre, Robert E., Jr.

    2012-01-01

    This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler (DRWP) for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of using the currently archived databases from weather balloons. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five-minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data from various forms of atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and excessive first-guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets existing relative to the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.

  17. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  18. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    NASA Technical Reports Server (NTRS)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will be warning on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper will demonstrate improved performance of an enveloping algorithm of the electric field mill data as compared with the electric field zero crossing to identify initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.

  19. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of a corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.

  20. Automatic speech recognition research at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.

    1977-01-01

    A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system (VCS) encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. A recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit 120-bit encodings to an external computer for recognition.
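    The rate-compensation step described above, dividing each utterance into eight subintervals of equal total spectral change, can be sketched as follows (a hypothetical reconstruction, not the actual Scope Electronics implementation):

```python
def equal_change_subintervals(frames, n_sub=8):
    """Split an utterance (a sequence of filter-bank frames) into n_sub
    subintervals so that each contains roughly the same total spectral
    change, mirroring the VCS time-compression idea. A sketch under
    assumed L1 spectral-change distance."""
    # frame-to-frame spectral change (L1 distance between filter vectors)
    change = [sum(abs(a - b) for a, b in zip(frames[i], frames[i + 1]))
              for i in range(len(frames) - 1)]
    total = sum(change) or 1.0
    boundaries, acc, k = [0], 0.0, 1
    for i, c in enumerate(change):
        acc += c
        # place a boundary each time another 1/n_sub of the change accrues
        if acc >= k * total / n_sub and k < n_sub:
            boundaries.append(i + 1)
            k += 1
    boundaries.append(len(frames))
    return boundaries  # n_sub + 1 indices delimiting the subintervals
```

For an utterance with uniform spectral change the boundaries fall at equal spacing; for one with a burst of change, the boundaries crowd around the burst, which is exactly how the scheme normalizes speaking rate.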

  1. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithms, were developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least-squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL⁻¹ for CXZ, ACF and PAR, respectively, by both PCCA and MCR, while the PLS model was built for the three compounds each in the range of 2-10 μg mL⁻¹. The results obtained from the proposed methods were statistically compared with a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross validation and an independent data set. They were found suitable for the determination of the studied drugs in bulk powder and tablets.
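    The core arithmetic of the mean centering of ratio spectra (MCR) technique can be sketched as follows; the spectra here are generic equal-length absorbance lists, and the example values are illustrative, not data from the paper:

```python
def mean_centered_ratio(mixture, divisor):
    """Mean centering of ratio spectra (MCR) sketch: divide the mixture
    spectrum by the divisor spectrum of an interfering component, then
    subtract the mean of the ratio spectrum. The divisor component's
    contribution becomes a constant in the ratio spectrum, so mean
    centering cancels it, leaving a signal proportional to the other
    component's concentration."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    mean = sum(ratio) / len(ratio)
    return [r - mean for r in ratio]
```

A quick sanity check: a "mixture" that is a pure multiple of the divisor mean-centers to zero everywhere, while doubling the other component's contribution doubles the mean-centered amplitude, which is the linearity that quantification at a chosen wavelength relies on.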

  2. A robust firearm identification algorithm of forensic ballistics specimens

    NASA Astrophysics Data System (ADS)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties in the existing firearm identification algorithms, including the need for physical interpretation and high time consumption. Therefore, the aim of this study is to propose a robust firearm identification algorithm based on extracting a set of informative features from a segmented region of interest (ROI) in simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least-squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform identification on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.

  3. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
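    The intensity-weighted center-of-mass step mentioned above is a standard computation and can be sketched as (a generic sketch, not the NASA implementation):

```python
def intensity_weighted_centroid(image):
    """Intensity-weighted center of mass of a grayscale object.
    `image` is a 2-D list of pixel intensities with background pixels
    set to zero; returns (x, y) in pixel coordinates."""
    total = wx = wy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v        # total intensity (the "mass")
            wx += v * x       # intensity-weighted column sum
            wy += v * y       # intensity-weighted row sum
    return (wx / total, wy / total)
```

Weighting by intensity rather than simple pixel membership gives sub-pixel resolution, since a bright pixel pulls the centroid toward it more strongly than a dim one.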

  4. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as the droopy eyelids and light reflections while maintaining the measurement resolution available by the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.

  5. Cloud classification from satellite data using a fuzzy sets algorithm: A polar example

    NASA Technical Reports Server (NTRS)

    Key, J. R.; Maslanik, J. A.; Barry, R. G.

    1988-01-01

    Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
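    The membership assignment at the heart of FCM, where every observation belongs to every cluster with membership falling off with distance to each center, can be sketched via the standard update rule (fuzzifier m = 2 here is an illustrative choice):

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of Fuzzy c-means: each point receives
    a membership value in every cluster, inversely related to its
    distance to each cluster center. Standard update rule
    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)); a sketch, not the
    paper's AVHRR classifier."""
    def d(p, c):
        # Euclidean distance, floored to avoid division by zero
        return max(1e-12, sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5)
    exp = 2.0 / (m - 1.0)
    u = []
    for p in points:
        dists = [d(p, c) for c in centers]
        u.append([1.0 / sum((di / dj) ** exp for dj in dists)
                  for di in dists])
    return u
```

A point lying on a cluster center gets membership near 1 in that cluster, while a point equidistant from two centers gets 0.5 in each; the memberships for each point always sum to 1, which is what makes the "diffuse boundary" interpretation possible.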

  6. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    PubMed Central

    Liu, Jingxian; Wu, Kefeng

    2017-01-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have therefore attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety, so the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix: the top k principal components with above 95% cumulative contribution rate are extracted, and the number of centers k is thereby chosen. The k centers are then found by an improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. To improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance.
Numerous experiments on realistic AIS trajectory datasets in a bridge-area waterway and the Mississippi River have been conducted to compare the proposed method with traditional spectral clustering and fast affinity propagation clustering. Experimental results illustrate its superior performance in terms of quantitative and qualitative evaluations. PMID:28777353
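    The DTW similarity measure used in the first step is the classic dynamic program, sketched here for 1-D trajectories (a generic textbook implementation, not the paper's code):

```python
def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D trajectories.
    Classic O(len(s) * len(t)) dynamic program: d[i][j] is the minimal
    cumulative cost of aligning s[:i] with t[:j]."""
    inf = float("inf")
    n, m = len(s), len(t)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch s
                                 d[i][j - 1],      # stretch t
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]
```

Unlike a pointwise Euclidean distance, DTW tolerates trajectories sampled at different speeds: a track that repeats an intermediate position still aligns at zero cost with the un-repeated version, which is why it suits vessel tracks with uneven AIS reporting intervals.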

  7. A novel artificial immune algorithm for spatial clustering with obstacle constraint and its applications.

    PubMed

    Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji

    2014-01-01

    An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. First, we present a path-searching algorithm to approximate the obstacle distance between two points while dealing with obstacles and facilitators. Taking obstacle distance as the similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to the public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.

  8. Modified Polar-Format Software for Processing SAR Data

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2003-01-01

    HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.

  9. Multi-robot task allocation based on two dimensional artificial fish swarm algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Taixiong; Li, Xueqin; Yang, Liangyi

    2007-12-01

    The problem of task allocation for multiple robots is to allocate more related tasks to fewer suitable robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.

  10. Highly Asynchronous VisitOr Queue Graph Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, R.

    2012-10-01

    HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures in external memory that is typically mapped to high-performance locally attached NAND flash arrays. The framework supports a vertex-centered visitor programming model and has been used to implement breadth-first search, connected components, and single-source shortest path.

  11. Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm

    NASA Astrophysics Data System (ADS)

    Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.

    2011-12-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands from microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration, in which the coefficients are continuously updated as new MW rain rates arrive, and calibration using static coefficients derived from IR-MW data from past observations.
    We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the Tropical Rainfall Measuring Mission (TRMM) multi-satellite product TRMM-3B42 and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the accuracy of hydrologic forecasting models over the Nile Basin and using the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
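The two-step structure named above (discriminant-based rain/no-rain separation, then regression of rain rate on IR predictors against MW predictands) can be sketched as below. This is a hypothetical minimal version using a Fisher linear discriminant and ordinary least squares; the operational SCaMPR uses stepwise regression and its own predictor selection, which are not reproduced here.

```python
import numpy as np

def fit_rain_estimator(X, y, rain_thresh=0.1):
    """X: (n, p) IR predictors; y: (n,) MW rain-rate predictands.
    Step 1: Fisher discriminant for rain/no-rain; step 2: least squares."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    rain = y > rain_thresh
    # Step 1: Fisher linear discriminant for rain / no-rain separation
    m1, m0 = X[rain].mean(axis=0), X[~rain].mean(axis=0)
    Sw = np.atleast_2d(np.cov(X[rain].T) + np.cov(X[~rain].T))
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = 0.5 * (m1 + m0) @ w
    # Step 2: regression of rain rate on predictors over raining pixels only
    A = np.c_[np.ones(rain.sum()), X[rain]]
    coef, *_ = np.linalg.lstsq(A, y[rain], rcond=None)

    def estimate(Xnew):
        Xnew = np.asarray(Xnew, float)
        is_rain = Xnew @ w > thresh
        rate = np.c_[np.ones(len(Xnew)), Xnew] @ coef
        return np.where(is_rain, np.clip(rate, 0.0, None), 0.0)

    return estimate
```

In the real-time calibration mode described in the abstract, the fit would simply be re-run on a moving window of recent IR-MW match-ups.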

  12. The Swiss Data Science Center on a mission to empower reproducible, traceable and reusable science

    NASA Astrophysics Data System (ADS)

    Schymanski, Stanislaus; Bouillet, Eric; Verscheure, Olivier

    2017-04-01

    Our abilities to collect, store and analyse scientific data have sky-rocketed in the past decades, but at the same time, a disconnect between data scientists, domain experts and data providers has begun to emerge. Data scientists are developing more and more powerful algorithms for data mining and analysis, while data providers are making more and more data publicly available, and yet many, if not most, discoveries are based on specific data and/or algorithms that "are available from the authors upon request". In the strong belief that scientific progress would be much faster if reproduction and re-use of such data and algorithms were made easier, the Swiss Data Science Center (SDSC) has committed to provide an open framework for the handling and tracking of scientific data and algorithms, from raw data and first-principle equations to final data products and visualisations, modular simulation models and benchmark evaluation algorithms. Led jointly by EPFL and ETH Zurich, the SDSC is composed of a distributed multi-disciplinary team of data scientists and experts in select domains. The center aims to federate data providers, data and computer scientists, and subject-matter experts around a cutting-edge analytics platform offering user-friendly tooling and services to help with the adoption of Open Science, fostering research productivity and excellence. In this presentation, we will discuss our vision of a highly scalable, open yet secure community-based platform for sharing, accessing, exploring, and analyzing scientific data in easily reproducible workflows, augmented by automated provenance and impact tracking, knowledge graphs, fine-grained access rights and digital rights management, and a variety of domain-specific software tools. For maximum interoperability, transparency and ease of use, we plan to utilize notebook interfaces wherever possible, such as Apache Zeppelin and Jupyter. Feedback and suggestions from the audience will be gratefully considered.

  13. GLONASS orbit/clock combination in VNIIFTRI

    NASA Astrophysics Data System (ADS)

    Bezmenov, I.; Pasynok, S.

    2015-08-01

    An algorithm and a program for combining GLONASS satellite orbits and clocks from the daily precise orbits submitted by several analysis centers were developed. Theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the orbit RMS values of the analysis centers remain commensurable over a long time interval, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any individual analysis center.
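The abstract does not specify the combination rule, so the sketch below uses a common assumption: an inverse-variance weighted mean of the per-center positions. Under that assumption the combined solution has the stated property that its formal RMS never exceeds the RMS of any contributing center.

```python
import numpy as np

def combine_orbits(positions, rms):
    """Inverse-variance weighted combination of per-center satellite positions.
    positions: (n_centers, 3) position estimates; rms: per-center orbit RMS.
    NOTE: the weighting scheme is an illustrative assumption, not the
    published VNIIFTRI combination algorithm."""
    w = 1.0 / np.asarray(rms, float) ** 2
    combined = (np.asarray(positions, float) * w[:, None]).sum(axis=0) / w.sum()
    combined_rms = 1.0 / np.sqrt(w.sum())   # never exceeds min(rms)
    return combined, combined_rms
```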

  14. An Algorithm for Simple and Complex Feature Detection: From Retina to Primary Visual Cortex

    DTIC Science & Technology

    1993-02-01

    (Only fragmentary indexing excerpts of this report are available: anatomical details of the thalamic lateral geniculate nucleus (LGN) drawn from Jones (1985); a citation of J. C. Horton (1984), "Receptive field properties in the cat's area 17 in the advance of on-center geniculate input," Journal of Neuroscience, 4; and glossary entries for thalamic sustained principal on-center and off-center elements.)

  15. Brief report: Comparison of methods to identify Iraq and Afghanistan war veterans using Department of Veterans Affairs administrative data.

    PubMed

    Bangerter, Ann; Gravely, Amy; Cutting, Andrea; Clothier, Barb; Spoont, Michele; Sayer, Nina

    2010-01-01

    The Department of Veterans Affairs (VA) has made treatment and care of Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF) veterans a priority. Researchers face challenges identifying the OIF/OEF population because until fiscal year 2008, no indicator of OIF/OEF service was present in the Veterans Health Administration (VHA) administrative databases typically used for research. In this article, we compare an algorithm we developed to identify OIF/OEF veterans using the Austin Information Technology Center administrative data with the VHA Support Service Center OIF/OEF Roster and veterans' self-report of military service. We drew data from two different institutional review board-approved funded studies. The positive predictive value of our algorithm compared with the VHA Support Service Center OIF/OEF Roster and self-report was 92% and 98%, respectively. However, this method of identifying OIF/OEF veterans failed to identify a large proportion of OIF/OEF veterans listed in the VHA Support Service Center OIF/OEF Roster. Demographic, diagnostic, and VA service use differences were found between veterans identified using our method and those we failed to identify but who were in the VHA Support Service Center OIF/OEF Roster. Therefore, depending on the research objective, this method may not be a viable alternative to the VHA Support Service Center OIF/OEF Roster for identifying OIF/OEF veterans.

  16. The Self-Directed Violence Classification System and the Columbia Classification Algorithm for Suicide Assessment: A Crosswalk

    ERIC Educational Resources Information Center

    Matarazzo, Bridget B.; Clemans, Tracy A.; Silverman, Morton M.; Brenner, Lisa A.

    2013-01-01

    The lack of a standardized nomenclature for suicide-related thoughts and behaviors prompted the Centers for Disease Control and Prevention, with the Veterans Integrated Service Network 19 Mental Illness Research Education and Clinical Center, to create the Self-Directed Violence Classification System (SDVCS). SDVCS has been adopted by the…

  17. Computational mechanics and physics at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr.

    1987-01-01

    An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.

  18. Robert Spencer | NREL

    Science.gov Websites

    Research interests: remote sensing, natural resource modeling, machine learning. Areas of expertise: geospatial analysis, data visualization, algorithm development, modeling.

  19. MODIS Snow and Sea Ice Products

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Riggs, George A.; Salomonson, Vincent V.

    2004-01-01

    In this chapter, we describe the suite of Earth Observing System (EOS) Moderate-Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua snow and sea ice products. Global, daily products, developed at Goddard Space Flight Center, are archived and distributed through the National Snow and Ice Data Center at various resolutions and on different grids useful for different communities. Snow products include binary snow cover, snow albedo, and, in the near future, fraction of snow in a 500-m pixel. Sea ice products include ice extent determined with two different algorithms, and sea ice surface temperature. The algorithms used to develop these products are described. Both the snow and sea ice products, available since February 24, 2000, are useful for modelers. Validation of the products is also discussed.

  20. Torsion effect of swing frame on the measurement of horizontal two-plane balancing machine

    NASA Astrophysics Data System (ADS)

    Wang, Qiuxiao; Wang, Dequan; He, Bin; Jiang, Pan; Wu, Zhaofu; Fu, Xiaoyan

    2017-03-01

    In this paper, a vibration model of the swing frame of a two-plane balancing machine is first established to calculate the position of the vibration center of the swing frame. The torsional stiffness formula for the spring plate twisting around the vibration center is then derived using the superposition principle. Finally, dynamic balancing experiments demonstrate the shortcoming of the A-B-C algorithm, which ignores the torsion effect, and show that the torsional stiffness deduced from experiments is consistent with the torsional stiffness calculated by theory. The experimental data show the influence of the torsion effect of the swing frame on the separation ratio of such balancing machines, revealing the sources of measurement error and delimiting the application scope of the A-B-C algorithm.

  1. Real-time automated failure analysis for on-orbit operations

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.

  2. Three-dimensional automated choroidal volume assessment on standard spectral-domain optical coherence tomography and correlation with the level of diabetic macular edema.

    PubMed

    Gerendas, Bianca S; Waldstein, Sebastian M; Simader, Christian; Deak, Gabor; Hajnajeeb, Bilal; Zhang, Li; Bogunovic, Hrvoje; Abramoff, Michael D; Kundi, Michael; Sonka, Milan; Schmidt-Erfurth, Ursula

    2014-11-01

    To measure choroidal thickness on spectral-domain optical coherence tomography (SD OCT) images using automated algorithms and to correlate choroidal pathology with retinal changes attributable to diabetic macular edema (DME). Post hoc analysis of multicenter clinical trial baseline data. SD OCT raster scans/fluorescein angiograms were obtained from 284 treatment-naïve eyes of 142 patients with clinically significant DME and from 20 controls. Three-dimensional (3D) SD OCT images were evaluated by a certified independent reading center analyzing retinal changes associated with diabetic retinopathy. Choroidal thicknesses were analyzed using a fully automated algorithm. Angiograms were assessed manually. Multiple endpoint correction according to Bonferroni-Holm was applied. Main outcome measures were average retinal/choroidal thickness on fovea-centered or peak of edema (thickest point of edema)-centered Early Treatment Diabetic Retinopathy Study grid, maximum area of leakage, and the correlation between retinal and choroidal thicknesses. Total choroidal thickness is significantly reduced in DME (175 ± 23 μm; P = .0016) and nonedematous fellow eyes (177 ± 20 μm; P = .009) of patients compared with healthy control eyes (190 ± 23 μm). Retinal/choroidal thickness values showed no significant correlation (1-mm: P = .27, r(2) = 0.01; 3-mm: P = .96, r(2) < 0.0001; 6-mm: P = .42, r(2) = 0.006). No significant difference was found in the 1- or 3-mm circle of a retinal peak of edema-centered grid. All other measurements of choroidal/retinal thickness (DME vs healthy, DME vs peak of edema-centered, DME vs fellow, healthy vs fellow, peak of edema-centered vs healthy, peak of edema-centered vs fellow eyes) were compared but no statistically significant correlation was found. By tendency a thinner choroid correlates with larger retinal leakage areas. Automated algorithms can be used to reliably assess choroidal thickness in eyes with DME. 
Choroidal thickness was generally reduced in patients with diabetes if DME is present in 1 eye; however, no correlation was found between choroidal/retinal pathologies, suggesting different pathogenetic pathways. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

    A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without reduction in centers, and better than a conventional linear equalizer.
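The two-step reduction can be sketched as follows. The boundary test (keep a channel state if some opposite-label state is nearby) and the greedy grouping are plausible stand-ins, assumed for illustration; the paper's exact selection algorithm, `margin`, and `group_radius` values are not specified in the abstract.

```python
import numpy as np

def reduce_centers(states, labels, margin, group_radius):
    """Two-step center reduction sketch: (1) keep only channel states close to
    the decision boundary; (2) merge nearby survivors into averaged centers.
    `margin` and `group_radius` are illustrative tuning parameters."""
    states, labels = np.asarray(states, float), np.asarray(labels)
    # Step 1: a state is near the boundary if some opposite-label state is close
    keep = [i for i, s in enumerate(states)
            if np.min(np.linalg.norm(states[labels != labels[i]] - s, axis=1)) < margin]
    reduced = states[keep]
    # Step 2: greedy grouping; each group is represented by its mean position
    centers, used = [], np.zeros(len(reduced), dtype=bool)
    for i in range(len(reduced)):
        if used[i]:
            continue
        group = (np.linalg.norm(reduced - reduced[i], axis=1) < group_radius) & ~used
        centers.append(reduced[group].mean(axis=0))
        used |= group
    return np.array(centers)
```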

  4. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  5. A Novel Center Star Multiple Sequence Alignment Algorithm Based on Affine Gap Penalty and K-Band

    NASA Astrophysics Data System (ADS)

    Zou, Quan; Shan, Xiao; Jiang, Yi

    Multiple sequence alignment is one of the most important topics in computational biology, but existing methods cannot yet handle very large datasets. With the development of copy-number variation (CNV) and single-nucleotide polymorphism (SNP) research, many researchers need to align large numbers of similar sequences to detect CNVs and SNPs. In this paper, we propose a novel center-star multiple sequence alignment algorithm based on affine gap penalties and the k-band technique. It aligns more quickly and accurately, which will be helpful for mining CNVs and SNPs. Experiments demonstrate the performance of our algorithm.
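The k-band idea behind the speedup can be illustrated on plain edit distance: for similar sequences the optimal alignment path stays near the main diagonal of the dynamic-programming table, so only a band of half-width k needs to be filled. This sketch uses unit costs rather than the paper's affine gap penalties, and the center-star pairing step is omitted.

```python
def kband_distance(a, b, k):
    """Levenshtein distance restricted to a band of half-width k around the
    main diagonal: O(k*n) cells instead of O(n*m), valid when the optimal
    alignment needs at most k insertions/deletions (similar sequences)."""
    INF = float("inf")
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None                     # band cannot contain an alignment
    prev = {j: j for j in range(min(m, k) + 1)}   # row 0 of the DP table
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            best = prev.get(j, INF) + 1                       # deletion
            best = min(best, cur.get(j - 1, INF) + 1)         # insertion
            if j > 0:
                best = min(best, prev.get(j - 1, INF) + (a[i - 1] != b[j - 1]))
            else:
                best = min(best, i)     # first column: i deletions
            cur[j] = best
        prev = cur
    return prev[m]
```

In a center-star scheme, the chosen center sequence would be aligned pairwise against every other sequence with this banded recurrence.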

  6. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
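A heavily simplified one-dimensional stand-in for the design above: a scalar Kalman filter whose process-noise variance is re-estimated from the recent innovation sequence. The recursive maximum-likelihood identification and its matrix formulation are not reproduced; the window-variance adaptation below is an illustrative assumption, as are the parameter values.

```python
import numpy as np

def adaptive_kalman_1d(z, q0=1e-3, r=0.1, window=20):
    """Scalar-state sketch: a Kalman filter whose process-noise variance q is
    re-estimated online from the recent innovations, a simplified stand-in
    for the recursive maximum-likelihood identification step."""
    x, p, q = 0.0, 1.0, q0
    innovations, estimates = [], []
    for zk in z:
        p = p + q                      # predict (random-walk state model)
        s = p + r                      # innovation variance
        k = p / s                      # Kalman gain
        nu = zk - x                    # innovation
        x = x + k * nu                 # measurement update
        p = (1.0 - k) * p
        innovations.append(nu)
        if len(innovations) >= window: # crude ML-style adaptation of q
            q = max(np.var(innovations[-window:]) - r, 1e-6)
        estimates.append(x)
    return np.array(estimates)
```

The attitude-determination filter of the paper operates on a vector state with the identification running alongside the filter in the same recursive fashion.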

  7. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  8. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  9. On Algorithms for Nonlinear Minimax and Min-Max-Min Problems and Their Efficiency

    DTIC Science & Technology

    2011-03-01

    (Only fragmentary indexing excerpts of this dissertation are available. They indicate that it treats smoothing algorithms for minimax and min-max-min problems, which must balance the accuracy of the approximation with problem ill-conditioning, and that applications include sizing in electronic circuit boards (Chen & Fan, 1998), obstacle avoidance for robots (Kirjner-Neto & Polak, 1998), and optimal design centering.)

  10. Evolvable Hardware for Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William

    2004-01-01

    This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to use evolutionary algorithms in a variety of NASA applications ranging from spacecraft antenna design, fault tolerance for programmable logic chips, atomic force field parameter fitting, analog circuit design, and earth observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.

  11. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads, preventing hosts from overloading after VM placement and reducing SLA violations dramatically.
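The consolidation idea (pack VMs by remaining utilization, then power off empty hosts) can be sketched as a greedy best-fit placement. The largest-first ordering, the single-resource capacity model, and the shutdown rule below are illustrative assumptions; the paper's RUA and PA algorithms are more elaborate.

```python
def place_vms(vm_demands, host_capacity, n_hosts):
    """Best-fit sketch guided by remaining utilization: each VM (largest
    first) goes to the host whose remaining capacity after placement would be
    smallest, keeping load packed; hosts left untouched are candidates for
    shutdown (the power-aware step)."""
    remaining = [host_capacity] * n_hosts
    assignments = []                     # (demand, host) pairs
    for d in sorted(vm_demands, reverse=True):
        feasible = [(remaining[h] - d, h) for h in range(n_hosts) if remaining[h] >= d]
        if not feasible:
            raise ValueError("insufficient total capacity")
        _, h = min(feasible)             # least remaining utilization wins
        remaining[h] -= d
        assignments.append((d, h))
    powered_off = [h for h in range(n_hosts) if remaining[h] == host_capacity]
    return assignments, powered_off
```

Tightening the packing lowers energy use but raises the risk of host overload, which is exactly the energy/SLA trade-off the abstract reports.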

  12. High Precision Edge Detection Algorithm for Mechanical Parts

    NASA Astrophysics Data System (ADS)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step-edge normal section line of the backlight image is constructed by combining the point spread function with the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is thus determined by searching for the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result of the gear measurement center, indicating that the method is sufficiently reliable for high-precision measurement.
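The "mean point" of a Gaussian-integral step edge can be located with a much simpler stand-in than the full model fit: because the Gaussian-blurred step is symmetric about the edge, the edge sits where the intensity profile crosses the midpoint between the two plateau gray levels. This sketch uses that midpoint crossing with linear interpolation; the paper's actual fit of the Gaussian integral model is not reproduced.

```python
import numpy as np

def subpixel_edge(profile):
    """Simplified 'mean point' search: the Gaussian-integral step model is
    symmetric about the edge, so the edge lies where the profile crosses the
    midpoint between the two plateau gray levels (linear interpolation)."""
    y = np.asarray(profile, dtype=float)
    lo, hi = y[:3].mean(), y[-3:].mean()          # plateau gray levels
    if hi == lo:
        return 0.0                                # degenerate: no step present
    mid = 0.5 * (lo + hi)
    crossed = (y - mid) * np.sign(hi - lo) >= 0   # past the midpoint?
    i = int(np.argmax(crossed))
    if i == 0:
        return 0.0
    t = (mid - y[i - 1]) / (y[i] - y[i - 1])      # linear interpolation
    return (i - 1) + t
```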

  13. An Autonomous Navigation Algorithm for High Orbit Satellite Using Star Sensor and Ultraviolet Earth Sensor

    PubMed Central

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

    An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. The star images are sampled by FOV1, and the ultraviolet earth images are sampled by FOV2. The star identification algorithm and star tracking algorithm are executed at FOV1. Then, the optical axis direction of FOV1 in the J2000.0 coordinate system is calculated. The ultraviolet image of the earth is sampled by FOV2, and the center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the center vector of the earth from FOV2. The position accuracy of the autonomous navigation for the satellite is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sinusoidal errors of the autonomous navigation are eliminated. The autonomous navigation for the satellite with a sensor that integrates an ultraviolet earth sensor and a star sensor is highly robust. PMID:24250261

  14. Resistance Training Exercise Program for Intervention to Enhance Gait Function in Elderly Chronically Ill Patients: Multivariate Multiscale Entropy for Center of Pressure Signal Analysis

    PubMed Central

    Jiang, Bernard C.

    2014-01-01

    Falls are unpredictable accidents, and the resulting injuries can be serious in the elderly, particularly those with chronic diseases. Regular exercise is recommended to prevent and treat hypertension and other chronic diseases by reducing clinical blood pressure. The "complexity index" (CI), based on the multiscale entropy (MSE) algorithm, has been applied in recent studies to characterize a person's adaptability to intrinsic and external perturbations and is a widely used measure of postural sway or stability. Multivariate multiscale entropy (MMSE) is an advanced algorithm used to calculate the complexity index (CI) values of the center of pressure (COP) data. In this study, we applied MSE and MMSE to analyze the gait function of 24 elderly, chronically ill patients (44% female; 56% male; mean age, 67.56 ± 10.70 years) with either cardiovascular disease, diabetes mellitus, or osteoporosis. After a 12-week training program, postural stability measurements showed significant improvements. Our results showed beneficial effects of resistance training, which can be used to improve postural stability in the elderly, and indicated that the MMSE algorithm for calculating the CI of the COP data was superior to the multiscale entropy (MSE) algorithm in identifying the sense of balance in the elderly. PMID:25295070
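The coarse-graining plus sample-entropy construction behind the CI can be sketched for the univariate (MSE) case as follows. The multivariate MMSE extension is not reproduced, and the parameter choices (m = 2, tolerance 0.15 of the standard deviation, number of scales) are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn: -log of the conditional probability that template vectors
    matching for m points (within tolerance r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2.0   # drop self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def complexity_index(x, max_scale=5):
    """CI: sum of sample entropies of coarse-grained copies of the series."""
    x = np.asarray(x, dtype=float)
    ci = 0.0
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # coarse-graining
        ci += sample_entropy(coarse)
    return ci
```

MMSE replaces the scalar templates with multivariate composite delay vectors built from all COP channels at once, but the coarse-grain-then-entropy structure is the same.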

  15. An autonomous navigation algorithm for high orbit satellite using star sensor and ultraviolet earth sensor.

    PubMed

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

    An autonomous navigation algorithm using the sensor that integrated the star sensor (FOV1) and ultraviolet earth sensor (FOV2) is presented. The star images are sampled by FOV1, and the ultraviolet earth images are sampled by the FOV2. The star identification algorithm and star tracking algorithm are executed at FOV1. Then, the optical axis direction of FOV1 at J2000.0 coordinate system is calculated. The ultraviolet image of earth is sampled by FOV2. The center vector of earth at FOV2 coordinate system is calculated with the coordinates of ultraviolet earth. The autonomous navigation data of satellite are calculated by integrated sensor with the optical axis direction of FOV1 and the center vector of earth from FOV2. The position accuracy of the autonomous navigation for satellite is improved from 1000 meters to 300 meters. And the velocity accuracy of the autonomous navigation for satellite is improved from 100 m/s to 20 m/s. At the same time, the period sine errors of the autonomous navigation for satellite are eliminated. The autonomous navigation for satellite with a sensor that integrated ultraviolet earth sensor and star sensor is well robust.

  16. A New Cell-Centered Implicit Numerical Scheme for Ions in the 2-D Axisymmetric Code Hall2de

    NASA Technical Reports Server (NTRS)

    Lopez Ortega, Alejandro; Mikellides, Ioannis G.

    2014-01-01

    We present a new algorithm in the Hall2De code to simulate the ion hydrodynamics in the acceleration channel and near plume regions of Hall-effect thrusters. This implementation constitutes an upgrade of the capabilities built in the Hall2De code. The equations of mass conservation and momentum for unmagnetized ions are solved using a conservative, finite-volume, cell-centered scheme on a magnetic-field-aligned grid. Major computational savings are achieved by making use of an implicit predictor/multi-corrector algorithm for time evolution. Inaccuracies in the prediction of the motion of low-energy ions in the near plume in hydrodynamics approaches are addressed by implementing a multi-fluid algorithm that tracks ions of different energies separately. A wide range of comparisons with measurements are performed to validate the new ion algorithms. Several numerical experiments with the location and value of the anomalous collision frequency are also presented. Differences in the plasma properties in the near-plume between the single fluid and multi-fluid approaches are discussed. We complete our validation by comparing predicted erosion rates at the channel walls of the thruster with measurements. Erosion rates predicted by the plasma properties obtained from simulations replicate accurately measured rates of erosion within the uncertainty range of the sputtering models employed.

  17. Progressive data transmission for anatomical landmark detection in a cloud.

    PubMed

    Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D

    2012-01-01

    In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only the image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse-level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy-compressed with JPEG 2000. Together, these properties amount to at least a 30-fold bandwidth reduction while achieving accuracy similar to that of an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
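The coarse-to-fine refinement loop can be illustrated on a one-dimensional toy problem: locate the peak of a signal by scoring a decimated copy first, then examining only a small window around the candidate at each finer level. The window size, number of levels, and the use of argmax as the "detector" are illustrative assumptions standing in for the paper's learned landmark detectors and region requests.

```python
import numpy as np

def progressive_argmax(signal, levels=3, halfwidth=2):
    """Hierarchical sequential sketch: score a decimated copy of the data
    first, then at each finer level examine only a small window around the
    current candidate (stand-in for transmitting candidate neighbourhoods)."""
    n = len(signal)
    step = 2 ** (levels - 1)
    cand = int(np.argmax(signal[::step])) * step        # coarse-level candidate
    for level in range(levels - 2, -1, -1):             # refine level by level
        step = 2 ** level
        lo = max(0, cand - 2 * halfwidth * step)
        hi = min(n, cand + 2 * halfwidth * step + 1)
        window = signal[lo:hi:step]
        cand = lo + int(np.argmax(window)) * step
    return cand
```

Only the decimated array plus a handful of small windows are ever touched, which is the source of the bandwidth savings in the 3D landmark system.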

  18. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

    PubMed Central

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-01-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Because these are MR images in particular, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case, respectively. 
Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598

  19. Doppler Radar Profiler for Launch Winds at the Kennedy Space Center (Phase 1a)

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.

    2011-01-01

    The NASA Engineering and Safety Center (NESC) received a request from the NASA Technical Fellow for Flight Mechanics at Langley Research Center (LaRC) to develop a database from multiple Doppler radar wind profiler (DRWP) sources and develop data processing algorithms to construct high temporal resolution DRWP wind profiles for day-of-launch (DOL) vehicle assessment. This document contains the outcome of Phase 1a of the assessment, including Findings, Observations, NESC Recommendations, and Lessons Learned.

  20. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental tool in numerous image processing and remote sensing applications. For example, unsupervised clustering is often used to obtain vegetation maps of an area of interest. This approach is useful when reliable training data are either scarce or expensive, and when relatively little a priori information about the data is available. Unsupervised clustering methods play a significant role in the pursuit of unsupervised classification. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points (or samples) in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute a set of cluster centers in d-space. Although there is no specific optimization criterion, the algorithm is similar in spirit to the well-known k-means clustering method, in which the objective is to minimize the average squared distance of each point to its nearest center, called the average distortion. One significant feature of ISOCLUS over k-means is that clusters may be merged or split, and so the final number of clusters may be different from the number k supplied as part of the input. This algorithm will be described later in this paper. The ISOCLUS algorithm can run very slowly, particularly on large data sets. Given its wide use in remote sensing, its efficient computation is an important goal. We have developed a fast implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm, the filtering algorithm, by Kanungo et al. They showed that, by storing the data in a kd-tree, it was possible to significantly reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm. 
For technical reasons, which are explained later, it is necessary to make a minor modification to the ISOCLUS specification. We provide empirical evidence, on both synthetic and Landsat image data sets, that our algorithm's performance is essentially the same as that of ISOCLUS, but with significantly lower running times. We show that our algorithm runs from 3 to 30 times faster than a straightforward implementation of ISOCLUS. Our adaptation of the filtering algorithm involves the efficient computation of a number of cluster statistics that are needed for ISOCLUS, but not for k-means.
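    The merge/split behavior that distinguishes ISOCLUS/ISODATA from plain k-means can be sketched as follows. The thresholds, iteration count, and simplified split rule are hypothetical stand-ins for the algorithm's full parameter set, and the kd-tree filtering speedup is omitted.

    ```python
    import numpy as np

    def isodata_lite(X, k, iters=8, split_std=2.0, merge_dist=0.6, seed=0):
        """K-means core plus ISODATA-style split/merge, so the final number of
        clusters can differ from the initial k (thresholds here are illustrative)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
        for _ in range(iters):
            # k-means step: assign points, recompute surviving centers
            labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0)
                                for j in range(len(centers)) if np.any(labels == j)])
            labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
            # split step: a cluster stretched along one axis becomes two centers
            out = []
            for j, c in enumerate(centers):
                pts = X[labels == j]
                s = pts.std(axis=0) if len(pts) else np.zeros_like(c)
                if s.max() > split_std and len(pts) > 4:
                    off = np.eye(len(c))[s.argmax()] * s.max()
                    out += [c - off, c + off]
                else:
                    out.append(c)
            # merge step: center pairs closer than merge_dist collapse to a midpoint
            centers, used = [], set()
            for i in range(len(out)):
                if i in used:
                    continue
                for j in range(i + 1, len(out)):
                    if j not in used and np.linalg.norm(out[i] - out[j]) < merge_dist:
                        centers.append((out[i] + out[j]) / 2)
                        used.add(j)
                        break
                else:
                    centers.append(out[i])
            centers = np.array(centers)
        return centers

    # two tight blobs, deliberately over-seeded with k=4
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.2, (80, 2)), rng.normal(5, 0.2, (80, 2))])
    print(len(isodata_lite(X, k=4)))   # merging collapses the redundant centers
    ```

    The full ISOCLUS specification also controls minimum cluster size and maximum pairs merged per pass, which this sketch leaves out.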

  1. Finding Snowmageddon: Detecting and quantifying northeastern U.S. snowstorms in a multi-decadal global climate ensemble

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.

    2017-12-01

    The northeastern coast of the United States is particularly vulnerable to impacts from extratropical cyclones during winter months, which produce heavy precipitation, high winds, and coastal flooding. These impacts are amplified by the proximity of major population centers to common storm tracks and include risks to health and welfare, massive transportation disruption, lost spending productivity, power outages, and structural damage. Historically, understanding regional snowfall in climate models has generally centered around seasonal mean climatologies even though major impacts typically occur at the scales of hours to days. To quantify discrete snowstorms at the event level, we describe a new objective detection algorithm for gridded data based on the Regional Snowfall Index (RSI) produced by NOAA's National Centers for Environmental Information. The algorithm uses 6-hourly precipitation to collocate storm-integrated snowfall with population density to produce a distribution of snowstorms with societally relevant impacts. The algorithm is tested on the Community Earth System Model (CESM) Large Ensemble Project (LENS) data. Present-day distributions of snowfall events are well replicated within the ensemble. We discuss classification sensitivities to assumptions made in determining precipitation phase and snow water equivalent. We also explore projected reductions in mid-century and end-of-century snowstorms due to changes in snowfall rates and precipitation phase, as well as highlight potential improvements in storm representation from refined horizontal resolution in model simulations.

  2. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output of an infrared camera. Video obtained by an infrared camera can be very dark when no clear target is present. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out per frame. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is then histogram-equalized according to the amount of information it contains; we also use a method to resolve the problem that the final cluster centers can fall too close to one another in some cases. For the subsequent frame images, the initial cluster centers are taken from the final cluster centers of the previous frame, and the histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. Histogram equalization stretches the gray values of the image over the whole gray range, and the gray range assigned to each sub-image is determined by its share of the pixels in the frame. Experimental results show that this algorithm can improve the contrast of infrared video in dim scenes where the night target is not obvious, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
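    The per-cluster equalization scheme described above might be sketched roughly as follows, assuming a 1-D K-means on gray values and rank-based equalization; the allocation of gray range by pixel share follows the abstract, but the details are illustrative.

    ```python
    import numpy as np

    def kmeans_1d(vals, k, iters=20):
        """1-D K-means on pixel gray values; sorted centers define gray intervals."""
        centers = np.linspace(vals.min(), vals.max(), k)
        for _ in range(iters):
            labels = np.abs(vals[:, None] - centers[None]).argmin(axis=1)
            centers = np.array([vals[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return np.sort(centers)

    def cluster_equalize(img, k=3):
        """Split the frame into k gray-interval sub-images, then equalize each one
        into a slice of [0, 255] proportional to its share of the pixels."""
        flat = img.ravel().astype(float)
        centers = kmeans_1d(flat, k)
        labels = np.abs(flat[:, None] - centers[None]).argmin(axis=1)
        out = np.empty_like(flat)
        lo = 0.0
        for j in range(k):
            m = labels == j
            hi = lo + 255.0 * m.mean()          # gray range share = pixel share
            v = flat[m]
            ranks = v.argsort().argsort()       # rank-based (exact) equalization
            out[m] = lo + (hi - lo) * ranks / max(len(v) - 1, 1)
            lo = hi
        return out.reshape(img.shape)

    # dark synthetic IR frame: background near gray 10, a dim target near 60
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(10, 3, (64, 64)), 0, 255)
    img[20:30, 20:30] = rng.normal(60, 3, (10, 10))
    enh = cluster_equalize(img)
    print(img.max() - img.min(), enh.max() - enh.min())  # contrast is stretched
    ```

    Carrying the final centers of one frame forward as the next frame's initialization, as the abstract describes, would just mean seeding `kmeans_1d` with the previous result instead of `linspace`.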

  3. Medical physics staffing for radiation oncology: a decade of experience in Ontario, Canada.

    PubMed

    Battista, Jerry J; Clark, Brenda G; Patterson, Michael S; Beaulieu, Luc; Sharpe, Michael B; Schreiner, L John; MacPherson, Miller S; Van Dyk, Jacob

    2012-01-05

    The January 2010 articles in The New York Times generated intense focus on patient safety in radiation treatment, with physics staffing identified frequently as a critical factor for consistent quality assurance. The purpose of this work is to review our experience with medical physics staffing, and to propose a transparent and flexible staffing algorithm for general use. Guided by documented times required per routine procedure, we have developed a robust algorithm to estimate physics staffing needs according to center-specific workload for medical physicists and associated support staff, in a manner we believe is adaptable to an evolving radiotherapy practice. We calculate requirements for each staffing type based on caseload, equipment inventory, quality assurance, educational programs, and administration. Average per-case staffing ratios were also determined for larger-scale human resource planning and used to model staffing needs for Ontario, Canada over the next 10 years. The workload specific algorithm was tested through a survey of Canadian cancer centers. For center-specific human resource planning, we propose a grid of coefficients addressing specific workload factors for each staff group. For larger scale forecasting of human resource requirements, values of 260, 700, 300, 600, 1200, and 2000 treated cases per full-time equivalent (FTE) were determined for medical physicists, physics assistants, dosimetrists, electronics technologists, mechanical technologists, and information technology specialists, respectively.
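    The large-scale forecasting ratios quoted above lend themselves to a trivial calculator; the structure below is a sketch for illustration, not the paper's actual center-specific coefficient grid.

    ```python
    # Per-case staffing ratios reported in the abstract: treated cases per 1.0 FTE.
    CASES_PER_FTE = {
        "medical physicist": 260,
        "physics assistant": 700,
        "dosimetrist": 300,
        "electronics technologist": 600,
        "mechanical technologist": 1200,
        "IT specialist": 2000,
    }

    def staffing_estimate(treated_cases):
        """Large-scale forecast: FTEs per staff group for a given annual caseload."""
        return {role: treated_cases / ratio for role, ratio in CASES_PER_FTE.items()}

    # e.g. a program treating 5200 cases per year
    for role, fte in staffing_estimate(5200).items():
        print(f"{role}: {fte:.1f} FTE")
    ```

    The paper's workload-specific algorithm adds terms for equipment inventory, quality assurance, education, and administration on top of this caseload-only estimate.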

  4. GSFC Technology Development Center Report

    NASA Technical Reports Server (NTRS)

    Himwich, Ed; Gipson, John

    2013-01-01

    This report summarizes the activities of the GSFC Technology Development Center (TDC) for 2012 and forecasts planned activities for 2013. The GSFC TDC develops station software including the Field System (FS), scheduling software (SKED), hardware including tools for station timing and meteorology, scheduling algorithms, and operational procedures. It provides a pool of individuals to assist with station implementation, check-out, upgrades, and training.

  5. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. 
We also present plans for future research and development, as well as technology transition goals.
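    The NSGA-II machinery referenced above rests on non-dominated sorting, which can be illustrated on a toy two-objective ISR-like trade-off (asset-hours used vs. uncovered area, both minimized). This is a generic sketch of the ranking step, not I2PARS code.

    ```python
    import numpy as np

    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly better
        in at least one (all objectives minimized)."""
        return bool(np.all(a <= b) and np.any(a < b))

    def non_dominated_fronts(F):
        """NSGA-II-style ranking: front 0 is the Pareto set, front 1 is the Pareto
        set of the remainder, and so on."""
        F = np.asarray(F, dtype=float)
        remaining = list(range(len(F)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    # toy collection plans: (asset-hours used, uncovered area) -- both minimized
    plans = [(4, 9), (5, 5), (7, 3), (6, 6), (8, 8)]
    print(non_dominated_fronts(plans))   # [[0, 1, 2], [3], [4]]
    ```

    NSGA-II then applies crowding-distance selection within each front to keep the surviving plan set diverse.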

  6. Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): An algorithm for MHz localization rates using standard CPUs

    NASA Astrophysics Data System (ADS)

    Martens, Koen J. A.; Bader, Arjen N.; Baas, Sander; Rieger, Bernd; Hohlbein, Johannes

    2018-03-01

    We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 106 localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure for astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time consuming iterative algorithms.
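    The phasor idea can be sketched in a few lines: the angle of the first Fourier coefficient along each axis encodes the emitter position within the ROI. This is a minimal 2D illustration under the assumption of a single emitter well inside the ROI; the astigmatism-based z-estimate is omitted.

    ```python
    import numpy as np

    def phasor_localize(roi):
        """Localize a single emitter from the first Fourier coefficients (phasors):
        for a spot at x0, the x-phasor's phase is -2*pi*x0/width."""
        h, w = roi.shape
        F = np.fft.fft2(roi)
        fx, fy = F[0, 1], F[1, 0]   # first coefficients along x and along y
        x = (-np.angle(fx) % (2 * np.pi)) * w / (2 * np.pi)
        y = (-np.angle(fy) % (2 * np.pi)) * h / (2 * np.pi)
        return x, y

    # synthetic PSF: Gaussian spot at (x0, y0) = (6.3, 4.7) in a 12x12 ROI
    yy, xx = np.mgrid[0:12, 0:12]
    x0, y0 = 6.3, 4.7
    roi = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 1.2 ** 2))
    print(phasor_localize(roi))   # close to (6.3, 4.7)
    ```

    Because only two Fourier coefficients per axis are needed, no iterative fitting is involved, which is what makes MHz-scale localization rates possible on a CPU.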

  7. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  8. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM)-based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains the robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measure, handling large amounts of noise, and dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method.
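    For orientation, the plain FCM core that the proposed variants build on (before kernels, penalty terms, and neighborhood attraction are added) looks roughly like this sketch:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, iters=50, eps=1e-10, seed=0):
        """Standard FCM: alternate the membership update
        u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1)) with the weighted center
        update until convergence."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), c, replace=False)].astype(float)
        for _ in range(iters):
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + eps
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)   # memberships; rows sum to 1
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        return centers, U

    # two well-separated 2-D blobs
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(4, 0.3, (60, 2))])
    centers, U = fuzzy_c_means(X, 2)
    print(np.round(np.sort(centers[:, 0]), 1))   # centers near 0 and 4
    ```

    The paper's kernelized versions replace the Euclidean distance `d` with a kernel-induced distance and add regularization terms to the objective.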

  9. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day, with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have the unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general-purpose computers at Kennedy Space Center and tested against current data.

  10. A multi-dimensional nonlinearly implicit, electromagnetic Vlasov-Darwin particle-in-cell (PIC) algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; CoCoMans Team

    2014-10-01

    For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have been recently developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.

  11. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model reduces to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key step of the bisection algorithm is solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
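    The bisection-plus-covering structure described above can be sketched as follows, with Euclidean distance standing in for the required-RCS coverage condition, a greedy cover in place of the paper's hybrid convex/greedy solver, and a toy instance in place of real illuminator geometry:

    ```python
    import numpy as np

    def greedy_cover(cover_sets, n_points):
        """Greedy set cover: repeatedly pick the site covering the most uncovered points."""
        uncovered, chosen = set(range(n_points)), []
        while uncovered:
            best = max(range(len(cover_sets)), key=lambda s: len(cover_sets[s] & uncovered))
            if not cover_sets[best] & uncovered:
                return None                    # infeasible at this threshold
            chosen.append(best)
            uncovered -= cover_sets[best]
        return chosen

    def bisect_placement(sites, points, p, tol=1e-3):
        """Bisection on the coverage threshold (distance stands in for required RCS):
        find the smallest threshold at which a cover uses at most p receivers."""
        D = np.linalg.norm(sites[:, None] - points[None], axis=2)
        lo, hi = 0.0, D.max()
        best = None
        while hi - lo > tol:
            mid = (lo + hi) / 2
            sets = [set(np.nonzero(D[s] <= mid)[0]) for s in range(len(sites))]
            sol = greedy_cover(sets, len(points))
            if sol is not None and len(sol) <= p:
                best, hi = sol, mid            # feasible: tighten the threshold
            else:
                lo = mid                       # infeasible: relax it
        return best

    sites = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
    points = np.array([[1.0, 0.0], [9.0, 1.0], [5.0, 4.0]])
    print(bisect_placement(sites, points, p=2))   # indices of the chosen sites
    ```

    In the paper the inner covering problem is solved over partitions of the surveillance region, with the required RCS playing the role of the bisected threshold.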

  12. Automated detection of sperm whale sounds as a function of abrupt changes in sound intensity

    NASA Astrophysics Data System (ADS)

    Walker, Christopher D.; Rayborn, Grayson H.; Brack, Benjamin A.; Kuczaj, Stan A.; Paulos, Robin L.

    2003-04-01

    An algorithm designed to detect abrupt changes in sound intensity was developed and used to identify and count sperm whale vocalizations and to measure boat noise. The algorithm is a MATLAB routine that counts the number of occurrences for which the change in intensity level exceeds a threshold. The algorithm also permits the setting of a ``dead time'' interval to prevent the counting of multiple pulses within a single sperm whale click. This algorithm was used to analyze digitally sampled recordings of ambient noise obtained from the Gulf of Mexico using near-bottom-mounted EARS buoys deployed as part of the Littoral Acoustic Demonstration Center experiment. Because the background in these data varied slowly, the result of the application of the algorithm was automated detection of sperm whale clicks and creaks, with results that agreed well with those obtained by trained human listeners. [Research supported by ONR.]
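    A minimal version of such an intensity-rise detector with a dead-time interval might look like this (the threshold, dead time, and synthetic data are illustrative, not the EARS parameters; the original routine was in MATLAB):

    ```python
    import numpy as np

    def count_clicks(signal, threshold, dead_time, fs):
        """Detect abrupt intensity rises: fire when the sample-to-sample increase
        exceeds `threshold`, then suppress detection for `dead_time` seconds so
        the multiple pulses within one click are counted once."""
        dead = int(dead_time * fs)
        diffs = np.diff(signal)
        times, i = [], 0
        while i < len(diffs):
            if diffs[i] > threshold:
                times.append((i + 1) / fs)
                i += dead                    # skip the dead-time window
            else:
                i += 1
        return times

    # synthetic record: low noise with two clicks, the second made of two close pulses
    fs = 1000.0
    rng = np.random.default_rng(0)
    x = rng.normal(0, 0.01, 2000)
    for t in (300, 1200, 1210):              # samples; 1200 and 1210 form one click
        x[t:t + 5] += 1.0
    clicks = count_clicks(x, threshold=0.5, dead_time=0.05, fs=fs)
    print(clicks)                            # two detections, near 0.3 s and 1.2 s
    ```

    A slowly varying background, as in the buoy data, leaves the sample-to-sample differences small, which is why a simple difference threshold suffices.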

  13. An effective fuzzy kernel clustering analysis approach for gene expression data.

    PubMed

    Sun, Lin; Xu, Jiucheng; Yin, Jiaojiao

    2015-01-01

    Fuzzy clustering is an important tool for analyzing microarray data. A major problem in applying fuzzy clustering methods to microarray gene expression data is the choice of the number of clusters and the cluster centers. This paper proposes a new approach to fuzzy kernel clustering analysis (FKCA) that identifies the desired cluster number and obtains more stable results for gene expression data. First, to optimize characteristic differences and estimate the optimal cluster number, a Gaussian kernel function is introduced to improve the spectrum analysis method (SAM). By combining subtractive clustering with the max-min distance mean, a maximum distance method (MDM) is proposed to determine the cluster centers. The corresponding steps of the improved SAM (ISAM) and MDM are then given, and their superiority and stability are illustrated through experimental comparisons on gene expression data. Finally, by introducing ISAM and MDM into FKCA, an effective improved FKCA algorithm is proposed. Experimental results on public gene expression data and the UCI database show that the proposed algorithms are feasible for cluster analysis, and their clustering accuracy is higher than that of other related clustering algorithms.

  14. Analysis of Intergrade Variables In The Fuzzy C-Means And Improved Algorithm Cat Swarm Optimization (FCM-ISO) In Search Segmentation

    NASA Astrophysics Data System (ADS)

    Saragih, Jepronel; Salim Sitompul, Opim; Situmorang, Zakaria

    2017-12-01

    One of the techniques known in data mining is clustering. The image segmentation process does not always represent the actual image, because the combined algorithm may fail to obtain optimal cluster centers. This research searches for the smallest error in the results of a Fuzzy C-Means process optimized with the Cat Swarm Optimization algorithm, which has been extended by adding an inertia weight to the tracing-mode step. With this parameter, the most optimal cluster centers, those closest to the data, can be determined and used to form the clusters. The inertia weights used in this research are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. The results for each value of the inertia variable (W) are then compared and the smallest is taken. From this weighting analysis, the inertia value that produces the smallest cost function can be obtained.

  15. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.

  16. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity via the UESA (Unimodal Error Surface Assumption), whereby the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima and result in poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among the neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as that of FS (Full Search), even for large motion, with half the computational load.

  17. MODFLOW-2000, The U.S. Geological Survey Modular Ground-Water Model -- GMG Linear Equation Solver Package Documentation

    USGS Publications Warehouse

    Wilson, John D.; Naff, Richard L.

    2004-01-01

    A geometric multigrid solver (GMG), based on the preconditioned conjugate gradient algorithm, has been developed for solving systems of equations resulting from applying the cell-centered finite difference algorithm to flow in porous media. This solver has been adapted to the U.S. Geological Survey ground-water flow model MODFLOW-2000. The documentation herein is a description of the solver and of its adaptation to MODFLOW-2000.

  18. A Centered Projective Algorithm for Linear Programming

    DTIC Science & Technology

    1988-02-01

    Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant, was first proposed by Dikin [6] in 1967... "trajectories, II. Legendre transform coordinates. Central trajectories," manuscript, to appear in Transactions of the American... [6] I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675. [7] I.I. Dikin, "On the speed of an

  19. Performance Analysis of the Probabilistic Multi-Hypothesis Tracking Algorithm on the SEABAR Data Sets

    DTIC Science & Technology

    2009-07-01

    Performance Analysis of the Probabilistic Multi-Hypothesis Tracking Algorithm on the SEABAR Data Sets. Dr. Christian G. Hempel, Naval... Hypothesis Tracking," NUWC-NPT Technical Report 10,428, Naval Undersea Warfare Center Division, Newport, RI, 15 February 1995. [2] G. McLachlan, T... the 9th International Conference on Information Fusion, Florence, Italy, July 2006. [8] C. Hempel, "Track Initialization for Multi-Static Active Sonar

  20. Deductive Synthesis of the Unification Algorithm,

    DTIC Science & Technology

    1981-06-01

    DEDUCTIVE SYNTHESIS OF THE UNIFICATION ALGORITHM. Zohar Manna, Richard Waldinger. Computer Science Department, Artificial Intelligence Center... theorem proving," Artificial Intelligence Journal, Vol. 9, No. 1, pp. 1-35. Boyer, R. S. and J. S. Moore [Jan. 1975], "Proving theorems about LISP... d'Intelligence Artificielle, U.E.R. de Luminy, Université d'Aix-Marseille II. Green, C. C. [May 1969], "Application of theorem proving to problem

  1. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, being suitable for automatic analysis and judgment of the carrier's ambient environment through image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure stable and consistent image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method, and the electronic adjustment mode of entering offsets manually, both rely on the human eye, are inefficient, and have a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: the inner boundary of the image is the smaller circle and the outer boundary is the bigger circle. The technique exploits exactly these characteristics. By recognizing the two circles through a Hough transform and calculating the center position, we obtain the accurate image center, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice, improving productivity and guaranteeing consistent product quality.
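    The circle-recognition step can be sketched with a basic Hough transform for a known radius; this is an illustrative numpy implementation on a synthetic ring, not the product code (which would also estimate the radius and use the real sensor image):

    ```python
    import numpy as np

    def hough_circle_center(edges, r):
        """Circle Hough transform for a known radius: every edge pixel votes for
        all centers at distance r; the accumulator peak is the circle center."""
        h, w = edges.shape
        acc = np.zeros((h, w))
        thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
        ys, xs = np.nonzero(edges)
        for t in thetas:
            cy = np.round(ys - r * np.sin(t)).astype(int)
            cx = np.round(xs - r * np.cos(t)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation
        return np.unravel_index(acc.argmax(), acc.shape)

    # synthetic edge image: circle of radius 20 centered at (row, col) = (40, 50)
    yy, xx = np.mgrid[0:96, 0:96]
    ring = np.abs(np.hypot(yy - 40, xx - 50) - 20) < 0.7
    print(hough_circle_center(ring, 20))
    ```

    Running this on both concentric circles and averaging the two detected centers would give the image-plane center whose offset from the sensor center is then written over I2C.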

  2. Leveraging Call Center Logs for Customer Behavior Prediction

    NASA Astrophysics Data System (ADS)

    Parvathy, Anju G.; Vasudevan, Bintu G.; Kumar, Abhishek; Balakrishnan, Rajesh

    Most major businesses use business process outsourcing to perform a process or part of a process, including financial services such as mortgage processing, loan origination, finance and accounting, and transaction processing. Call centers receive and transmit a large volume of requests through outbound and inbound calls to customers on behalf of a business. In this paper we deal specifically with call center notes from banks. Banks, as financial institutions, provide loans to non-financial businesses and individuals. Their call centers act as the nuclei of their client service operations and log the transactions between the customer and the bank. This crucial conversational information can be exploited to predict a customer's behavior, which in turn helps these businesses decide on the next action to take. The banks thus save considerable time and effort in tracking delinquent customers and ensure a minimum of subsequent defaulters. Most of the time the call center notes are very concise and brief, and often the notes are misspelled and use many domain-specific acronyms. In this paper we introduce a novel domain-specific spelling correction algorithm that corrects the misspelled words in the call center logs to meaningful ones. We also discuss a procedure that builds behavioral history sequences for the customers by categorizing the logs into one of several predefined behavioral states. We then describe a pattern-based predictive algorithm that uses temporal behavioral patterns mined from these sequences to predict the customer's next behavioral state.
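    As a hedged illustration of the spelling-correction idea (not the authors' domain-specific algorithm), the sketch below maps a misspelled token to the closest entry of a small, hypothetical banking lexicon using stdlib similarity matching:

```python
import difflib

# Hypothetical domain lexicon; the paper's actual dictionary and algorithm
# are domain-specific and more sophisticated than this sketch.
LEXICON = ["customer", "payment", "delinquent", "mortgage", "statement"]

def correct(word, lexicon=LEXICON, cutoff=0.6):
    """Map a (possibly misspelled) token to its closest lexicon entry,
    or return it unchanged when nothing is similar enough."""
    matches = difflib.get_close_matches(word.lower(), lexicon, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(correct("custmer"))    # -> customer
print(correct("delinqent"))  # -> delinquent
```

    Tokens with no sufficiently close lexicon entry pass through unchanged, which keeps domain acronyms intact.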

  3. Diabetes and Hypertension Quality Measurement in Four Safety-Net Sites

    PubMed Central

    Benkert, R.; Dennehy, P.; White, J.; Hamilton, A.; Tanner, C.

    2014-01-01

    Background In this new era after the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009, the literature on lessons learned with electronic health record (EHR) implementation needs to be revisited. Objectives Our objective was to describe what implementation of a commercially available EHR with built-in quality query algorithms showed us about our care for diabetes and hypertension populations in four safety-net clinics, specifically the feasibility of data retrieval, measurements over time, quality of data, and how our teams used these data. Methods A cross-sectional study was conducted from October 2008 to October 2012 in four safety-net clinics located in the Midwestern and Western United States. A data warehouse that stores data from across the U.S. was utilized for data extraction from patients with diabetes or hypertension diagnoses and at least two office visits per year. Standard quality measures were collected over a period of two to four years. All sites were engaged in a partnership model with the IT staff and a shared learning process to enhance the use of the quality metrics. Results While use of the algorithms was feasible across sites, challenges occurred when attempting to use the query results for research purposes. There was wide variation in both process and outcome results across individual centers. Composite calculations balanced out the differences seen in the individual measures. Despite using consistent quality definitions, the differences across centers had an impact on numerators and denominators. All sites agreed to a partnership model of EHR implementation, and each center utilized the available resources of the partnership for center-specific quality initiatives. 
Conclusions Utilizing a shared EHR, a Regional Extension Center-like partnership model, and similar quality query algorithms allowed safety-net clinics to benchmark and improve the quality of care across differing patient populations and health care delivery models. PMID:25298815

  4. New Paradigms for Patient-Centered Outcomes Research in Electronic Medical Records: An Example of Detecting Urinary Incontinence Following Prostatectomy.

    PubMed

    Hernandez-Boussard, Tina; Tamang, Suzanne; Blayney, Douglas; Brooks, Jim; Shah, Nigam

    2016-01-01

    National initiatives to develop quality metrics emphasize the need to include patient-centered outcomes. Patient-centered outcomes are complex, require documentation of patient communications, and have not been routinely collected by healthcare providers. The widespread implementation of electronic medical records (EHRs) offers opportunities to assess patient-centered outcomes within the routine healthcare delivery system. The objective of this study was to test the feasibility and accuracy of identifying patient-centered outcomes within the EHR. Data from patients with localized prostate cancer undergoing prostatectomy were used to develop and test algorithms to accurately identify patient-centered outcomes in post-operative EHRs; we used urinary incontinence as the use case. Standard data mining techniques were used to extract and annotate free text and structured data to assess urinary incontinence recorded within the EHRs. A total of 5,349 prostate cancer patients were identified in our EHR system between 1998 and 2013. Among these EHRs, 30.3% had a text mention of urinary incontinence within 90 days post-operatively, compared to less than 1.0% with a structured data field for urinary incontinence (i.e., an ICD-9 code). Our workflow had good precision and recall for urinary incontinence (positive predictive value: 0.73; sensitivity: 0.84). Our data indicate that important patient-centered outcomes, such as urinary incontinence, are being captured in EHRs as free text and highlight the long-standing importance of accurate clinician documentation. Standard data mining algorithms can accurately and efficiently identify these outcomes in existing EHRs; the complete assessment of these outcomes is essential to move practice into the patient-centered realm of healthcare.
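    The reported accuracy figures follow the standard definitions of positive predictive value and sensitivity; a minimal sketch with invented record IDs (not the study's data):

```python
def precision_recall(flagged, gold):
    """Positive predictive value and sensitivity of a text-mining pass,
    given the set of flagged records and the gold-standard positives."""
    tp = len(flagged & gold)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Illustrative record IDs only -- not data from the study.
flagged = {"r1", "r2", "r3", "r4"}
gold = {"r1", "r2", "r5"}
p, r = precision_recall(flagged, gold)
print(p, r)  # 2 of 4 flagged are true positives; 2 of 3 positives are found
```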

  5. Normal pressure hydrocephalus: survey on contemporary diagnostic algorithms and therapeutic decision-making in clinical practice.

    PubMed

    Krauss, J K; Halve, B

    2004-04-01

    There is no agreement on the best diagnostic criteria for selecting patients with normal pressure hydrocephalus (NPH) for CSF shunting. The primary objective of the present study was to provide a contemporary survey of diagnostic algorithms and therapeutic decision-making in clinical practice. The secondary objective was to estimate the incidence of NPH. Standardized questionnaires, with sections on the incidence of NPH and the frequency of shunting, evaluation of clinical symptoms and signs, diagnostic studies, therapeutic decision-making and operative techniques, postoperative outcome and complications, and the profiles of different centers, were sent to 82 neurosurgical centers in Germany known to participate in the care of patients with NPH. Fifty-three centers responded to the survey (65%), and data from 49 of them were analyzed. The estimated annual incidence of NPH was 1.8 cases/100,000 inhabitants. Gait disturbance was rated the most important sign of NPH (61%). There was wide variety in the choice of diagnostic tests. Cisternography was performed routinely at only a few individual centers. Diagnostic CSF removal was used with varying frequency by all centers except one, but the amount of CSF removed by lumbar puncture differed markedly between centers. There was poor agreement on criteria for evaluating continuous intracranial pressure recordings regarding both the amplitude and the relative frequency of B-waves. Both periventricular and deep white matter lesions were present in about 50% of patients being shunted, indicating that vascular comorbidity in NPH patients has gained more acceptance. Programmable shunts were used by more than half of the centers, and newer valve types such as gravitational valves have become more popular. According to the present survey, new diagnostic and therapeutic concepts for NPH have penetrated daily routine to a certain extent. Wide variability, however, still exists among different neurosurgical centers.

  6. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region is improved by enhanced modeling of multiple scattering effects.

  7. Value-added Data Services at the Goddard Earth Sciences Data and Information Services Center

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.; Alcott, Gary T.; Kempler, Steven J.; Lynnes, Christopher S.; Vollmer, Bruce E.

    2004-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), in addition to serving the Earth Science community as one of the major Distributed Active Archive Centers (DAACs), provides much more than just data. Among the value-added services available to general users are subsetting data spatially and/or by parameter, online analysis (to avoid unnecessarily downloading all the data), and assistance in obtaining data from other centers. Services available to data producers and high-volume users include consulting on building new products with standard formats and metadata and construction of data management systems. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the user's algorithm. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the online data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools. Partnerships between the GES DISC and scientists, both producers and users, allow the scientists to concentrate on science while the GES DISC handles the data management, e.g., formats, integration, and data processing. The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from simple data support to sophisticated online analysis tools, producing economies of scale and rapid time-to-deploy. At the same time, such partnerships allow the GES DISC to serve the user community more efficiently and to better prioritize online holdings. Several examples of successful partnerships are described in the presentation.

  8. Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion

    NASA Technical Reports Server (NTRS)

    Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri

    2010-01-01

    Elevation data acquired from C-band radar interferometry by SRTM are used in data fusion techniques to estimate regional-scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimation from four data sets: 1 arc-sec National Elevation Data (NED), SRTM-derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m) with a derived vegetation index (VI), and the NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distributions of SRTM-NED scattering phase centers in three dominant forest types: evergreen conifer, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm that transforms the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001, RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and the loss of the scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental-scale forest biomass distribution.
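    The height-from-phase-center transform can be illustrated, under assumptions, as an ordinary least-squares calibration against plot data with an R2 goodness-of-fit; the numbers below are synthetic, not FIA measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic calibration data: scattering phase center height (m) versus
# plot-measured mean forest height (m); purely illustrative values.
phase_center = rng.uniform(5.0, 25.0, 50)
height = 1.2 * phase_center + 2.0 + rng.normal(0.0, 0.8, 50)

# Ordinary least-squares fit: height = a * phase_center + b
a, b = np.polyfit(phase_center, height, 1)
pred = a * phase_center + b
ss_res = np.sum((height - pred) ** 2)
ss_tot = np.sum((height - height.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(a, 2), round(b, 2), round(r2, 3))  # slope, intercept, R^2
```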

  9. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference, and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage, and this imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known as the "center of area" (COA) or "center of gravity" (COG) method. In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm depends on the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of the reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend toward their extreme values. In addition, the more reference sets that are defined for a given linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
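    The two defuzzifiers compared above can be sketched directly. The universe, the three symmetric triangular reference sets, and the firing strengths below are illustrative choices, not the paper's exact test family; the sketch reproduces the qualitative claim that COA mishandles near-limit values while WACC can reach them:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)          # universe of discourse
centers = np.array([0.0, 5.0, 10.0])      # cluster centers of 3 triangular sets

def tri(x, c, half_width=5.0):
    """Symmetric triangular membership function centered at c."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / half_width)

def coa(weights):
    """Center of area: centroid of the max-aggregated, clipped sets."""
    agg = np.max([np.minimum(w, tri(x, c)) for w, c in zip(weights, centers)],
                 axis=0)
    return np.sum(x * agg) / np.sum(agg)

def wacc(weights):
    """Weighted average of cluster centers."""
    w = np.asarray(weights, float)
    return np.sum(w * centers) / np.sum(w)

# A single fully fired middle set: both methods agree at its center.
print(coa([0.0, 1.0, 0.0]), wacc([0.0, 1.0, 0.0]))
# Firing concentrated on a marginal set: WACC stays near the boundary,
# while COA is pulled toward mid-domain (the near-limit mapping error).
print(coa([1.0, 0.2, 0.0]), wacc([1.0, 0.2, 0.0]))
```

    WACC also avoids integrating over the universe grid, which is where its speed advantage over COA comes from.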

  10. Home and Clinical Cardiovascular Care Center (H4C): a Framework for Integrating Body Sensor Networks and QTRU Cryptography System.

    PubMed

    Zakerolhosseini, Ali; Sokouti, Massoud; Pezeshkian, Massoud

    2013-01-01

    A quick response to heart attack patients before they arrive at the hospital is a very important factor. In this paper, a combined model of a Body Sensor Network and a Personal Digital Assistant using the QTRU cipher algorithm over Wi-Fi networks is presented to respond efficiently to these life-threatening attacks. An algorithm for optimizing the routing paths between sensor nodes and an algorithm for reducing power consumption are also applied to achieve the best performance of this model. The system consumes little power while performing its encrypting and decrypting processes, and it routes traffic efficiently and quickly.

  11. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional computational fluid dynamic (CFD) flow solver with the Monte Carlo simulation of the pdf evolution equation was developed. The algorithm was validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames was carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  12. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed in this work. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional CFD flow solver with the Monte Carlo simulation of the pdf evolution equation has been developed. The algorithm has been validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames has been carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  13. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  14. Software for universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    An overview is provided of the universal noiseless coding algorithms as well as their relationship to the now available FORTRAN implementations. It is suggested that readers considering investigating the utility of these algorithms for actual applications should consult both NASA's Computer Software Management and Information Center (COSMIC) and descriptions of coding techniques provided by Rice (1979). Examples of applying these techniques have also been given by Rice (1975, 1979, 1980). Attention is given to reversible preprocessing, general implementation instructions, naming conventions, and calling arguments. A general applicability of the considered algorithms to solving practical problems is obtained because most real data sources can be simply transformed into the required form by appropriate preprocessing.
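    The Rice codes referenced above split each nonnegative integer into a unary-coded quotient and a k-bit remainder; a minimal, hedged sketch of that split (the operational FORTRAN implementations available through COSMIC are far more complete):

```python
def rice_encode(n, k):
    """Encode a nonnegative integer with Rice parameter k:
    unary quotient, a '0' terminator, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def rice_decode(bits, k):
    """Invert rice_encode for a single codeword."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# Round-trip check over a range of small values.
for n in range(20):
    assert rice_decode(rice_encode(n, 2), 2) == n
print(rice_encode(11, 2))  # quotient 2 -> '110', remainder 3 -> '11'
```

    Small parameter k favors small values, which is why the preprocessing step that concentrates the source near zero matters.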

  15. Home and Clinical Cardiovascular Care Center (H4C): a Framework for Integrating Body Sensor Networks and QTRU Cryptography System

    PubMed Central

    Zakerolhosseini, Ali; Sokouti, Massoud; Pezeshkian, Massoud

    2013-01-01

    Quick responds to heart attack patients before arriving to hospital is a very important factor. In this paper, a combined model of Body Sensor Network and Personal Digital Access using QTRU cipher algorithm in Wifi networks is presented to efficiently overcome these life threatening attacks. The algorithm for optimizing the routing paths between sensor nodes and an algorithm for reducing the power consumption are also applied for achieving the best performance by this model. This system is consumes low power and has encrypting and decrypting processes. It also has an efficient routing path in a fast manner. PMID:24252988

  16. NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.; Mcentire, K. J.

    1985-01-01

    NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree-of-freedom labelling. It is easy to use and provides a combined menu- and command-driven user interface. NPLOT also provides very fast hidden-line and haloed-line algorithms. The hidden-line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden-line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden-line and haloed-line algorithms are the primary features that distinguish NPLOT from other plotting programs.

  17. A comparison of force control algorithms for robots in contact with flexible environments

    NASA Technical Reports Server (NTRS)

    Wilfinger, Lee S.

    1992-01-01

    In order to perform useful tasks, the robot end-effector must come into contact with its environment. For such tasks, force feedback is frequently used to control the interaction forces. Control of these forces is complicated by the fact that the flexibility of the environment affects the stability of the force control algorithm. Because of the wide variety of different materials present in everyday environments, it is necessary to gain an understanding of how environmental flexibility affects the stability of force control algorithms. This report presents the theory and experimental results of two force control algorithms: Position Accommodation Control and Direct Force Servoing. The implementation of each of these algorithms on a two-arm robotic test bed located in the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) is discussed in detail. The behavior of each algorithm when contacting materials of different flexibility is experimentally determined. In addition, several robustness improvements to the Direct Force Servoing algorithm are suggested and experimentally verified. Finally, a qualitative comparison of the force control algorithms is provided, along with a description of a general tuning process for each control method.
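    The stability issue described above can be illustrated with a toy model, under stated assumptions: a direct force servo commanding position from force error against a linear-spring environment. The gains, stiffness values, and discrete model are invented for illustration, not CIRSSE test-bed parameters:

```python
def force_servo(k_env, gain, f_desired=10.0, steps=200):
    """Discrete direct force servo: command position from force error;
    contact force is k_env * x for a linear-spring environment."""
    x = 0.0
    for _ in range(steps):
        f = k_env * x
        x += gain * (f_desired - f)   # proportional position correction
    return k_env * x

# Soft environment: the loop converges to the desired contact force.
print(round(force_servo(100.0, 0.005), 3))
# Stiff environment with the same gain: the error dynamics e -> (1 - gain*k_env)*e
# diverge because |1 - gain*k_env| > 1, illustrating how environment
# flexibility bounds the usable force-control gain.
print(abs(force_servo(500.0, 0.005)) > 1e6)
```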

  18. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, the approach based on modeling the biological vision mechanisms is extensively developed. However, up to now, real world image processing has no effective solution in frameworks of both biologically inspired and conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed for solution of computational problems related to this visual task. Basic problems that should be solved for creation of effective artificial visual system to process real world imags are a search for new algorithms of low-level image processing that, in a great extent, determine system performance. In the present paper, the result of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filter, context encoding visual information presented in the center of input window, and automatic detection of perceptually important image fragments. The core of latter algorithm are using local feature conjunctions such as noncolinear oriented segment and composite feature map formation. Developed algorithms were integrated into foveal active vision model, the MARR. It is supposed that proposed algorithms may significantly improve model performance while real world image processing during memorizing, search, and recognition.

  19. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
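    For comparison with the NVU scheme, the standard leap-frog NVE step mentioned above can be sketched on a one-dimensional harmonic oscillator (an illustrative toy, not the paper's Lennard-Jones system); its energy error stays bounded over long runs:

```python
def leapfrog(x, v_half, dt, steps, force=lambda x: -x):
    """Leap-frog NVE integration of dx/dt = v, dv/dt = force(x) (unit mass).
    v_half is the velocity at t - dt/2, as the staggered scheme requires."""
    for _ in range(steps):
        v_half += dt * force(x)   # kick
        x += dt * v_half          # drift
    return x, v_half

# Harmonic oscillator with unit frequency, initial x = 1, v = 0.
dt = 0.01
v_half0 = 0.0 - 0.5 * dt * (-1.0)   # back-step v to t - dt/2
x_end, v_end = leapfrog(1.0, v_half0, dt, 100000)
energy = 0.5 * x_end**2 + 0.5 * v_end**2
print(abs(energy - 0.5) < 1e-2)  # bounded drift from the initial energy of 0.5
```

    The bounded (rather than growing) energy error is the symplectic property the abstract's "entropic drift" discussion is contrasted against.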

  20. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
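    Classical motion cueing relies on washing out sustained cues with high-pass filtering; as a hedged, much-simplified illustration of the washout behavior discussed above (not the paper's optimal or nonlinear algorithms), a first-order discrete washout filter:

```python
def washout(signal, dt, tau):
    """First-order high-pass (washout) filter: passes onsets but decays
    sustained input so the motion platform returns toward neutral."""
    a = tau / (tau + dt)
    y = [0.0]
    for i in range(1, len(signal)):
        y.append(a * (y[-1] + signal[i] - signal[i - 1]))
    return y

# Step input: a strong cue at onset, washed out after several time constants.
step = [0.0] + [1.0] * 500
out = washout(step, dt=0.01, tau=0.5)
print(round(out[1], 3), abs(out[-1]) < 1e-3)  # onset cue, then near zero
```

    The time-varying washout of the nonlinear algorithm described above effectively adapts this decay rate to cue magnitude.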

  1. A Lightning Channel Retrieval Algorithm for the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, William; Arnold, James E. (Technical Monitor)

    2002-01-01

    A new multi-station VHF time-of-arrival (TOA) antenna network is, at the time of this writing, coming on-line in Northern Alabama. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from the discrete segments (effectively point emitters) that comprise the channels of lightning strokes within cloud and ground flashes. The network will support on-going ground validation activities for the low-Earth-orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. It will also provide for many interesting and detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and will offer many interesting comparisons with other meteorological/geophysical data sets associated with lightning and thunderstorms. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. In this study, a new revised channel mapping retrieval algorithm is introduced. The algorithm is an extension of earlier work provided in Koshak and Solakiewicz (1996) in the analysis of the NASA Kennedy Space Center (KSC) Lightning Detection and Ranging (LDAR) system. As in the 1996 study, direct algebraic solutions are obtained by inverting a simple linear system of equations, thereby making computer searches through a multi-dimensional parameter domain of a chi-squared function unnecessary. However, the new algorithm is developed completely in spherical Earth-centered coordinates (longitude, latitude, altitude), rather than in the (x, y, z) Cartesian coordinates employed in the 1996 study. Hence, no mathematical transformations from (x, y, z) into spherical coordinates are required (such transformations involve more numerical error propagation, more computer program coding, and slightly more CPU computing time). 
The new algorithm also has a more realistic definition of source altitude that accounts for Earth oblateness (this can become important for sources that are hundreds of kilometers away from the network). In addition, the new algorithm is being applied to analyze computer-simulated LMA datasets in order to obtain detailed location/time retrieval error maps for sources in and around the LMA network. These maps will provide a more comprehensive analysis of retrieval errors for the LMA than the 1996 study did of LDAR retrieval errors. Finally, we note that the new algorithm can be applied to LDAR, and essentially to any other multi-station TOA network that depends on direct line-of-sight antenna excitation.
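    The direct-algebraic-solution idea can be sketched in flat Cartesian coordinates with c = 1 (a toy setting; the paper's algorithm works in spherical Earth-centered coordinates): differencing the squared-range TOA equations against a reference station yields a system that is linear in the source position and emission time. The station layout and source below are invented for illustration.

```python
import numpy as np

def locate_toa(stations, times, c=1.0):
    """Solve for source position x and emission time t_e from TOA data by
    differencing squared-range equations |x - s_i|^2 = c^2 (t_i - t_e)^2
    against station 0, which gives, for each i >= 1:
      2*(s_i - s_0).x - 2*c^2*(t_i - t_0)*t_e
        = |s_i|^2 - |s_0|^2 - c^2*(t_i^2 - t_0^2)
    -- linear in the unknowns (x, t_e)."""
    s = np.asarray(stations, float)
    t = np.asarray(times, float)
    A = np.column_stack([2.0 * (s[1:] - s[0]),
                         -2.0 * c**2 * (t[1:] - t[0])])
    b = (np.sum(s[1:]**2, axis=1) - np.sum(s[0]**2)
         - c**2 * (t[1:]**2 - t[0]**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:-1], sol[-1]   # position, emission time

# Toy network of 6 stations and a known source (arbitrary units, c = 1).
stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 0), (7, 3, 5)]
source, t_emit = np.array([3.0, 4.0, 2.0]), 1.5
times = [t_emit + np.linalg.norm(source - np.array(si)) for si in stations]
pos, te = locate_toa(stations, times)
print(np.round(pos, 6), round(te, 6))
```

    With four unknowns, at least five stations are needed; extra stations are absorbed by the least-squares solve.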

  2. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
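
    Two of the indices above have simple closed forms. As a rough illustration (not the authors' implementation, and ignoring the lung-ROI masking a real EIT analysis would apply), the ventrodorsal center of gravity and the global inhomogeneity (GI) index of a tidal impedance image can be sketched as:

```python
from statistics import median

def center_of_gravity(img):
    """Impedance-weighted mean row index of a tidal EIT image, normalized
    to [0, 1] (rows ordered ventral to dorsal)."""
    total = sum(sum(row) for row in img)
    weighted = sum(r * sum(row) for r, row in enumerate(img))
    return weighted / total / (len(img) - 1)

def global_inhomogeneity(img):
    """GI index: summed absolute deviation of each pixel from the median
    tidal impedance, normalized by total impedance (0 = homogeneous)."""
    pixels = [v for row in img for v in row]
    med = median(pixels)
    return sum(abs(v - med) for v in pixels) / sum(pixels)
```

    A uniformly ventilated image gives a GI of 0 and a center of gravity of 0.5; ventilation concentrated dorsally pushes the center of gravity toward 1.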

  3. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe-scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  4. Progress in Guidance and Control Research for Space Access and Hypersonic Vehicles (Preprint)

    DTIC Science & Technology

    2006-09-01

    affect range capabilities. In 2003 an integrated adaptive guidance control and trajectory re- shaping algorithm was flight demonstrated using in-flight...21] which tied for the best scores as well as a Linear Quadratic Regulator[22], Predictor - Corrector [23], and Shuttle-like entry[24] guidance method...Accurate knowledge of mass, center- of-gravity and moments of inertia improves the perfor- mance of not only IAG& C algorithms but also model based

  5. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  6. Convergence of the Ponderomotive Guiding Center approximation in the LWFA

    NASA Astrophysics Data System (ADS)

    Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis

    2017-10-01

    Plasma accelerators arose as potential candidates for future accelerator technology in the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations performed for such systems have to resolve the laser wavelength, which is orders of magnitude smaller than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speed-up of around 100 times because the laser wavelength no longer needs to be resolved. The plasma response is well understood, and comparisons with full PIC simulations show excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, was not studied before and has vital importance for the use of the algorithm in predicting the beam parameters. Our goal is a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.

  7. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation in which regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
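
    As an illustration of the two recommended math models, the following sketch fits synthetic gage data by ordinary least squares. The load points, responses, and coefficient values are invented for the example; the actual calibration data are not reproduced here.

```python
def lstsq(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination; adequate for these tiny models."""
    n, p = len(X), len(X[0])
    # build augmented matrix [X^T X | X^T y]
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(n))] for i in range(p)]
    for i in range(p):  # forward elimination with partial pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [0.0] * p
    for i in reversed(range(p)):  # back substitution
        b[i] = (A[i][p] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b

# Hypothetical calibration points: (normal force N, pitching moment M)
loads = [(100.0, 5.0), (200.0, -3.0), (150.0, 8.0), (50.0, 2.0), (250.0, -6.0)]
# Synthetic gage responses consistent with the recommended models
diff = [1.0 + 0.02 * N for N, _ in loads]                # intercept + N
summ = [0.5 + 0.3 * M + 0.01 * M * M for _, M in loads]  # intercept + M + M^2

b_diff = lstsq([[1.0, N] for N, _ in loads], diff)       # recovers [1.0, 0.02]
b_sum = lstsq([[1.0, M, M * M] for _, M in loads], summ)  # recovers [0.5, 0.3, 0.01]
```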

  8. Optimization of over-provisioned clouds

    NASA Astrophysics Data System (ADS)

    Balashov, N.; Baranov, A.; Korenkov, V.

    2016-09-01

    The functioning of modern applications in cloud centers is characterized by a huge variety of computational workloads. This causes uneven workload distribution and, as a result, leads to ineffective utilization of cloud-center hardware. This article addresses possible ways to solve this issue and demonstrates the necessity of optimizing cloud-center hardware utilization. As one possible solution to the problem of inefficient resource utilization in heterogeneous cloud environments, an algorithm for dynamic re-allocation of virtual resources is suggested.

  9. Feasibility of web-based self-triage by parents of children with influenza-like illness: a cautionary tale.

    PubMed

    Anhang Price, Rebecca; Fagbuyi, Daniel; Harris, Racine; Hanfling, Dan; Place, Frederick; Taylor, Todd B; Kellermann, Arthur L

    2013-02-01

    Self-triage using web-based decision support could be a useful way to encourage appropriate care-seeking behavior and reduce health system surge in epidemics. However, the feasibility and safety of this strategy have not previously been evaluated. To assess the usability and safety of Strategy for Off-site Rapid Triage (SORT) for Kids, a web-based decision support tool designed to translate clinical guidance developed by the Centers for Disease Control and Prevention to help parents and adult caregivers determine if a child with influenza-like illness requires immediate care in an emergency department (ED). Prospective pilot validation study conducted between February 8 and April 30, 2012. Staff who abstracted medical records and made follow-up calls were blinded to the SORT algorithm's assessment of the child's level of risk. Two pediatric emergency departments in the National Capital Region. Convenience sample of 294 parents and adult caregivers who were at least 18 years of age; able to read and speak English; and the parent or legal guardian of a child 18 years or younger presenting to 1 of 2 EDs with signs and symptoms meeting Centers for Disease Control and Prevention criteria for influenza-like illness. Completion of the SORT for Kids survey. Caregiver ratings of the website's usability and the sensitivity of the underlying algorithm for identifying children who required immediate ED management of influenza-like illness, defined as receipt of 1 or more of 5 essential clinical services. Ninety percent of participants reported that the website was "very easy" to understand and use. Ratings did not differ by respondent race, ethnicity, or educational attainment. Of the 15 patients whose initial ED visit met explicit criteria for clinical necessity, the Centers for Disease Control and Prevention algorithm classified 14 as high risk, resulting in an overall sensitivity of 93.3% (exact 95% CI, 68.1%-99.8%). Specificity of the algorithm was poor. 
This pilot study suggests that web-based decision support to help parents and adult caregivers self-triage children with influenza-like illness is feasible. However, prospective refinement of the clinical algorithm is needed to improve its specificity without compromising patient safety.
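
    The reported sensitivity and its exact confidence interval follow from the 14-of-15 count. A minimal stdlib sketch of the exact (Clopper-Pearson) binomial interval, computed by bisection on the binomial CDF:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a binomial proportion k/n."""
    def solve(f):
        # bisect for the p where the monotone predicate f flips True -> False
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return lo
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

sensitivity = 14 / 15                        # 93.3%, as reported
sens_lo, sens_hi = clopper_pearson(14, 15)   # ~ (0.681, 0.998), as reported
```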

  10. Algorithms for Port-of-Entry Inspection

    DTIC Science & Technology

    2007-05-29

    Devdatt Lad, Rutgers University, Center for Advanced Information Processing Mingyu Li, Rutgers University, Statistics Francesco Longo, University of...Industrial and Systems Engineering graduate student Devdatt Lad, Rutgers University, Electrical & Computer Engineering, graduate student Mingyu Li

  11. Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique

    NASA Astrophysics Data System (ADS)

    Mahootchi, M.; Fattahi, M.; Khakbazan, E.

    2011-11-01

    This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as two-stage stochastic programming. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain correlated customers' demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. The risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacity of distribution centers. In the second stage, the decisions are the amounts of production and the volumes of transportation between plants and customers.
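
    A minimal sketch of plain Latin Hypercube Sampling as used in the scenario generation step (the stratify-then-shuffle form; inducing the correlations between customers' demands, e.g. via Iman-Conover rank reordering, and the scenario reduction step are omitted):

```python
import random

def latin_hypercube(n, dims, rng=None):
    """n samples in [0, 1)^dims: each axis is split into n equal strata,
    one uniform draw is taken per stratum, and the strata are shuffled
    independently per dimension."""
    rng = rng or random.Random(42)
    cols = []
    for _ in range(dims):
        pts = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(pts)
        cols.append(pts)
    return [tuple(col[i] for col in cols) for i in range(n)]

scenarios = latin_hypercube(10, 3)  # e.g. 10 demand scenarios for 3 customers
```

    By construction, each of the 10 strata of every dimension contains exactly one sample, which is what gives LHS better space coverage than plain Monte Carlo for the same scenario count.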

  12. Remote sensing imagery classification using multi-objective gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2016-10-01

    Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first implemented to extract texture features of the RSI. Then, the texture features are combined with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out on the basis of the proposed method. To be specific, cluster centers are randomly generated initially. After that, the cluster centers are updated and optimized adaptively by employing the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained. Therefore, a number of classification results of the RSI are produced, and users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate the effectiveness of the proposed method, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results are compared with those produced by two single cluster validity index based and two state-of-the-art multi-objective optimization based classification methods. Comparison results show that the proposed method can achieve more accurate RSI classification.

  13. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
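
    The non-dominated sorting at the heart of SOP's center selection can be sketched as follows. In SOP's setting, the two minimized objectives for an evaluated point would be its expensive function value and the negated minimum distance to other evaluated points; this naive O(n² per front) version is an illustration, not the paper's implementation.

```python
def nondominated_fronts(points):
    """Naive non-dominated sorting (minimization on every objective).
    Returns successive Pareto fronts as sorted lists of indices into `points`."""
    def dominates(a, b):
        # a dominates b if a is no worse in all objectives and differs somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

    For example, `nondominated_fronts([(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)])` yields the first front `[0, 1, 2]` and a second front `[3, 4]`.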

  14. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it has provided a broad stage for the application of artificial intelligence theories and methods. Because the design of a sustainable city is essentially a constrained optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a new swarm intelligence algorithm. Its inspiration comes from the “teaching” and “learning” behavior of a classroom: the evolution of the population is realized by simulating the teacher's “teaching” and the students “learning” from each other, with few parameters, efficiency, conceptual simplicity, and ease of implementation. It has been successfully applied to scheduling, planning, configuration and other fields, where it achieved good results and has attracted increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement the random generation algorithm and evaluation model of the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has obvious advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
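
    A minimal sketch of one classical TLBO generation (teacher phase followed by learner phase) for an unconstrained minimization problem; the local-search component of TLBO_LS and the urban-planning evaluation model are not reproduced here.

```python
import random

def tlbo_step(pop, scores, f, rng):
    """One TLBO generation (teacher phase, then learner phase), minimizing f.
    Candidate moves are accepted only if they improve the learner's score."""
    dim, n = len(pop[0]), len(pop)
    # teacher phase: move toward the best learner, away from the class mean
    teacher = pop[min(range(n), key=scores.__getitem__)]
    mean = [sum(x[d] for x in pop) / n for d in range(dim)]
    for i, x in enumerate(pop):
        tf = rng.choice((1, 2))  # teaching factor
        cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d]) for d in range(dim)]
        fc = f(cand)
        if fc < scores[i]:
            pop[i], scores[i] = cand, fc
    # learner phase: move toward a better random peer, away from a worse one
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        sign = 1 if scores[j] < scores[i] else -1
        cand = [pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d])
                for d in range(dim)]
        fc = f(cand)
        if fc < scores[i]:
            pop[i], scores[i] = cand, fc
    return pop, scores

sphere = lambda x: sum(v * v for v in x)  # toy objective
rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
scores = [sphere(x) for x in pop]
best_initial = min(scores)
for _ in range(50):
    tlbo_step(pop, scores, sphere, rng)
```

    Because moves are only accepted on improvement, the best score is monotone non-increasing over generations.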

  15. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining momentum and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we then propose a genetic algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.

  16. Distribution path robust optimization of electric vehicle with multiple distribution centers

    PubMed Central

    Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi

    2018-01-01

    To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the distribution path problem of EVs with multiple distribution centers and consideration of the charging facilities is necessary. With minimum transport time as the goal, a robust optimization model of the EV distribution path with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme initially contains all road-by-road path data using the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm in this study, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169

  17. Approximating Multivariate Normal Orthant Probabilities Using the Clark Algorithm.

    DTIC Science & Technology

    1987-07-15

    Kent Eaton Army Research Institute Dr. Hans Crombag 5001 Eisenhower Avenue University of Leyden Alexandria, VA 22333 Education Research Center...Boerhaavelaan 2 Dr. John M. Eddins 2334 EN Leyden University of Illinois The NETHERLANDS 252 Engineering Research Laboratory Mr. Timothy Davey 103 South...Education and Training Ms. Kathleen Moreno Naval Air Station Navy Personnel R&D Center Pensacola, FL 32508 Code 62 San Diego, CA 92152-6800 Dr. Gary Marco

  18. An intelligent algorithm for identification of optimum mix of demographic features for trust in medical centers in Iran.

    PubMed

    Yazdanparast, R; Zadeh, S Abdolhossein; Dadras, D; Azadeh, A

    2018-06-01

    Healthcare quality is affected by various factors including trust. Patients' trust in healthcare providers is one of the most important factors for treatment outcomes. The presented study identifies the optimum mixture of patient demographic features with respect to trust in three large and busy medical centers in Tehran, Iran. The presented algorithm is composed of an adaptive neuro-fuzzy inference system and statistical methods, and is used to deal with data and environmental uncertainty. The required data are collected from three large hospitals using standard questionnaires. The reliability and validity of the collected data are evaluated using Cronbach's Alpha, factor analysis and statistical tests. The results of this study indicate that middle-aged patients with a low level of education and moderate illness severity, and young patients with a high level of education, moderate illness severity and moderate to weak financial status, have the highest trust in the considered medical centers. To the best of our knowledge, this is the first study that investigates patient demographic features using an adaptive neuro-fuzzy inference system in the healthcare sector. Second, it is a practical approach for continuous improvement of trust features in medical centers. Third, it deals with the existing uncertainty through the unique neuro-fuzzy approach. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA) based on the Fisher criterion and fuzzy set theory for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is then applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
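
    The FKNN membership assignment described above is commonly computed with Keller-style degrees; a minimal sketch assuming that standard form (not necessarily the paper's exact variant), with classes labeled 0..n_classes-1:

```python
from math import dist

def fknn_memberships(X, labels, k, n_classes):
    """Keller-style fuzzy k-NN membership degrees for training samples.
    u[i][j] = 0.51 + 0.49 * (n_ij / k) if j is sample i's own class,
              0.49 * (n_ij / k)        otherwise,
    where n_ij counts class-j points among the k nearest neighbors of
    sample i (the sample itself is excluded)."""
    U = []
    for i, x in enumerate(X):
        neighbors = sorted((j for j in range(len(X)) if j != i),
                           key=lambda j: dist(x, X[j]))[:k]
        counts = [0] * n_classes
        for j in neighbors:
            counts[labels[j]] += 1
        row = [0.49 * c / k for c in counts]
        row[labels[i]] += 0.51  # bias toward the sample's own class
        U.append(row)
    return U
```

    Each row sums to 1, and a sample surrounded by its own class gets membership 1.0 in that class.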

  20. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, using analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, namely the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms, for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  1. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on provided example scenarios, and discuss the issues faced and lessons learned in implementing the approach.

  2. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  3. Challenges in congenital syphilis surveillance: how are congenital syphilis investigations classified?

    PubMed

    Introcaso, Camille E; Gruber, DeAnn; Bradley, Heather; Peterman, Thomas A; Ewell, Joy; Wendell, Debbie; Foxhood, Joseph; Su, John R; Weinstock, Hillard S; Markowitz, Lauri E

    2013-09-01

    Congenital syphilis is a serious, preventable, and nationally notifiable disease. Despite the existence of a surveillance case definition, congenital syphilis is sometimes classified differently using an algorithm on the Centers for Disease Control and Prevention's case reporting form. We reviewed Louisiana's congenital syphilis electronic reporting system for investigations of infants born from January 2010 to October 2011, abstracted data required for classification, and applied the surveillance definition and the algorithm. We calculated the sensitivities and specificities of the algorithm and Louisiana's classification using the surveillance definition as the surveillance gold standard. Among 349 congenital syphilis investigations, the surveillance definition identified 62 cases. The algorithm had a sensitivity of 91.9% and a specificity of 64.1%. Louisiana's classification had a sensitivity of 50% and a specificity of 91.3% compared with the surveillance definition. The differences between the algorithm and the surveillance definition led to misclassification of congenital syphilis cases. The algorithm should match the surveillance definition. Other state and local health departments should assure that their reported cases meet the surveillance definition.

  4. Research and implementation of finger-vein recognition algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm effectively reduces errors in texture extraction at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has obvious advantages in texture extraction efficiency, matching accuracy, and algorithm efficiency.

  5. Identifying protein complexes based on brainstorming strategy.

    PubMed

    Shen, Xianjun; Zhou, Jin; Yi, Li; Hu, Xiaohua; He, Tingting; Yang, Jincai

    2016-11-01

    Protein complexes comprising interacting proteins in a protein-protein interaction network (PPI network) play a central role in driving biological processes within cells. Recently, more and more swarm-intelligence-based algorithms to detect protein complexes have been emerging, and they have become a research hotspot in the proteomics field. In this paper, we propose a novel algorithm for identifying protein complexes based on a brainstorming strategy (IPC-BSS), which integrates the main idea of swarm intelligence optimization with the improved K-means algorithm. Distance between nodes in the PPI network is defined by combining the network topology and gene ontology (GO) information. Inspired by the human brainstorming process, the IPC-BSS algorithm first selects the clustering center nodes, and then each is consolidated with the other nodes at short distance to form initial clusters. Finally, we put forward two ways of updating the initial clusters to search for optimal results. Experimental results show that our IPC-BSS algorithm outperforms the other classic algorithms on yeast and human PPI networks, and it obtains many predicted protein complexes with biological significance. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-Test Results

    NASA Technical Reports Server (NTRS)

    Brown, Nelson Andrew; Schaefer, Jacob Robert

    2013-01-01

    A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.

  7. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog-based explicit algorithms, whether ``energy-conserving'' or ``momentum-conserving'', do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation and exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).
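    The claim that a time-centered (Crank-Nicolson) update conserves discrete energy exactly can be checked on a toy problem. The sketch below applies the implicit-midpoint update to a 1-D harmonic oscillator, not a full PIC system, where the implicit step can be solved in closed form.

    ```python
    def crank_nicolson_oscillator(x, v, dt, steps):
        # Time-centered (Crank-Nicolson / implicit midpoint) update for the
        # oscillator x' = v, v' = -x, solved in closed form.  The update is a
        # Cayley transform, i.e. an exact rotation of (x, v), so the discrete
        # energy E = (x^2 + v^2)/2 is conserved to round-off at any timestep.
        a = dt / 2.0
        for _ in range(steps):
            x_new = ((1 - a * a) * x + 2 * a * v) / (1 + a * a)
            v = v - a * (x + x_new)
            x = x_new
        return x, v

    x, v = crank_nicolson_oscillator(1.0, 0.0, 0.5, 10000)
    energy = 0.5 * (x * x + v * v)   # initial energy was 0.5
    ```

    Even at this large timestep the energy stays at its initial value to round-off; explicit leap-frog would instead leave a bounded oscillating energy error, and a fully implicit treatment of the field equations would trade this exactness for light-wave dispersion error, which is what motivates the hybrid scheme.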

  8. Research on Abnormal Detection Based on Improved Combination of K - means and SVDD

    NASA Astrophysics Data System (ADS)

    Hao, Xiaohong; Zhang, Xiaofeng

    2018-01-01

    In order to improve the efficiency of network intrusion detection and reduce the false-alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and internally compact. Then, the SVDD algorithm is used to construct a minimal hypersphere around the training samples of each class. The class membership of a sample is determined by calculating its distance to the centers of the hyperspheres constructed by SVDD: if the distance from the test sample to a hypersphere's center is less than that hypersphere's radius, the test sample belongs to the corresponding class; otherwise it does not. After several such comparisons, the test sample is finally classified. In this paper, we use the KDD CUP99 data set to simulate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false-alarm rate, making it an effective network security protection method.
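    The hypersphere membership rule described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code; in practice the sphere centers and radii would come from SVDD training on the K-means-refined per-class samples.

    ```python
    def svdd_classify(sample, spheres):
        # Assign the sample to the class whose hypersphere contains it
        # (distance to center <= radius); ties go to the relatively closest
        # center.  Returns None when the sample lies outside every sphere,
        # i.e. it is flagged as anomalous.
        best = None
        for label, (center, radius) in spheres.items():
            d = sum((s - c) ** 2 for s, c in zip(sample, center)) ** 0.5
            if d <= radius and (best is None or d / radius < best[1]):
                best = (label, d / radius)
        return best[0] if best else None

    # Hypothetical trained spheres for two traffic classes.
    spheres = {"normal": ((0.0, 0.0), 1.0), "attack": ((5.0, 5.0), 2.0)}
    ```

    A sample outside every sphere maps to no known class, which is the anomaly signal in this scheme.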

  10. Computer analysis of three-dimensional morphological characteristics of the bile duct

    NASA Astrophysics Data System (ADS)

    Ma, Jinyuan; Chen, Houjin; Peng, Yahui; Shang, Hua

    2017-01-01

    In this paper, a computer image-processing algorithm for analyzing the morphological characteristics of bile ducts in Magnetic Resonance Cholangiopancreatography (MRCP) images was proposed. The algorithm consisted of mathematical morphology methods, including erosion, closing and skeletonization, and a spline curve fitting method to obtain the length and curvature of the center line of the bile duct. Across the 10 cases, the average length of the bile duct was 14.56 cm, and the maximum curvature was in the range of 0.111 to 2.339. These experimental results show that using the computer image-processing algorithm to assess the morphological characteristics of the bile duct is feasible; further research is needed to evaluate its potential clinical value.

  11. Western Trauma Association Critical Decisions in Trauma: Management of rib fractures.

    PubMed

    Brasel, Karen J; Moore, Ernest E; Albrecht, Roxie A; deMoya, Marc; Schreiber, Martin; Karmy-Jones, Riyad; Rowell, Susan; Namias, Nicholas; Cohen, Mitchell; Shatz, David V; Biffl, Walter L

    2017-01-01

    This is a recommended management algorithm from the Western Trauma Association addressing the management of adult patients with rib fractures. Because there is a paucity of published prospective randomized clinical trials that have generated Class I data, these recommendations are based primarily on published observational studies and expert opinion of Western Trauma Association members. The algorithm and accompanying comments represent a safe and sensible approach that can be followed at most trauma centers. We recognize that there will be patient, personnel, institutional, and situational factors that may warrant or require deviation from the recommended algorithm. We encourage institutions to use this as a guideline to develop their own local protocols.

  12. Time-frequency analysis-based time-windowing algorithm for the inverse synthetic aperture radar imaging of ships

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong

    2018-01-01

    An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected for time-frequency analysis after radial motion compensation. The selected range bin is the one with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin satisfying the judgment criterion is found, a sequence composed of the frequencies with the largest amplitudes in each moment's time-frequency spectrum of this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulated and real data show the effectiveness of the proposed algorithm, and comparisons with the image contrast-based algorithm (ICBA) are provided. The proposed algorithm achieves image contrast similar to, and entropy lower than, that of the ICBA.

  13. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two purposes (smoothing while keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise-smooth images. Many authors have regularized the original PM algorithm to overcome its drawbacks. We compared denoising performance using such 'fundamental' AD algorithms and one of the most powerful multiresolution tools available today, namely wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising, which saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
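    For reference, the original PM scheme that these 'fundamental' variants center around can be sketched in 1-D. This illustrative version uses one of the two conductance functions proposed by Perona and Malik; parameter values are arbitrary.

    ```python
    def perona_malik_1d(u, k=2.0, lam=0.2, iters=20):
        # 1-D Perona-Malik diffusion with conductance g = 1/(1 + (grad/k)^2):
        # the conductance collapses at strong gradients, so smoothing acts
        # within regions but stalls at edges.  Endpoints are held fixed.
        u = list(u)
        for _ in range(iters):
            new = list(u)
            for i in range(1, len(u) - 1):
                gr = u[i + 1] - u[i]        # right-side gradient
                gl = u[i - 1] - u[i]        # left-side gradient
                new[i] = u[i] + lam * (gr / (1.0 + (gr / k) ** 2)
                                       + gl / (1.0 + (gl / k) ** 2))
            u = new
        return u

    # A step edge: intra-region values are smoothed while the edge survives.
    out = perona_malik_1d([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])
    ```

    On a step edge the conductance is small, so the edge survives many iterations while low-contrast fluctuations inside regions are smoothed away; this is the intra- vs. inter-region behavior the abstract describes.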

  14. Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.

    PubMed

    Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth

    2018-01-01

    Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.

  15. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).
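    A generic simulated-annealing loop of the kind such an optimizer builds on can be sketched as follows. This is an illustration only: the actual collector-efficiency cost function and move set used at Glenn are not described in the abstract, so a hypothetical 1-D objective stands in for them.

    ```python
    import math
    import random

    def simulated_annealing(cost, state, neighbor, t0=1.0, cooling=0.995,
                            iters=2000, seed=0):
        # Generic SA loop: always accept improvements; accept a worse state
        # with probability exp(-delta/T), so the search can escape local
        # optima while the temperature T is cooled geometrically.
        rng = random.Random(seed)
        best = cur = state
        for k in range(iters):
            t = max(t0 * cooling ** k, 1e-12)
            cand = neighbor(cur, rng)
            delta = cost(cand) - cost(cur)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur
        return best

    # Hypothetical 1-D stand-in for the collector-efficiency objective.
    best = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0,
                               lambda x, rng: x + rng.uniform(-0.5, 0.5))
    ```

    The geometric cooling schedule and neighborhood size are the main tuning knobs; an MDC design problem would replace the toy cost with an electrode-geometry-to-efficiency evaluation.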

  16. Chaotic Time Series Analysis Method Developed for Stall Precursor Identification in High-Speed Compressors

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A new technique for rotating stall precursor identification in high-speed compressors has been developed at the NASA Lewis Research Center. This pseudo-correlation integral method uses a mathematical algorithm based on chaos theory to identify nonlinear dynamic changes in the compressor. Through a study of four different configurations of a high-speed compressor stage, a multistage compressor rig, and an axi-centrifugal engine test, this algorithm, using only a single pressure sensor, has consistently predicted the onset of rotating stall.

  17. Introduction to Numerical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture given at the Parallel Computing Summer Research Internship at the National Security Education Center. They give an introduction to numerical methods, in which repetitive algorithms are used to obtain approximate solutions to mathematical problems: sorting, searching, root finding, optimization, interpolation, extrapolation, least-squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, but they introduce errors that can lead to numerical instabilities if we are not careful.

  18. The Sixth Decision Regarding Perforated Duodenal Ulcer

    PubMed Central

    McMahon, Ross L.; Kakihara, Minoru; Pappas, Theodore N.; Eubanks, Steve

    2002-01-01

    This presentation reviews the literature regarding the current surgical treatment of perforated ulcers, describes the surgical techniques for laparoscopic repair, and reviews the clinical algorithm used by laparoscopic surgeons at Duke University Medical Center. PMID:12500837

  19. AUV Positioning Method Based on Tightly Coupled SINS/LBL for Underwater Acoustic Multipath Propagation.

    PubMed

    Zhang, Tao; Shi, Hongfei; Chen, Liping; Li, Yao; Tong, Jinwu

    2016-03-11

    This paper researches an AUV (Autonomous Underwater Vehicle) positioning method based on SINS (Strapdown Inertial Navigation System)/LBL (Long Base Line) tightly coupled algorithm. This algorithm mainly includes SINS-assisted searching method of optimum slant-range of underwater acoustic propagation multipath, SINS/LBL tightly coupled model and multi-sensor information fusion algorithm. Fuzzy correlation peak problem of underwater LBL acoustic propagation multipath could be solved based on SINS positional information, thus improving LBL positional accuracy. Moreover, introduction of SINS-centered LBL locating information could compensate accumulative AUV position error effectively and regularly. Compared to loosely coupled algorithm, this tightly coupled algorithm can still provide accurate location information when there are fewer than four available hydrophones (or within the signal receiving range). Therefore, effective positional calibration area of tightly coupled system based on LBL array is wider and has higher reliability and fault tolerance than loosely coupled. It is more applicable to AUV positioning based on SINS/LBL.

  20. AUV Positioning Method Based on Tightly Coupled SINS/LBL for Underwater Acoustic Multipath Propagation

    PubMed Central

    Zhang, Tao; Shi, Hongfei; Chen, Liping; Li, Yao; Tong, Jinwu

    2016-01-01

    This paper researches an AUV (Autonomous Underwater Vehicle) positioning method based on SINS (Strapdown Inertial Navigation System)/LBL (Long Base Line) tightly coupled algorithm. This algorithm mainly includes SINS-assisted searching method of optimum slant-range of underwater acoustic propagation multipath, SINS/LBL tightly coupled model and multi-sensor information fusion algorithm. Fuzzy correlation peak problem of underwater LBL acoustic propagation multipath could be solved based on SINS positional information, thus improving LBL positional accuracy. Moreover, introduction of SINS-centered LBL locating information could compensate accumulative AUV position error effectively and regularly. Compared to loosely coupled algorithm, this tightly coupled algorithm can still provide accurate location information when there are fewer than four available hydrophones (or within the signal receiving range). Therefore, effective positional calibration area of tightly coupled system based on LBL array is wider and has higher reliability and fault tolerance than loosely coupled. It is more applicable to AUV positioning based on SINS/LBL. PMID:26978361

  1. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot-fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  2. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    NASA Astrophysics Data System (ADS)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification for non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned in the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created with the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
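    A point-dose MU check in this spirit can be sketched as below. The factor names, values, and tolerance are illustrative assumptions in the general style of TG-114 point-dose checks, not the study's actual spreadsheet formalism.

    ```python
    def verify_mu(dose_cgy, output_cgy_per_mu, sc, sp, tpr, planned_mu,
                  tolerance=0.05):
        # Independent point-dose MU check: MU = D / (O * Sc * Sp * TPR),
        # then flag whether the TPS value agrees within the tolerance.
        # Factor names (collimator scatter Sc, phantom scatter Sp, TPR)
        # and numbers below are illustrative only.
        mu = dose_cgy / (output_cgy_per_mu * sc * sp * tpr)
        deviation = (planned_mu - mu) / mu
        return mu, abs(deviation) <= tolerance

    # Illustrative numbers: 200 cGy prescription, toy scatter/TPR factors,
    # hypothetical TPS value of 243 MU.
    mu, ok = verify_mu(200.0, 1.0, 0.99, 0.98, 0.85, planned_mu=243.0)
    ```

    An SSD plan would use PDD in place of TPR and add an inverse-square term; the comparison-within-tolerance step is the same.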

  3. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    2003-01-01

    NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented, and the SIBOA Testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.

  4. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework; it determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization, together with a cell-centered finite difference discretization scheme, is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.

  5. New Operational Algorithms for Particle Data from Low-Altitude Polar-Orbiting Satellites

    NASA Astrophysics Data System (ADS)

    Machol, J. L.; Green, J. C.; Rodriguez, J. V.; Onsager, T. G.; Denig, W. F.

    2010-12-01

    As part of the algorithm development effort started under the former National Polar-orbiting Operational Environmental Satellite System (NPOESS) program, the NOAA Space Weather Prediction Center (SWPC) is developing operational algorithms for the next generation of low-altitude polar-orbiting weather satellites. This presentation reviews the two new algorithms on which SWPC has focused: Energetic Ions (EI) and Auroral Energy Deposition (AED). Both algorithms take advantage of the improved performance of the Space Environment Monitor - Next (SEM-N) sensors over earlier SEM instruments flown on NOAA Polar Orbiting Environmental Satellites (POES). The EI algorithm iterates a piecewise power-law fit in order to derive a differential energy flux spectrum for protons with energies from 10-250 MeV. The algorithm provides the data in physical units (MeV/(cm^2 s sr keV)) instead of just counts/s as was done in the past, making the data generally more useful and easier to integrate into higher-level products. The AED algorithm estimates the energy flux deposited into the atmosphere by precipitating low- and medium-energy charged particles. The AED calculations include particle pitch-angle distributions, information that was not available from POES. This presentation also describes methods that we are evaluating for creating higher-level products that would specify the global particle environment based on real-time measurements.
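    The building block of such a piecewise power-law fit is a single-segment fit in log-log space, sketched below. This is illustrative only, not the operational SWPC code; the synthetic spectrum is a made-up example.

    ```python
    import math

    def power_law_fit(energies, fluxes):
        # Least-squares fit of j(E) = A * E**p in log-log coordinates,
        # where the model becomes the line log j = log A + p * log E.
        # A piecewise spectral fit repeats this over consecutive segments.
        xs = [math.log(e) for e in energies]
        ys = [math.log(f) for f in fluxes]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        p = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = math.exp(my - p * mx)
        return a, p

    # Synthetic spectrum j(E) = 5 * E^-2 sampled at a few energies in
    # the 10-250 MeV range.
    es = [10.0, 30.0, 100.0, 250.0]
    a, p = power_law_fit(es, [5.0 * e ** -2 for e in es])
    ```

    The iteration mentioned in the abstract would adjust segment boundaries and refit until the piecewise spectrum is self-consistent; only the per-segment fit is shown here.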

  6. Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.

    2004-01-01

    The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.

  7. Dosimetry audit simulation of treatment planning system in multicenters radiotherapy

    NASA Astrophysics Data System (ADS)

    Kasmuri, S.; Pawiro, S. A.

    2017-07-01

    The Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially occur at this stage; an error here may result in a systematic error. The aim of this study is to verify the TPS dosimetry and determine the range of deviation between calculated and measured dose. This study used the CIRS 002LFC phantom, representing the human thorax, and simulated all stages of external-beam radiotherapy. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Dose was measured using a 0.6 cc ionization chamber. The results of this study showed that, in general, the deviation of all test cases in the four centers was within the agreement criteria, with average deviations of about -0.17±1.59 %, -1.64±1.92 %, 0.34±1.34 % and 0.13±1.81 %. The conclusion of this study was that all the TPSs involved showed good performance. The superposition algorithm showed somewhat poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92 %, -0.17±1.59 % and -0.27±1.51 %, respectively.

  8. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are the places that consist of networks of remote servers to store, access and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
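    The selection rule described above reduces to a per-center sum followed by an argmin. A minimal sketch follows; the data layout (names, per-server VM energy lists, units) is hypothetical.

    ```python
    def select_data_center(centers):
        # Each data center is a list of servers, and each server a list of
        # its VM energies; a center's total energy is the sum over servers,
        # and new tasks are routed to the center with the least total.
        totals = {name: sum(sum(vms) for vms in servers)
                  for name, servers in centers.items()}
        return min(totals, key=totals.get), totals

    # Hypothetical layout: two centers, each with two servers.
    best, totals = select_data_center({
        "dc1": [[1.2, 0.8], [2.0]],
        "dc2": [[0.5, 0.5], [0.7]],
    })
    ```

    In a live scheduler the totals would be refreshed as VMs are created and destroyed, so routing tracks the current energy state rather than a static snapshot.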

  9. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the information of relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction; these parameters simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks, so better matching information can be obtained after correlative operation in the oblique direction. An iteratively weighted least-squares method is used to eliminate block-matching error, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly from the image; the shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by using simulated annealing in the block-matching search. An image processing system based on DSP was used to test this algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a definition of 720×576 pixels was chosen as the input video source. Experimental results show that the algorithm can be performed in the real-time processing system and achieves accurate matching precision.

  10. Laser Spot Center Detection and Comparison Test

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Xu, Zhengjie; Fu, Deli; Hu, Cong

    2018-04-01

    High efficiency and precision of spot center detection are the foundation of avionics instrument navigation and the basis of optical measurement for many applications, with a noticeable impact on overall system performance. Among such tasks, laser spot detection is very important in optical measurement technology. In order to improve the low accuracy of the spot center position, the algorithm is improved on the basis of circle fitting. Pretreatment is applied before the circle fitting, and an improved adaptive denoising filter based on TV (total variation) repair techniques effectively improves the accuracy of the spot center position. At the same time, the pretreatment and denoising effectively reduce the influence of Gaussian white noise, which enhances the anti-jamming capability.
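    A common algebraic approach to circle fitting for spot-center estimation is the Kasa least-squares fit, sketched below. This is illustrative of circle fitting in general and not necessarily the paper's exact method.

    ```python
    def fit_circle(points):
        # Algebraic (Kasa) least-squares circle fit: solve
        # x^2 + y^2 = a*x + b*y + c for (a, b, c); the center is then
        # (a/2, b/2) and the radius sqrt(c + cx^2 + cy^2).
        ata = [[0.0] * 3 for _ in range(3)]
        atb = [0.0] * 3
        for x, y in points:
            row, t = (x, y, 1.0), x * x + y * y
            for i in range(3):
                atb[i] += row[i] * t
                for j in range(3):
                    ata[i][j] += row[i] * row[j]
        # Solve the 3x3 normal equations by Gaussian elimination with pivoting.
        for i in range(3):
            piv = max(range(i, 3), key=lambda r: abs(ata[r][i]))
            ata[i], ata[piv] = ata[piv], ata[i]
            atb[i], atb[piv] = atb[piv], atb[i]
            for r in range(i + 1, 3):
                f = ata[r][i] / ata[i][i]
                for c in range(i, 3):
                    ata[r][c] -= f * ata[i][c]
                atb[r] -= f * atb[i]
        m = [0.0] * 3
        for i in range(2, -1, -1):
            m[i] = (atb[i] - sum(ata[i][j] * m[j] for j in range(i + 1, 3))) / ata[i][i]
        cx, cy = m[0] / 2.0, m[1] / 2.0
        return cx, cy, (m[2] + cx * cx + cy * cy) ** 0.5

    # Noise-free points on a circle centered at (2, 3) with radius 5.
    cx, cy, r = fit_circle([(7.0, 3.0), (-3.0, 3.0), (2.0, 8.0), (2.0, -2.0)])
    ```

    On real spot images the points would be edge pixels extracted after the denoising step, and the least-squares averaging is what suppresses residual pixel noise in the recovered center.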

  11. Radial basis function neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Dhawan, Atam P.

    1993-01-01

    This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle Main Engine (SSME). The locations of the Radial Basis Function (RBF) node centers were determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
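    The center-placement step can be illustrated with a plain K-means sketch, shown in 1-D for brevity. The initialization and parameter choices here are assumptions, not the paper's exact procedure; the per-node variances would then be taken from each center's neighborhood of points.

    ```python
    def kmeans_1d(data, k=2, iters=20):
        # Plain K-means on scalars: seed centers from evenly spaced sorted
        # samples, then alternate assignment and mean-update steps.
        centers = sorted(data)[::max(1, len(data) // k)][:k]
        for _ in range(iters):
            groups = [[] for _ in centers]
            for x in data:
                groups[min(range(len(centers)),
                           key=lambda i: abs(x - centers[i]))].append(x)
            centers = [sum(g) / len(g) if g else c
                       for g, c in zip(groups, centers)]
        return sorted(centers)

    # Two well-separated blobs; the centers land on each blob's mean.
    centers = kmeans_1d([0.0, 0.2, 0.1, 5.2, 4.9, 5.0])
    ```

    For RBF networks the same procedure runs on the multidimensional sensor vectors; each resulting center becomes an RBF node, with its width set from the spread of the points assigned to it.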

  12. RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

    DTIC Science & Technology

    2015-09-30

    NOAA ), Robin Hogan (ECMWF), a number of colleagues at the Max-Planck Institute, and Will Sawyer and Marcus Wetzstein (Swiss Supercomputer Center...somewhat out of date, so that the accuracy of our simplified algorithms can not be thoroughly evaluated. RRTMGP_LW_v0 has been provided to our NASA ...support, RRTMGP_LW_v0, has been completed and distributed to selected colleagues at modeling centers, including NOAA , NCAR, and CSCS. Our colleagues

  13. Accuracy assessment of pharmacogenetically predictive warfarin dosing algorithms in patients of an academic medical center anticoagulation clinic.

    PubMed

    Shaw, Paul B; Donovan, Jennifer L; Tran, Maichi T; Lemon, Stephenie C; Burgwinkle, Pamela; Gore, Joel

    2010-08-01

    The objectives of this retrospective cohort study are to evaluate the accuracy of pharmacogenetic warfarin dosing algorithms in predicting therapeutic dose and to determine whether this degree of accuracy warrants the routine use of genotyping to prospectively dose patients newly started on warfarin. Seventy-one patients of an outpatient anticoagulation clinic at an academic medical center who were age 18 years or older, on a stable therapeutic warfarin dose with an international normalized ratio (INR) goal between 2.0 and 3.0, and with cytochrome P450 isoenzyme 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) genotypes available between January 1, 2007 and September 30, 2008 were included. Six pharmacogenetic warfarin dosing algorithms were identified from the medical literature. Additionally, a 5 mg fixed-dose approach was evaluated. Three algorithms, Zhu et al. (Clin Chem 53:1199-1205, 2007), Gage et al. (J Clin Ther 84:326-331, 2008), and the International Warfarin Pharmacogenetic Consortium (IWPC) (N Engl J Med 360:753-764, 2009), were similar in the primary accuracy endpoints, with mean absolute error (MAE) ranging from 1.7 to 1.8 mg/day and coefficient of determination R² from 0.61 to 0.66. However, the Zhu et al. algorithm severely over-predicted dose (defined as ≥2× or ≥2 mg/day more than the actual dose) in twice as many patients (14 vs. 7%) as Gage et al. 2008 and IWPC 2009. In conclusion, the algorithms published by Gage et al. 2008 and the IWPC 2009 were the two most accurate pharmacogenetically based equations available in the medical literature for predicting therapeutic warfarin dose in our study population. However, the degree of accuracy demonstrated does not support the routine use of genotyping to prospectively dose all patients newly started on warfarin.

  14. Moving target parameter estimation of SAR after two looks cancellation

    NASA Astrophysics Data System (ADS)

    Gan, Rongbing; Wang, Jianguo; Gao, Xiang

    2005-11-01

    Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are preserved while stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range and cross-range velocities of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the shift is estimated from the Doppler frequency center (DFC), which in turn is estimated with the Wigner-Ville distribution (WVD). Because the range position and cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that our algorithms have good performance and estimate the moving-target parameters accurately.
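    The two-look cancellation step can be sketched in a few lines on synthetic data. The stationary scene is modeled as identical in both looks (so it cancels exactly, which real clutter does only approximately), and a simple global mean-plus-k-sigma threshold stands in for the paper's CFAR detector; all values are illustrative.

```python
# Sketch of two-look cancellation for SAR moving-target detection
# on synthetic data (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)
look1 = rng.normal(0, 1, (64, 64))   # first half-aperture image
look2 = look1.copy()                 # stationary scene: identical in both looks
look1[30, 35] += 8.0                 # moving target, position in look 1
look2[30, 40] += 8.0                 # same target, shifted in look 2

diff = np.abs(look1 - look2)         # stationary background cancels

# Simple global CFAR-style threshold: mean + k * std of the difference image
k = 5.0
thresh = diff.mean() + k * diff.std()
detections = np.argwhere(diff > thresh)
print(detections)

# The target appears at both its look-1 and look-2 positions;
# their offset is the position shift used to estimate velocity.
dx = detections[1][1] - detections[0][1]
print(dx)
```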

  15. A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying

    2018-03-01

    Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized, powerful control center and a two-way communication network between the system operators and energy end-users. The increasing scale of user participation in smart grids may limit the applicability of such centralized methods. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the load reduction requested by the system operator, and is solved by using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus delivers a highly desired distributed solution. It avoids the need for a centralized coordinator or control center, and can achieve satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
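    The constrained optimization at the heart of the formulation (maximize total user utility subject to a requested total load reduction) can be sketched with a simple price-coordination (dual decomposition) scheme in which each user updates only its own variable. This is a sketch of the problem, not the paper's distributed dynamic programming algorithm; the quadratic utilities and step size below are assumptions.

```python
# A price-coordinated sketch of the load-management problem: each user i
# chooses a load reduction x_i to maximize a_i*x_i - 0.5*b_i*x_i^2 minus a
# shared price lam, while lam is adjusted until the total reduction meets
# the operator's request R. Utility parameters are illustrative.

def solve(a, b, R, steps=2000, lr=0.01):
    lam = 0.0
    for _ in range(steps):
        # Local step: each user solves its own one-variable problem
        x = [max(0.0, (ai - lam) / bi) for ai, bi in zip(a, b)]
        # Price step: raise the price if users over-shoot the target R
        # (in a network this update could be done by gossip/consensus)
        lam += lr * (sum(x) - R)
    return x, lam

a = [4.0, 3.0, 2.0]   # marginal utility of reduction per user
b = [1.0, 1.0, 1.0]   # curvature (diminishing returns)
x, lam = solve(a, b, R=3.0)
print([round(v, 2) for v in x], round(lam, 2))
```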

  16. Photometric analysis in the Kepler Science Operations Center pipeline

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.

    2010-07-01

    We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.

  17. Implementation of the Algorithm for Congestion control in the Dynamic Circuit Network (DCN)

    NASA Astrophysics Data System (ADS)

    Nalamwar, H. S.; Ivanov, M. A.; Buddhawar, G. U.

    2017-01-01

    Transmission Control Protocol (TCP) incast congestion happens when a number of senders transmit in parallel to the same server on a high-bandwidth, low-latency network. For many data center applications, such as search engines, heavy traffic converges on such a server. Incast congestion degrades overall performance because packets are lost at the server side due to buffer overflow, and as a result the response time becomes longer. In this work, we focus on TCP throughput, round-trip time (RTT), receive window, and retransmission. Our method is based on proactively adjusting the TCP receive window before packet loss occurs. We aim to avoid wasting bandwidth by adjusting the window size according to the number of packets. To avoid packet loss, the ICTCP algorithm has been implemented in the data center network at the top-of-rack (ToR) level.
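    A minimal sketch of the receive-window idea, in the spirit of ICTCP: grow a connection's advertised window only when measured throughput tracks the expectation and spare bandwidth remains, and shrink it when the gap is large, before any loss occurs. The thresholds, MSS value, and grow/shrink rule below are illustrative assumptions, not the published ICTCP parameters.

```python
# Simplified receive-window adjustment sketch in the spirit of ICTCP:
# act on the gap between expected and measured throughput, before loss.

MSS = 1460  # bytes; typical Ethernet TCP segment size

def adjust_window(rwnd, expected_bps, measured_bps, available_bps,
                  grow=0.1, shrink=0.5):
    """Return a new receive window (bytes, a multiple of MSS)."""
    ratio = (expected_bps - measured_bps) / expected_bps
    if ratio <= grow and available_bps > expected_bps * grow:
        rwnd += MSS                      # throughput near expectation: grow
    elif ratio >= shrink:
        rwnd = max(2 * MSS, rwnd - MSS)  # large gap: shrink toward a 2*MSS floor
    return rwnd

print(adjust_window(10 * MSS, 100e6, 98e6, 20e6))  # grows by one MSS
print(adjust_window(10 * MSS, 100e6, 40e6, 20e6))  # shrinks by one MSS
```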

  18. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially with limited local memory resources; reducing memory access successfully saves the notorious power consumption. The key to reducing memory accesses is a center-biased algorithm, which performs the motion vector (MV) search with the minimum search data. Considering data reusability, the proposed dual-search-windowing (DSW) approach uses the secondary window as an option per searching necessity. By doing so, the loading of search windows can be alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/s, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/s.
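    A center-biased search of the kind the architecture targets can be sketched as a small-diamond descent from the window center, so nearly all SAD evaluations stay close to the center. The synthetic frames and block size below are illustrative.

```python
# Center-biased block-matching sketch: small-diamond descent from the
# window center, minimizing the sum of absolute differences (SAD).
import numpy as np

def sad(ref, cur, bx, by, dx, dy, B=8):
    """SAD between the current block and a displaced candidate block."""
    block = cur[by:by+B, bx:bx+B].astype(np.int64)
    cand = ref[by+dy:by+dy+B, bx+dx:bx+dx+B].astype(np.int64)
    return int(np.abs(block - cand).sum())

def center_biased_search(ref, cur, bx, by, max_iter=16):
    """Greedy small-diamond refinement: candidates cluster near the center,
    which is what keeps the required search data small."""
    dx = dy = 0
    for _ in range(max_iter):
        best = (sad(ref, cur, bx, by, dx, dy), dx, dy)
        for ddx, ddy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (sad(ref, cur, bx, by, dx + ddx, dy + ddy), dx + ddx, dy + ddy)
            best = min(best, cand)
        if (best[1], best[2]) == (dx, dy):
            break                        # center of the diamond is best: done
        _, dx, dy = best
    return dx, dy

ref = np.add.outer(np.arange(64), 100 * np.arange(64))  # smooth synthetic frame
cur = np.roll(ref, 3, axis=1)            # scene shifted 3 pixels to the right
print(center_biased_search(ref, cur, bx=24, by=24))     # -> (-3, 0)
```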

  19. Photometric Analysis in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.

    2010-01-01

    We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss the science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.

  20. Ground-truth collections at the MTI core sites

    NASA Astrophysics Data System (ADS)

    Garrett, Alfred J.; Kurzeja, Robert J.; Parker, Matthew J.; O'Steen, Byron L.; Pendergast, Malcolm M.; Villa-Aleman, Eliel

    2001-08-01

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core sites for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.

  1. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing has been proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadically distributed resources that are more flexible and movable than those of a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  2. Small target detection using objectness and saliency

    NASA Astrophysics Data System (ADS)

    Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao

    2017-10-01

    We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm with high localization quality at acceptable computational cost. First, we obtain the objectness map as in BING[1] and use NMS to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their locations, and we set the center points of the K classes as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations are proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
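    The seed-point stage described above can be sketched as follows: cluster the top-N objectness points by location and take the cluster centers as seeds for object potential regions. The points are synthetic, the farthest-first initialization is an illustrative choice, and the objectness and saliency stages themselves are not reproduced.

```python
# Seed-point extraction sketch: k-means over high-objectness point
# locations; cluster centers become seeds, each seed gets a fixed-size
# potential region (32x32 here, clamped to the image origin).
import numpy as np

def kmeans_seeds(points, k, iters=20):
    pts = points.astype(float)
    centers = [pts[0]]                     # deterministic farthest-first init
    while len(centers) < k:
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                 # standard Lloyd iterations
        labels = np.linalg.norm(pts[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    return centers

pts = np.array([[10, 10], [12, 11], [11, 9],      # one small target
                [50, 52], [49, 50], [51, 51]])    # another small target
seeds = kmeans_seeds(pts, k=2)
regions = [(max(0, int(cx) - 16), max(0, int(cy) - 16), 32, 32)
           for cx, cy in seeds]            # (x, y, w, h) potential regions
print(sorted(regions))
```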

  3. Parameter identification for structural dynamics based on interval analysis algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters within a tight, accurate region. To overcome the lack of sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, the algorithm can obtain not only the center estimates of the parameters but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm is presented in recursive form. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated against three identification criteria.

  4. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    PubMed

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was applied to develop an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of the EEG records, then calculates the energy of each IMF and performs detection based on an energy threshold and a minimum-duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
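    The detection stage can be sketched directly from the description: given one IMF (assumed precomputed by any EMD implementation), threshold its energy in fixed windows and require a minimum duration. The signal, threshold, and window length below are illustrative, not the study's settings.

```python
# Energy-threshold + minimum-duration detection sketch on a single IMF.
# The EMD step itself is assumed done elsewhere; the "IMF" here is synthetic.
import numpy as np

def detect(imf, fs, win_s=1.0, thresh=5.0, min_dur_s=3.0):
    """Return [start, end) sample-index pairs of detected events."""
    win = int(win_s * fs)
    n_win = len(imf) // win
    energy = np.array([np.sum(imf[i*win:(i+1)*win] ** 2) for i in range(n_win)])
    above = energy > thresh
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):  # trailing False closes runs
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * win_s >= min_dur_s:        # minimum-duration decision
                events.append((start * win, i * win))
            start = None
    return events

fs = 100
t = np.arange(0, 20, 1 / fs)
imf = 0.1 * np.sin(2 * np.pi * 10 * t)                  # low-energy background
imf[500:1000] += np.sin(2 * np.pi * 5 * t[500:1000])    # 5-s high-energy burst
print(detect(imf, fs))
```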

  5. A new event detector designed for the Seismic Research Observatories

    USGS Publications Warehouse

    Murdock, James N.; Hutt, Charles R.

    1983-01-01

    A new short-period event detector has been implemented on the Seismic Research Observatories. For each signal detected, a printed output gives estimates of the time of onset of the signal, direction of the first break, quality of onset, period and maximum amplitude of the signal, and an estimate of the variability of the background noise. On the SRO system, the new algorithm runs ~2.5x faster than the former (power level) detector. This increase in speed is due to the design of the algorithm: all operations can be performed by simple shifts, additions, and comparisons (floating point operations are not required). Even though a narrow-band recursive filter is not used, the algorithm appears to detect events competitively with those algorithms that employ such filters. Tests at Albuquerque Seismological Laboratory on data supplied by Blandford suggest performance commensurate with the on-line detector of the Seismic Data Analysis Center, Alexandria, Virginia.
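    A detector in this spirit (shifts, additions, and comparisons only; no floating point) can be sketched with short- and long-term averages kept in fixed point, triggering when the short-term average exceeds a shifted multiple of the background. All parameters below are illustrative, not the SRO detector's.

```python
# Fixed-point event-detector sketch: STA/LTA-style trigger using only
# shifts, adds, and compares. Accumulators hold avg << shift so integer
# truncation does not stall the running averages.

def detect_onsets(samples, sta_shift=2, lta_shift=6, ratio_shift=2):
    a0 = abs(samples[0])
    sta_acc = a0 << sta_shift              # short-term average * 2^sta_shift
    lta_acc = a0 << lta_shift              # background average * 2^lta_shift
    onsets, armed = [], True
    for i, x in enumerate(samples):
        a = abs(x)
        sta_acc += a - (sta_acc >> sta_shift)
        lta_acc += a - (lta_acc >> lta_shift)
        sta, lta = sta_acc >> sta_shift, lta_acc >> lta_shift
        if armed and sta > (lta << ratio_shift):   # STA > 4x background
            onsets.append(i)
            armed = False                  # one pick per event
        elif sta < lta:
            armed = True                   # re-arm once the signal decays
    return onsets

noise = [4, 5, 3, 4, 5, 4, 3, 5] * 40      # quiet background (320 samples)
event = [60] * 30                          # impulsive arrival
print(detect_onsets(noise + event + noise))
```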

  6. A Fusion Algorithm for GFP Image and Phase Contrast Image of Arabidopsis Cell Based on SFL-Contourlet Transform

    PubMed Central

    Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling

    2013-01-01

    A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), this algorithm uses different fusion strategies for different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance color background against gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. The experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716

  7. Research on hotspot discovery in internet public opinions based on improved K-means.

    PubMed

    Wang, Gensheng

    2013-01-01

    Effectively discovering hotspots in Internet public opinion is an active research field, and it plays a key role in helping governments and corporations find useful information in the mass of Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and express their characteristics, and to define the similarity between two website texts. Second, the clustering principle and the method for selecting initial classification centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm increases the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice.
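    Two of the ingredients named above (a text-similarity measure and a non-random choice of initial classification centers) can be sketched as follows; the cosine measure over term counts and the farthest-first selection rule are illustrative choices, not necessarily the paper's.

```python
# Sketch: cosine similarity between texts as term-count vectors, plus a
# deterministic initial-center selection that picks mutually dissimilar
# texts instead of random seeds.
import math
from collections import Counter

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def initial_centers(docs, k):
    """Pick the first doc, then repeatedly the doc least similar to all
    chosen centers (farthest-first in similarity space)."""
    vecs = [Counter(d.split()) for d in docs]
    centers = [0]
    while len(centers) < k:
        scores = [max(cosine(vecs[i], vecs[c]) for c in centers)
                  for i in range(len(vecs))]
        centers.append(min(range(len(vecs)), key=lambda i: scores[i]))
    return centers

docs = [
    "flood rescue city flood river",
    "river flood warning city",
    "election vote candidate city",
    "vote election result",
]
print(initial_centers(docs, 2))  # two dissimilar texts chosen as centers
```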

  8. Research on Hotspot Discovery in Internet Public Opinions Based on Improved K-Means

    PubMed Central

    2013-01-01

    Effectively discovering hotspots in Internet public opinion is an active research field, and it plays a key role in helping governments and corporations find useful information in the mass of Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and express their characteristics, and to define the similarity between two website texts. Second, the clustering principle and the method for selecting initial classification centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm increases the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice. PMID:24106496

  9. Evaluation of algorithms for estimating wheat acreage from multispectral scanner data. [Kansas and Texas

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.

    1976-01-01

    The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.

  10. Data Products From Particle Detectors On-Board NOAA's Newest Space Weather Monitor

    NASA Astrophysics Data System (ADS)

    Kress, B. T.; Rodriguez, J. V.; Onsager, T. G.

    2017-12-01

    NOAA's newest Geostationary Operational Environmental Satellite, GOES-16, was launched on 19 November 2016. Instrumentation on-board GOES-16 includes the new Space Environment In-Situ Suite (SEISS), which has been collecting data since 8 January 2017. SEISS is composed of five magnetospheric particle sensor units: an electrostatic analyzer for measuring 30 eV - 30 keV ions and electrons (MPS-LO), a high energy particle sensor (MPS-HI) that measures keV to MeV electrons and protons, east- and west-facing Solar and Galactic Proton Sensor (SGPS) units with 13 differential channels between 1 and 500 MeV, and an Energetic Heavy Ion Sensor (EHIS) that measures 30 species of heavy ions (He-Ni) in five energy bands in the 10-200 MeV/nuc range. Measurements of low-energy magnetospheric particles by MPS-LO and of heavy ions by EHIS are new capabilities not previously flown on the GOES system. Real-time data from GOES-16 will support space weather monitoring and first-principles space weather modeling by NOAA's Space Weather Prediction Center (SWPC). Space weather level 2+ data products under development at NOAA's National Centers for Environmental Information (NCEI) include the Solar Energetic Particle (SEP) Event Detection algorithm. Legacy components of the SEP event detection algorithm (currently produced by SWPC) include the Solar Radiation Storm Scales; new components will include, e.g., event fluences. New level 2+ data products also include the SEP event Linear Energy Transfer (LET) algorithm, for transforming energy spectra from EHIS into LET spectra, and the Density and Temperature Moments and Spacecraft Charging algorithm, which identifies electron and ion signatures of spacecraft surface (frame) charging in the MPS-LO fluxes. Densities and temperatures from MPS-LO will also be used to support a magnetopause crossing detection algorithm. The new data products will provide real-time indicators of potential radiation hazards for the satellite community and data for future studies of space weather effects. This presentation includes an overview of these algorithms and examples of their performance during recent corotating interaction region (CIR) associated radiation belt enhancements and a solar particle event on 14-15 July 2017.

  11. Effectiveness and safety of procalcitonin-guided antibiotic therapy in lower respiratory tract infections in "real life": an international, multicenter poststudy survey (ProREAL).

    PubMed

    Albrich, Werner C; Dusemund, Frank; Bucher, Birgit; Meyer, Stefan; Thomann, Robert; Kühn, Felix; Bassetti, Stefano; Sprenger, Martin; Bachli, Esther; Sigrist, Thomas; Schwietert, Martin; Amin, Devendra; Hausfater, Pierre; Carre, Eric; Gaillat, Jacques; Schuetz, Philipp; Regez, Katharina; Bossart, Rita; Schild, Ursula; Mueller, Beat

    2012-05-14

    In controlled studies, procalcitonin (PCT) has safely and effectively reduced antibiotic drug use for lower respiratory tract infections (LRTIs). However, controlled trial data may not reflect real life. We performed an observational quality surveillance in 14 centers in Switzerland, France, and the United States. Consecutive adults with LRTI presenting to emergency departments or outpatient offices were enrolled and registered on a website, which provided a previously published PCT algorithm for antibiotic guidance. The primary end point was duration of antibiotic therapy within 30 days. Of 1759 patients, 86.4% had a final diagnosis of LRTI (community-acquired pneumonia, 53.7%; acute exacerbation of chronic obstructive pulmonary disease, 17.1%; and bronchitis, 14.4%). Algorithm compliance overall was 68.2%, with differences between diagnoses (bronchitis, 81.0%; AECOPD, 70.1%; and community-acquired pneumonia, 63.7%; P < .001), outpatients (86.1%) and inpatients (65.9%) (P < .001), algorithm-experienced (82.5%) and algorithm-naive (60.1%) centers (P < .001), and countries (Switzerland, 75.8%; France, 73.5%; and the United States, 33.5%; P < .001). After multivariate adjustment, antibiotic therapy duration was significantly shorter if the PCT algorithm was followed compared with when it was overruled (5.9 vs 7.4 days; difference, -1.51 days; 95% CI, -2.04 to -0.98; P < .001). No increase was noted in the risk of the combined adverse outcome end point within 30 days of follow-up when the PCT algorithm was followed regarding withholding antibiotics on hospital admission (adjusted odds ratio, 0.83; 95% CI, 0.44 to 1.55; P = .56) and regarding early cessation of antibiotics (adjusted odds ratio, 0.61; 95% CI, 0.36 to 1.04; P = .07). This study validates previous results from controlled trials in real-life conditions and demonstrates that following a PCT algorithm effectively reduces antibiotic use without increasing the risk of complications. 
Preexisting differences in antibiotic prescribing affect compliance with antibiotic stewardship efforts. isrctn.org Identifier: ISRCTN40854211.

  12. Use of sexually transmitted disease risk assessment algorithms for selection of intrauterine device candidates.

    PubMed

    Morrison, C S; Sekadde-Kigondu, C; Miller, W C; Weiner, D H; Sinei, S K

    1999-02-01

    Sexually transmitted diseases (STD) are an important contraindication for intrauterine device (IUD) insertion. Nevertheless, laboratory testing for STD is not possible in many settings. The objective of this study is to evaluate the use of risk assessment algorithms to predict STD and subsequent IUD-related complications among IUD candidates. Among 615 IUD users in Kenya, the following algorithms were evaluated: 1) an STD algorithm based on US Agency for International Development (USAID) Technical Working Group guidelines; 2) a Centers for Disease Control and Prevention (CDC) algorithm for management of chlamydia; and 3) a data-derived algorithm modeled from study data. Algorithms were evaluated for prediction of chlamydial and gonococcal infection at 1 month and complications (pelvic inflammatory disease [PID], IUD removals, and IUD expulsions) over 4 months. Women with STD were more likely to develop complications than women without STD (19% vs 6%; risk ratio = 2.9; 95% CI 1.3-6.5). For STD prediction, the USAID algorithm was 75% sensitive and 48% specific, with a positive likelihood ratio (LR+) of 1.4. The CDC algorithm was 44% sensitive and 72% specific, LR+ = 1.6. The data-derived algorithm was 91% sensitive and 56% specific, with LR+ = 2.0 and LR- = 0.2. Category-specific LR for this algorithm identified women with very low (< 1%) and very high (29%) infection probabilities. The data-derived algorithm was also the best predictor of IUD-related complications. These results suggest that use of STD algorithms may improve selection of IUD users. Women at high risk for STD could be counseled to avoid IUD, whereas women at moderate risk should be monitored closely and counseled to use condoms.
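    The likelihood ratios quoted above follow directly from sensitivity and specificity. The sketch below shows the arithmetic with the data-derived algorithm's reported 91% sensitivity and 56% specificity; the small difference from the published LR+ of 2.0 is rounding, since the study computed from raw counts.

```python
# Likelihood ratios from sensitivity and specificity:
# LR+ = sens / (1 - spec), LR- = (1 - sens) / spec.

def likelihood_ratios(sens, spec):
    return sens / (1.0 - spec), (1.0 - sens) / spec

# Data-derived algorithm as reported: 91% sensitive, 56% specific
lr_pos, lr_neg = likelihood_ratios(0.91, 0.56)
print(round(lr_pos, 2), round(lr_neg, 2))
```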

  13. Validation of an International Classification of Diseases, Ninth Revision Code Algorithm for Identifying Chiari Malformation Type 1 Surgery in Adults.

    PubMed

    Greenberg, Jacob K; Ladner, Travis R; Olsen, Margaret A; Shannon, Chevis N; Liu, Jingxia; Yarbrough, Chester K; Piccirillo, Jay F; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2015-08-01

    The use of administrative billing data may enable large-scale assessments of treatment outcomes for Chiari Malformation type I (CM-1). However, to utilize such data sets, validated International Classification of Diseases, Ninth Revision (ICD-9-CM) code algorithms for identifying CM-1 surgery are needed. Our objective was to validate 2 ICD-9-CM code algorithms identifying patients undergoing CM-1 decompression surgery. We retrospectively analyzed the validity of 2 ICD-9-CM code algorithms for identifying adult CM-1 decompression surgery performed at 2 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-1), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression, or laminectomy). Algorithm 2 restricted this group to patients with a primary diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Among 340 first-time admissions identified by Algorithm 1, the overall PPV for CM-1 decompression was 65%. Among the 214 admissions identified by Algorithm 2, the overall PPV was 99.5%. The PPV for Algorithm 1 was lower in the Vanderbilt (59%) cohort, males (40%), and patients treated between 2009 and 2013 (57%), whereas the PPV of Algorithm 2 remained high (≥99%) across subgroups. The sensitivities of Algorithms 1 (86%) and 2 (83%) were above 75% in all subgroups. ICD-9-CM code Algorithm 2 has excellent PPV and good sensitivity to identify adult CM-1 decompression surgery. These results lay the foundation for studying CM-1 treatment outcomes by using large administrative databases.
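    The validation metrics above are straightforward to compute from a confusion of flagged versus chart-confirmed admissions. The counts below are a hypothetical reconstruction chosen to be consistent with the reported Algorithm 2 figures, not the study data.

```python
# PPV and sensitivity for a code algorithm, from illustrative counts.

def ppv(true_pos, false_pos):
    """Of admissions the code algorithm flags, the fraction that truly
    were CM-1 decompression."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos, false_neg):
    """Of all true CM-1 decompressions, the fraction the algorithm finds."""
    return true_pos / (true_pos + false_neg)

# Algorithm 2-like behavior: very few false positives, some missed cases
tp, fp, fn = 213, 1, 44
print(round(ppv(tp, fp), 3), round(sensitivity(tp, fn), 3))
```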

  14. Prevalence of Traditional and Reverse-Algorithm Syphilis Screening in Laboratory Practice: A Survey of Participants in the College of American Pathologists Syphilis Serology Proficiency Testing Program.

    PubMed

    Rhoads, Daniel D; Genzen, Jonathan R; Bashleben, Christine P; Faix, James D; Ansari, M Qasim

    2017-01-01

    Syphilis serology screening in laboratory practice is evolving. Traditionally, the syphilis screening algorithm begins with a nontreponemal immunoassay, which is manually performed by a laboratory technologist. In contrast, the reverse algorithm begins with a treponemal immunoassay, which can be automated. The Centers for Disease Control and Prevention has recognized both approaches, but little is known about the current state of laboratory practice, which could impact test utilization and interpretation. To assess the current state of laboratory practice for syphilis serologic screening, a voluntary questionnaire was sent in August 2015 to the 2360 laboratories that subscribe to the College of American Pathologists syphilis serology proficiency survey. Of the laboratories surveyed, 98% (2316 of 2360) returned the questionnaire, and about 83% (1911 of 2316) responded to at least some questions. Twenty-eight percent (378 of 1364) reported revision of their syphilis screening algorithm within the past 2 years, and 9% (170 of 1905) of laboratories anticipated changing their screening algorithm in the coming year. Sixty-three percent (1205 of 1911) reported using the traditional algorithm, 16% (304 of 1911) reported using the reverse algorithm, and 2.5% (47 of 1911) reported using both algorithms, whereas 9% (169 of 1911) reported not performing a reflex confirmation test. Of those performing the reverse algorithm, 74% (282 of 380) implemented a new testing platform when introducing the new algorithm. The majority of laboratories still perform the traditional algorithm, but a significant minority have implemented the reverse-screening algorithm. Although the nontreponemal immunologic response typically wanes after cure and becomes undetectable, treponemal immunoassays typically remain positive for life, and it is important for laboratorians and clinicians to consider these assay differences when implementing, using, and interpreting serologic syphilis screening algorithms.

  15. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) and a homogeneous fleet is considered in designing a multi-echelon, capacitated reverse logistics network, a situation that may arise in many real-life logistics management settings. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. We present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in GAMS software for producing Pareto-optimal solutions to a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II works better than the ɛ-constraint method.
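    The core operation NSGA-II adds over a plain genetic algorithm is non-dominated sorting. A sketch of the first Pareto front for a bi-objective minimization (say, cost versus service time) on made-up points:

```python
# Pareto-front extraction sketch for bi-objective minimization: a point is
# kept only if no other point is at least as good in both objectives and
# strictly better in one. Points are illustrative.

def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
print(pareto_front(pts))  # the non-dominated (first-front) solutions
```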

  16. Transition Marshall Space Flight Center Wind Profiler Splicing Algorithm to Launch Services Program Upper Winds Tool

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2014-01-01

    NASA's LSP customers and the future SLS program rely on observations of upper-level winds for steering, loads, and trajectory calculations for the launch vehicle's flight. On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds and provide forecasts to the launch team via the AMU-developed LSP Upper Winds tool for launches at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station. This tool displays wind speed and direction profiles from rawinsondes released during launch operations, from the 45th Space Wing (45 SW) 915-MHz Doppler Radar Wind Profilers (DRWPs) and the KSC 50-MHz DRWP, and from numerical weather prediction models. The goal of this task was to splice the wind speed and direction profiles from the 45 SW 915-MHz DRWPs and the KSC 50-MHz DRWP at altitudes where the profiles overlap, creating a smooth profile. In the first version of the LSP Upper Winds tool, the top of the 915-MHz DRWP wind profile and the bottom of the 50-MHz DRWP profile were not spliced, sometimes creating a discontinuity in the profile. The Marshall Space Flight Center (MSFC) Natural Environments Branch (NE) created algorithms to splice the wind profiles from the two sensors and generate an archive of vertically complete wind profiles for the SLS program. The AMU worked with MSFC NE personnel to implement these algorithms in the LSP Upper Winds tool to provide a continuous spliced wind profile. The AMU transitioned the MSFC NE algorithms to interpolate and fill gaps in the data, implemented a Gaussian weighting function to produce 50-m altitude intervals for each sensor, and spliced the data from both DRWPs together. They did so by porting the MSFC NE code, written in MATLAB, into Microsoft Excel Visual Basic for Applications (VBA).
After testing the new algorithms in stand-alone VBA modules, the AMU replaced the existing VBA code in the LSP Upper Winds tool with the new algorithms. They then tested the code in the LSP Upper Winds tool with archived data. The tool will be delivered to the 45 WS after the 50-MHz DRWP upgrade is complete and the tool is tested with real-time data. The 50-MHz DRWP upgrade is expected to be finished in October 2014.
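    The splicing step described above can be sketched as follows, assuming synthetic profile data and illustrative parameter choices (a 100-m Gaussian width and a linear blend across the overlap); the MSFC NE algorithm's actual weighting and quality-control details are not reproduced here.

```python
import numpy as np

def gaussian_resample(alts, speeds, grid, sigma=100.0):
    """Resample an irregular wind profile onto a regular altitude grid
    using Gaussian distance weights (sigma in meters, an assumed value)."""
    out = np.empty(len(grid))
    for i, z in enumerate(grid):
        w = np.exp(-0.5 * ((alts - z) / sigma) ** 2)
        out[i] = np.sum(w * speeds) / np.sum(w)
    return out

def splice(z_lo, v_lo, z_hi, v_hi, step=50.0, sigma=100.0):
    """Resample both profiles to a common 50-m grid and blend linearly
    across the overlap [min(z_hi), max(z_lo)] to avoid a discontinuity."""
    z_lo, v_lo = np.asarray(z_lo, float), np.asarray(v_lo, float)
    z_hi, v_hi = np.asarray(z_hi, float), np.asarray(v_hi, float)
    grid = np.arange(z_lo.min(), z_hi.max() + step, step)
    lo = gaussian_resample(z_lo, v_lo, grid, sigma)
    hi = gaussian_resample(z_hi, v_hi, grid, sigma)
    bot_hi, top_lo = z_hi.min(), z_lo.max()   # the profiles must overlap
    frac = np.clip((grid - bot_hi) / (top_lo - bot_hi), 0.0, 1.0)
    return grid, (1.0 - frac) * lo + frac * hi
```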

  17. Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Jensen, David; Poll, Scott

    2009-01-01

    Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support for performing large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to the effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS) developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis for the testing of diagnostic software and is used to provide performance assessments for different diagnostic algorithms.

  18. Classification of posture maintenance data with fuzzy clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1992-01-01

    Sensory inputs from the visual, vestibular, and proprioceptive systems are integrated by the central nervous system to maintain postural equilibrium. Sustained exposure to microgravity causes neurosensory adaptation during spaceflight, which results in decreased postural stability until readaptation occurs upon return to the terrestrial environment. Data which simulate sensory inputs under various sensory organization test (SOT) conditions were collected in conjunction with Johnson Space Center postural control studies using a tilt-translation device (TTD). The University of West Florida applied the fuzzy c-means (FCM) clustering algorithms to these data with a view toward identifying the various states and stages of subjects experiencing such changes. Feature analysis, time step analysis, pooling data, response of the subjects, and the algorithms used are discussed.
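    A minimal NumPy sketch of the standard fuzzy c-means iteration (alternating membership and center updates) illustrates the kind of algorithm applied to the posture data; the cluster count, fuzzifier m, and toy data below are assumptions, not the study's settings.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: returns (centers, U) where U[i, k] is the degree of
    membership of sample i in cluster k (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m                                    # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                  # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```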

  19. Fast decoder for local quantum codes using Groebner basis

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2013-03-01

    Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension, using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n^2 log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.

  20. Birefringence dispersion compensation demodulation algorithm for polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Wu, Fan

    2013-08-15

    A demodulation algorithm based on the birefringence dispersion characteristics of a polarized low-coherence interferometer is proposed. With the birefringence dispersion parameter taken into account, a mathematical model of the polarized low-coherence interference fringes is established and used to extract the phase shift between the measured coherence envelope center and the zero-order fringe, which eliminates the interferometric 2π ambiguity in locating the zero-order fringe. A pressure measurement experiment using an optical fiber Fabry-Perot pressure sensor was carried out to verify the effectiveness of the proposed algorithm. The experimental results showed a demodulation precision of 0.077 kPa over a range of 210 kPa, a 23-fold improvement over the traditional envelope detection method.

  1. Parameter Estimation for a Hybrid Adaptive Flight Controller

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje

    2009-01-01

    This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper focuses almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper presents results that, using a full nonlinear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique is provided, along with results from seven well-known modifications to the popular RLS algorithm.
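    The RLS identification step can be sketched as follows; this is the textbook exponentially weighted form, not any of the seven modifications discussed in the paper, and the forgetting factor and initial covariance are illustrative choices.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook exponentially weighted RLS for y = phi . theta + noise."""
    def __init__(self, n_params, forgetting=0.99, p0=1e4):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = np.eye(n_params) * p0         # covariance-like matrix (large = uncertain)
        self.lam = forgetting

    def update(self, phi, y):
        """Fold in one regressor/measurement pair and return the new estimate."""
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)     # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)  # innovation correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```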

  2. Machine vision guided sensor positioning system for leaf temperature assessment

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed using a camera equipped with a computer-controlled zoom lens. The methodology improved depth-recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find the maximum enclosed circle on a leaf surface so the conical field of view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor's 3-D location for accurate plant temperature measurement.
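    The maximum-enclosed-circle step can be illustrated with a brute-force sketch on a small synthetic binary leaf mask; a production system would typically use a distance transform instead, and nothing here reproduces the paper's actual implementation.

```python
import numpy as np

def largest_inscribed_circle(mask):
    """Brute-force maximum enclosed circle in a binary region (True = leaf).
    For every foreground pixel, find the distance to the nearest background
    pixel; the pixel with the largest such distance is the circle center,
    and that distance is the radius."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    d2 = ((fg[:, None, :] - bg[None, :, :]) ** 2).sum(axis=2)
    d = np.sqrt(d2.min(axis=1))
    i = int(d.argmax())
    return int(fg[i][0]), int(fg[i][1]), float(d[i])
```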

  3. High Energy Neutrino Physics with NOvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coan, Thomas

    2016-09-09

    Knowledge of the position of energy deposition in “hit” detector cells of the NOvA neutrino detector is required by the pattern-reconstruction and particle-identification algorithms necessary to interpret the raw data. To increase the accuracy of this process, the shapes of the majority of NOvA's 350,000 far detector cells, including distortions, were measured as they were constructed. Using a special laser scanning system installed at the site of the NOvA far detector in Ash River, MN, we completed algorithmic development and measured shape parameters for the far detector. The algorithm and the measurements are “published” in NOνA’s document database (doc #10389, “Cell Center Finder for the NOνA Far Detector Modules”).

  4. Gearbox vibration diagnostic analyzer

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes the Gearbox Vibration Diagnostic Analyzer installed in the NASA Lewis Research Center's 500 HP Helicopter Transmission Test Stand to monitor gearbox testing. The vibration of the gearbox is analyzed using diagnostic algorithms to calculate a parameter indicating damaged components.

  5. RECOVERY ACT: DYNAMIC ENERGY CONSUMPTION MANAGEMENT OF ROUTING TELECOM AND DATA CENTERS THROUGH REAL-TIME OPTIMAL CONTROL (RTOC): Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ron Moon

    This final scientific report documents the Industrial Technology Program (ITP) Stage 2 Concept Development effort on Data Center Energy Reduction and Management Through Real-Time Optimal Control (RTOC). Society is becoming increasingly dependent on information technology systems, driving exponential growth in demand for data center processing and an insatiable appetite for energy. David Raths noted, 'A 50,000-square-foot data center uses approximately 4 megawatts of power, or the equivalent of 57 barrels of oil a day.' The problem has become so severe that in some cases, users are giving up raw performance for a better balance between performance and energy efficiency. Historically, power systems for data centers were crudely sized to meet maximum demand. Since many servers operate at 60%-90% of maximum power while only utilizing an average of 5% to 15% of their capability, there are huge inefficiencies in the consumption and delivery of power in these data centers. The goal of the 'Recovery Act: Decreasing Data Center Energy Use through Network and Infrastructure Control' is to develop a state-of-the-art approach for autonomously and intelligently reducing and managing data center power through real-time optimal control. Advances in microelectronics and software are enabling the opportunity to realize significant data center power savings through the implementation of autonomous power management control algorithms. The first step to realizing these savings was addressed in this study through the successful creation of a flexible and scalable mathematical model (equation) for data center behavior and the formulation of an acceptable low-technical-risk market introduction strategy leveraging commercial hardware and software familiar to the data center market.
Follow-on Stage 3 Concept Development efforts include predictive modeling and simulation of algorithm performance, prototype demonstrations with representative data center equipment to verify requisite performance and continued commercial partnering agreement formation to ensure uninterrupted development, and deployment of the real-time optimal control algorithm. As a software implementable technique for reducing power consumption, the RTOC has two very desirable traits supporting rapid prototyping and ultimately widespread dissemination. First, very little capital is required for implementation. No major infrastructure modifications are required and there is no need to purchase expensive capital equipment. Second, the RTOC can be rolled out incrementally. Therefore, the effectiveness can be proven without a large scale initial roll out. Through the use of the Impact Projections Model provided by the DOE, monetary savings in excess of $100M in 2020 and billions by 2040 are predicted. In terms of energy savings, the model predicts a primary energy displacement of 260 trillion BTUs (33 trillion kWh), or a 50% reduction in server power consumption. The model also predicts a corresponding reduction of pollutants such as SO2 and NOx in excess of 100,000 metric tonnes assuming the RTOC is fully deployed. While additional development and prototyping is required to validate these predictions, the relative low cost and ease of implementation compared to large capital projects makes it an ideal candidate for further investigation.

  6. Index Theory-Based Algorithm for the Gradiometer Inverse Problem

    DTIC Science & Technology

    2015-03-28

    greatest distance from the center of mass to an equipotential surface occurs when the generating mass of the admissible potential is from two equal point...point on an equipotential surface to the center of mass occurs when the generating mass is contained in an equatorial great circle with the closest...false, it still has practical utility for our purposes. One can also define DC in any Tangent Plane (TP) to the equipotential surface normal to the

  7. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2012-05-01

    UPGMA ) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile, 3) In the DA top 50%-tile, selected probe sets...GeneMaths XT following row mean centering of log2 trans- formed MAS5.0 signal values; probe set cluster- ing was performed by the UPGMA method using...hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left

  8. Expedient Gap Definition Using 3D LADAR

    DTIC Science & Technology

    2006-09-01

    Research and Development Center (ERDC), ASI has developed an algorithm to reduce the 3D point cloud acquired with the LADAR system into sets of 2D...ATO IV.GC.2004.02. The GAP Program is conducted by the U.S. Army Engineer Research and Development Center (ERDC) in conjunction with the U.S. Army...Introduction 1 1 Introduction Background The Battlespace Gap Definition and Defeat ( GAP ) Program is conducted by the U.S. Army Engineer Research and

  9. Future applications of artificial intelligence to Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Friedland, Peter

    1991-01-01

    Future applications of artificial intelligence to Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; in-house research program; constraint-based scheduling; learning and performance improvement for scheduling; GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.

  10. SU-E-T-24: Development and Implementation of an Automated Algorithm to Determine Radiation Isocenter, Radiation vs. Light Field Coincidence, and Analyze Strip Tests.

    PubMed

    Hyer, D; Mart, C

    2012-06-01

    The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of the radiation isocenter using the Electronic Portal Imaging Device (EPID). The phantom could then be used as a static reference point for performing other tests, including radiation vs. light field coincidence, MLC and jaw strip tests, and Varian Optical Guidance Platform (OGP) calibration. The proposed solution uses a collimator setting of 10×10 cm to acquire EPID images of the new phantom, constructed from LEGO® blocks. Images from a number of gantry and collimator angles are analyzed by the software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180-degree collimator rotation to determine whether the phantom is centered in the dimension being investigated. The accuracy of the algorithm's measurements was verified by independent measurement to be approximately equal to the detector's pitch. Light vs. radiation field as well as MLC and jaw strip tests are performed using measurements based on the phantom center once it is located at the radiation isocenter. Reproducibility tests show that the algorithm's results are objectively repeatable. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two major linac manufacturers. An OGP calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at the radiation isocenter, reducing the setup uncertainty contained in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment, OGP calibration, and other tests on a monthly basis. © 2012 American Association of Physicists in Medicine.

  11. The influence of center of rotation on the assessment of trabecular bone densitometric and structural properties.

    PubMed

    Sheng, Zhi-Feng; Dai, Ru-Chun; Wu, Xian-Ping; Ma, Yu-Lin; Xu, Kang; Zhang, Yu-Hai; Jiang, Ye-Bin; Liao, Er-Yuan

    2008-12-01

    The center of rotation is a physical location in the microCT scanner, defined by the axis of rotation of the sample stage. This physical location is always well defined during calibration of the instrument and fitted by an appropriate algorithm. However, in real images of limited contrast and with X-ray photon noise, this algorithm exhibits poorer precision, and the optimum center of rotation cannot always be acquired. Thus, adjustment by the operator is necessary to determine whether the center of rotation is correct, so that the structural information of the sample can be correctly interpreted. In this paper, the effect of the center of rotation on the assessment of densitometric and structural properties of trabecular bone was first evaluated. Twenty 7-month-old female Sprague-Dawley rats were randomly assigned to ovariectomized (OVX) and SHAM-operated (SHAM) groups. The left tibiae were harvested at 3 weeks postoperatively. High-resolution microCT was used to identify the densitometric and microstructural properties of trabeculae in the proximal ends of the tibia. After CT scanning, the best artificial center of rotation for each scan was obtained. Bone parameter analyses were performed on centers displaced from the best artificial center by +/-0.2, +/-0.5, +/-1.0, +/-1.5, and +/-2.0 pixels, respectively. The general linear model (GLM) repeated measures procedure was used to investigate the differences in the parameters between the two groups (OVX vs. SHAM) and the possible effects of center displacements. A significant difference between the OVX and SHAM groups was found in all parameters (p < 0.05) except Tb.Th, DA, and BS/BV. tBMD, DA, BS/BV, and Conn.D decreased while BV/TV and Tb.Th increased with the center deflection. Variations of these parameters were acceptable when the displacements were limited within +/-1.5 pixels for tBMD, BV/TV, DA, and Conn.D, and +/-1.0 pixels for Tb.Th and BS/BV.
These changes were similar in both the OVX and SHAM groups. The curves of bone parameters vs. center position could be well fitted by quadratic regression models, by which the real center could be acquired and thus the precision of microCT analysis improved. There were some inevitable differences between the best artificial and real centers.

  12. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were applied, respectively, to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
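    The range-smoothing idea can be sketched with a scalar Kalman filter that treats the range as a slowly varying constant; the noise variances here are assumed values, and the system's actual TDOA state model is more elaborate than this.

```python
def kalman_smooth(ranges, q=1e-3, r=0.25):
    """Scalar Kalman filter treating the range as a slowly varying constant.
    q and r (process and measurement noise variances) are assumed values."""
    x, p = ranges[0], 1.0
    out = [x]
    for z in ranges[1:]:
        p += q                    # predict: state held, uncertainty grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the new range measurement
        p *= 1.0 - k
        out.append(x)
    return out
```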

  13. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    NASA Astrophysics Data System (ADS)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually meet at large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLN recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based character center-point computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, a straight line is fitted to the computed center-points with an M-estimator algorithm using an L1-L2 distance; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to correct the input SLN horizontally. The proposed method was tested on 200 tilted SLN images and proved effective, with a tilt correction rate of 80.5%.
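    Steps two and three can be sketched with an ordinary least-squares line fit and a rotation about a reference point; the paper uses an M-estimator with an L1-L2 distance for robustness to outlier center-points, which is omitted here for brevity.

```python
import math

def tilt_angle(centers):
    """Tilt (degrees) of the least-squares line through character center-points."""
    n = len(centers)
    mx = sum(x for x, _ in centers) / n
    my = sum(y for _, y in centers) / n
    sxy = sum((x - mx) * (y - my) for x, y in centers)
    sxx = sum((x - mx) ** 2 for x, _ in centers)
    return math.degrees(math.atan2(sxy, sxx))   # slope of the fitted line

def rotate(points, angle_deg, cx=0.0, cy=0.0):
    """Rotate points by -angle_deg about (cx, cy), leveling the fitted line."""
    a = math.radians(-angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    return [(cx + ca * (x - cx) - sa * (y - cy),
             cy + sa * (x - cx) + ca * (y - cy)) for x, y in points]
```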

  14. Quantitative analysis of cell columns in the cerebral cortex.

    PubMed

    Buxhoeveden, D P; Switala, A E; Roy, E; Casanova, M F

    2000-04-01

    We present a quantified imaging method that describes the cell column in mammalian cortex. The minicolumn is an ideal template with which to examine cortical organization because it is a basic unit of function, complete in itself, which interacts with adjacent and distant columns to form more complex levels of organization. The subtle details of columnar anatomy should reflect physiological changes that have occurred in evolution as well as those that might be caused by pathologies in the brain. In this semiautomatic method, images of Nissl-stained tissue are digitized or scanned into a computer imaging system. The software detects the presence of cell columns and describes details of their morphology and of the surrounding space. Columns are detected automatically on the basis of cell-poor and cell-rich areas using a Gaussian distribution. A line is fit to the cell centers by least squares analysis. The line becomes the center of the column, from which the precise location of every cell can be measured. On this basis several algorithms describe the distribution of cells from the center line and in relation to the available surrounding space. Other algorithms use cluster analyses to determine the spatial orientation of every column.

  15. A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming

    NASA Astrophysics Data System (ADS)

    Sahin, Mehmet; Dilek, Ezgi

    2017-11-01

    A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.

  16. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things.

    PubMed

    Yi, Meng; Chen, Qingkui; Xiong, Neal N

    2016-11-03

    This paper considers the distributed access and control problem of a massive wireless sensor network data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal group migration scheduling algorithm based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes: it effectively enhances the accessibility of service requests, reduces network delay, and achieves higher load-balancing capacity and resource utilization.

  17. A robust recognition and accurate locating method for circular coded diagonal target

    NASA Astrophysics Data System (ADS)

    Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin

    2017-10-01

    As a category of special control points which can be automatically identified, artificial coded targets have been widely developed in the fields of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd, called the circular coded diagonal target (CCDT), is analyzed and studied. A novel detection and recognition method with good robustness is proposed and implemented in Visual Studio. In this algorithm, firstly, the ellipse features of the center circle are used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminate non-target noise. The precise positioning of the coded target is achieved by the correlation coefficient fitting extreme value method. Finally, the coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, this paper carried out simulation experiments and real experiments. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate targets in complex and noisy backgrounds.

  18. Angular description for 3D scattering centers

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Raynal, Ann Marie; Ling, Hao; Moore, John; Velten, Vincent J.

    2006-05-01

    The electromagnetic scattered field from an electrically large target can often be well modeled as if it is emanating from a discrete set of scattering centers (see Fig. 1). In the scattering center extraction tool we developed previously, based on the shooting and bouncing ray technique, no correspondence is maintained among the 3D scattering centers extracted at adjacent angles. In this paper we present a multi-dimensional clustering algorithm to track the angular and spatial behaviors of 3D scattering centers and group them into features. The extracted features for the Slicy and backhoe targets are presented. We also describe two metrics for measuring the angular persistence and spatial mobility of the 3D scattering centers that make up these features, in order to gather insights into target physics and feature stability. We find that the features that are most persistent are also the most mobile, and we discuss the implications for optimal SAR imaging.

  19. Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection.

    PubMed

    Barreto, A M E C; Takei, K; E C, Sabino; Bellesa, M A O; Salles, N A; Barreto, C C; Nishiya, A S; Chamone, D F

    2008-02-01

    We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional one, to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using the s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated costs of algorithms A and B were US$21,299.39 and US$32,397.40, respectively, making them 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.

  20. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    NASA Astrophysics Data System (ADS)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    Optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. In particular, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of permutation parity faster than a classical algorithm, without requiring entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  1. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and that, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
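
    As one illustration of a prognostics-tailored metric in this spirit, an accuracy check can ask whether a predicted RUL falls within a relative error band around the true RUL at a given point in the unit's life (a simplified sketch; the parameter names and the 20% band are assumptions, not the paper's exact definitions):

```python
def alpha_lambda_accurate(true_rul, predicted_rul, alpha=0.2):
    """True if the predicted RUL lies within +/- alpha (relative)
    of the true RUL at the evaluation time."""
    return (1 - alpha) * true_rul <= predicted_rul <= (1 + alpha) * true_rul

inside = alpha_lambda_accurate(100, 90)    # within the 80..120 band
outside = alpha_lambda_accurate(100, 130)  # misses the band
```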

  2. NOVA: A new multi-level logic simulator

    NASA Technical Reports Server (NTRS)

    Miles, L.; Prins, P.; Cameron, K.; Shovic, J.

    1990-01-01

    A new logic simulator developed at the NASA Space Engineering Research Center for VLSI Design is described. The simulator is multi-level, being able to simulate from the switch level through the functional model level. NOVA is currently in the Beta test phase and has been used to simulate chips designed for the NASA Space Station and the Explorer missions. A new algorithm was devised to simulate bi-directional pass transistors, and a preliminary version of the algorithm is presented. The usage of functional models in NOVA is also described, and performance figures are presented.

  3. A dual method for optimal control problems with initial and final boundary constraints.

    NASA Technical Reports Server (NTRS)

    Pironneau, O.; Polak, E.

    1973-01-01

    This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.

  4. Algorithm for the prophylaxis of septic complications in orthopedics and traumatology of locomotor system at the Department of Orthopedics at the Postgraduate Medical Education Center in Otwock.

    PubMed

    Białecki, Jerzy; Brychcy, Adrian; Marczyński, Wojciech Józef

    2013-09-18

    The current state of knowledge on septic complications after procedures involving orthopaedic device implantation, with particular consideration of THA and TKA, is presented in the paper. The phenomenon of implant biocompatibility, as well as the systemic reaction to the implant's presence, is also discussed. The algorithm for the prophylaxis of septic complications followed in the Orthopaedic Department of the PMEC in Otwock is introduced. The article is based on the clinical observations of the Orthopaedic Department's PMEC team.

  5. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  6. Automatic brightness control of laser spot vision inspection system

    NASA Astrophysics Data System (ADS)

    Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2009-10-01

    The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends strongly on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed using an FPGA. The brightness is controlled by a combination of an auto aperture (video driver) and an adaptive exposure algorithm, and clear images with proper exposure are obtained under different illumination conditions. The automatic brightness control system creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results demonstrate that the measurement accuracy of the system is effectively guaranteed. The average error of the spot center is within 0.5 mm.
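
    An adaptive exposure loop of the kind described can be sketched as a simple proportional controller that nudges exposure toward a target mean brightness (an illustrative sketch only; the gains, target, and simulated sensor are assumptions, not the FPGA implementation):

```python
def adjust_exposure(mean_brightness, exposure, target=128.0, gain=0.1,
                    min_exp=1.0, max_exp=1000.0):
    """Proportional exposure update driven by the brightness error."""
    error = target - mean_brightness
    exposure += gain * error * exposure / target
    return min(max(exposure, min_exp), max_exp)

def sensor(exposure, scene_gain=0.8):
    """Toy sensor: brightness proportional to exposure, saturating at 255."""
    return min(255.0, scene_gain * exposure)

# Closed loop: exposure settles where sensor output hits the target,
# i.e. near target / scene_gain = 160 for this toy scene.
exposure = 50.0
for _ in range(200):
    exposure = adjust_exposure(sensor(exposure), exposure)
```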

  7. Wake Vortex Tangential Velocity Adaptive Spectral (TVAS) algorithm for pulsed Lidar systems.

    DOT National Transportation Integrated Search

    2011-06-20

    In 2008 the FAA tasked the Volpe Center with the development of a government owned processing package capable of performing wake detection, characterization and tracking. : The current paper presents the background, progress, and capabilities to date...

  8. Community Preparedness: Creating a Model for Change

    DTIC Science & Technology

    2010-03-01

    Transtheoretical Stages of Change (after Cancer Prevention Research Center, 2010...100 Figure 32. TTM Staging Algorithm for Adult Smoking (from University of Rhode Island Cancer ...National Cancer Institute, 2005)..................59 Table 2. Concepts in Diffusion of Innovations (from National Cancer Institute, 2005

  9. Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi- Instrument Satellite Techniques

    NASA Astrophysics Data System (ADS)

    Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.

    2012-04-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by NFC over the Nile Basin. The algorithm uses a set of rainfall predictors that come from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands that come from the more accurate, but less frequent, microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have recently become available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources, such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression.
    We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients from newly arriving MW rain rates, and calibration using static coefficients derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability using global circulation models and regional climate models.
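
    The two-step structure described (rain/no-rain separation followed by rain-rate regression) can be illustrated with a toy one-predictor version (all data, thresholds, and fitting choices below are synthetic assumptions, not SCaMPR's actual discriminant analysis or stepwise regression):

```python
def fit_threshold(raining, dry):
    """Midpoint of the class means: a 1-D stand-in for the
    rain/no-rain discriminant boundary."""
    return (sum(raining) / len(raining) + sum(dry) / len(dry)) / 2.0

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (the regression step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic calibration: colder IR cloud tops (lower brightness
# temperature Tb) correspond to higher MW rain rates.
raining_tb = [200.0, 210.0, 220.0, 230.0]
raining_rr = [10.0, 8.0, 6.0, 4.0]
dry_tb = [270.0, 280.0, 290.0]

tb_cut = fit_threshold(raining_tb, dry_tb)
a, b = fit_linear(raining_tb, raining_rr)

def estimate_rain(tb):
    if tb >= tb_cut:                 # step 1: rain/no-rain separation
        return 0.0
    return max(0.0, a + b * tb)      # step 2: rain-rate regression
```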

  10. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  11. AATSR Based Volcanic Ash Plume Top Height Estimation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Sundstrom, Anu-Maija; Rodriguez, Edith; de Leeuw, Gerrit

    2015-11-01

    The AATSR Correlation Method (ACM) height estimation algorithm is presented. The algorithm uses Advanced Along Track Scanning Radiometer (AATSR) satellite data to detect volcanic ash plumes and to estimate the plume top height. The height estimate is based on the stereo-viewing capability of the AATSR instrument, which allows determination of the parallax between the satellite's nadir and 55° forward views, and thus the corresponding height. AATSR provides an advantage compared to other stereo-view satellite instruments: with AATSR it is possible to detect ash plumes using the brightness temperature difference between thermal infrared (TIR) channels centered at 11 and 12 μm. The automatic ash detection makes the algorithm efficient in processing large quantities of data: the height estimate is calculated only for the ash-flagged pixels. Besides ash plumes, the algorithm can be applied to any elevated feature with sufficient contrast to the background, such as smoke and dust plumes and clouds. The ACM algorithm can also be applied to the Sea and Land Surface Temperature Radiometer (SLSTR), scheduled for launch at the end of 2015.
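
    The underlying stereo geometry can be sketched in one line: a feature at height h seen at nadir and again at a forward angle θ from vertical is displaced along-track by roughly h·tan(θ), so h ≈ parallax / tan(θ). A minimal illustration of that geometric idea (not the ACM correlation algorithm itself):

```python
import math

def height_from_parallax(parallax_m, forward_view_deg=55.0):
    """Feature height (m) from the along-track parallax (m) between
    the nadir and forward views."""
    return parallax_m / math.tan(math.radians(forward_view_deg))

# ~1428 m of apparent along-track displacement at a 55-degree forward
# view corresponds to a feature roughly 1 km high.
h = height_from_parallax(1428.1)
```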

  12. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE PAGES

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...

    2017-08-29

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
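
    For context, the textbook baseline such algorithms accelerate is progressive filling: raise all active flows' rates equally until some link saturates, freeze the flows on saturated links, and repeat. A generic, topology-agnostic sketch (illustrative; the paper's contribution is avoiding exactly this general computation by exploiting fat-tree structure):

```python
def max_min_fair(link_capacity, flows_on_link):
    """Progressive filling. link_capacity: {link: capacity};
    flows_on_link: {link: set of flow ids}. Returns {flow: rate}."""
    all_flows = set().union(*flows_on_link.values())
    rate = {f: 0.0 for f in all_flows}
    cap = dict(link_capacity)
    active = set(all_flows)
    while active:
        constrained = [l for l in cap if flows_on_link[l] & active]
        if not constrained:
            break                      # remaining flows are unconstrained
        # The tightest link sets the common rate increment.
        delta = min(cap[l] / len(flows_on_link[l] & active)
                    for l in constrained)
        for f in active:
            rate[f] += delta
        saturated = set()
        for l in constrained:
            users = flows_on_link[l] & active
            cap[l] -= delta * len(users)
            if cap[l] <= 1e-12:
                saturated |= users     # flows on a full link are frozen
                del cap[l]
        active -= saturated
    return rate

# Flows f1, f2 share a 1-unit link; f2, f3 share a 2-unit link:
# f1 and f2 are bottlenecked at 0.5 each, f3 takes the remaining 1.5.
rates = max_min_fair({"A": 1.0, "B": 2.0},
                     {"A": {"f1", "f2"}, "B": {"f2", "f3"}})
```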

  13. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi

    There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smart-phones. These algorithms are in sharp contrast to complex network algorithms that necessitate all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic-based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of the individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
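
    The K-out-of-N fusion rule and its false-alarm behavior can be sketched directly: the fusion center alarms when at least K of the N hard decisions fire, and for independent sensors the system false-alarm rate is a binomial tail (an illustrative sketch; the per-sensor rate below is an assumed value):

```python
from math import comb

def fuse(decisions, k):
    """Fusion-center rule: alarm iff at least k of the hard decisions fire."""
    return int(sum(decisions) >= k)

def system_pfa(n, k, pfa):
    """P(at least k of n independent sensors false-alarm): binomial tail."""
    return sum(comb(n, j) * pfa ** j * (1 - pfa) ** (n - j)
               for j in range(k, n + 1))

# 2-out-of-3 fusion drives a 5% per-sensor false-alarm rate down to ~0.7%.
p = system_pfa(3, 2, 0.05)
alarm = fuse([1, 0, 1], k=2)
```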

  15. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  16. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
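
    The Laguerre-cell step can be illustrated via the power distance: a sphere with center c and radius r assigns to a point x the value |x − c|² − r², and x belongs to the cell of the sphere minimizing it, which is how larger spheres yield proportionally larger cells. A brute-force point classifier (illustrative only; practical codes construct the diagram combinatorially):

```python
def power_distance(x, center, radius):
    """Laguerre (power) distance |x - c|^2 - r^2 of point x to a sphere."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, center)) - radius ** 2

def laguerre_cell_of(x, spheres):
    """spheres: list of (center, radius); index of the cell owning x."""
    return min(range(len(spheres)),
               key=lambda i: power_distance(x, *spheres[i]))

# Two spheres on a line: the geometric midpoint would be a tie for
# equal radii, but here the larger sphere claims it.
spheres = [((0.0, 0.0, 0.0), 2.0), ((4.0, 0.0, 0.0), 1.0)]
owner = laguerre_cell_of((2.0, 0.0, 0.0), spheres)
```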

  17. Analyzing a 35-Year Hourly Data Record: Why So Difficult?

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    At the Goddard Distributed Active Archive Center, we have recently added a 35-year record of output data from the North American Land Data Assimilation System (NLDAS) to the Giovanni web-based analysis and visualization tool. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure) offers a variety of data summarization and visualization capabilities that operate at the data center, obviating the need for users to download and read the data themselves for exploratory data analysis. However, the NLDAS data has proven surprisingly resistant to the application of the summarization algorithms. Algorithms that were perfectly happy analyzing 15 years of daily satellite data encountered limitations at both the algorithm and system level for 35 years of hourly data. Failures arose, sometimes unexpectedly, from command line overflows, memory overflows, internal buffer overflows, and time-outs, among others. These serve as an early warning sign for the problems likely to be encountered by the general user community as they try to scale up to Big Data analytics. Indeed, it is likely that more users will seek to perform remote web-based analysis precisely to avoid these issues, or the need to reprogram around them. We will discuss approaches to mitigating the limitations and the implications for data systems serving user communities that try to scale up their current techniques to analyze Big Data.

  18. Measuring MERCI: exploring data mining techniques for examining the neurologic outcomes of stroke patients undergoing endo-vascular therapy at Erlanger Southeast Stroke Center.

    PubMed

    McNabb, Matthew; Cao, Yu; Devlin, Thomas; Baxter, Blaise; Thornton, Albert

    2012-01-01

    Mechanical Embolus Removal in Cerebral Ischemia (MERCI) has been supported by medical trials as an improved method of treating ischemic stroke past the safe window of time for administering clot-busting drugs, and was released for medical use in 2004. Analyzing real-world data collected from MERCI clinical trials is key to providing insights into the effectiveness of MERCI. Most of the existing data analysis on MERCI results has thus far employed conventional statistical techniques. To the best of our knowledge, advanced data analytics and data mining techniques have not yet been systematically applied. To address this issue, in this thesis we conduct a comprehensive study on employing state-of-the-art machine learning algorithms to generate prediction criteria for the outcomes of MERCI patients. Specifically, we investigate the issue of how to choose the most significant attributes of a data set with limited instance examples. We propose a few search algorithms to identify the significant attributes, followed by a thorough performance analysis for each algorithm. Finally, we apply our proposed approach to the real-world, de-identified patient data provided by Erlanger Southeast Regional Stroke Center, Chattanooga, TN. Our experimental results demonstrate that our proposed approach performs well.

  19. User guide for the digital control system of the NASA/Langley Research Center's 13-inch Magnetic Suspension and Balance System

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1987-01-01

    The technical background to the development of the digital control system of the NASA/Langley Research Center's 13-inch Magnetic Suspension and Balance System (MSBS) is reviewed. The implementation of traditional MSBS control algorithms in digital form is examined. Extensive details of the 13-inch MSBS digital controller and related hardware are given, together with introductory instructions for system operators. Full listings of the software are included in the appendices.

  20. Automatic control of solar power plants

    NASA Astrophysics Data System (ADS)

    Ermakov, V. S.; Dubilovich, V. M.

    1982-02-01

    The automatic control of the heliostat field of a 200-MW solar power plant is discussed. The advantages of the decentralized control principle with the solution of a number of individual problems in a single control center are emphasized. The basic requirements on heliostat construction are examined, and possible functional schemes for the automatic control of a heliostat field are described. It is proposed that groups of heliostats can be controlled from a single center and on the basis of a single algorithm.

  1. A low-power photovoltaic system with energy storage for radio communications: Description and design methodology

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.; Chapman, P. D.; Lewison, A. H.

    1982-01-01

    A low-power photovoltaic system was constructed with approximately 500 amp-hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous no-sun operation. Complete construction details and a design methodology algorithm are given, with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.
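
    The storage-sizing arithmetic implied by the description can be sketched as follows (the load, depth-of-discharge, and efficiency figures are illustrative assumptions, not values from the report; they are chosen so the example lands near the reported ~500 amp-hours):

```python
def battery_amp_hours(load_watts, hours, bus_voltage,
                      max_depth_of_discharge=0.8, efficiency=0.9):
    """Nameplate amp-hour capacity needed to carry a load through a
    given number of no-sun hours."""
    energy_wh = load_watts * hours / efficiency   # energy drawn from storage
    return energy_wh / (bus_voltage * max_depth_of_discharge)

# Hypothetical example: a 60 W station load for 72 h on a 12 V bus.
ah = battery_amp_hours(60.0, 72.0, 12.0)
```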

  2. Data-driven approach of CUSUM algorithm in temporal aberrant event detection using interactive web applications.

    PubMed

    Li, Ye; Whelan, Michael; Hobbs, Leigh; Fan, Wen Qi; Fung, Cecilia; Wong, Kenny; Marchand-Austin, Alex; Badiani, Tina; Johnson, Ian

    2016-06-27

    In 2014/2015, Public Health Ontario developed disease-specific, cumulative sum (CUSUM)-based statistical algorithms for detecting aberrant increases in reportable infectious disease incidence in Ontario. The objective of this study was to determine whether the prospective application of these CUSUM algorithms, based on historical patterns, offers improved specificity and sensitivity compared to the currently used Early Aberration Reporting System (EARS) algorithm, developed by the US Centers for Disease Control and Prevention. A total of seven algorithms were developed for the following diseases: cyclosporiasis, giardiasis, influenza (one each for type A and type B), mumps, pertussis, and invasive pneumococcal disease. Historical data were used as a baseline to assess known outbreaks. Regression models were used to model seasonality, and CUSUM was applied to the difference between observed and expected counts. An interactive web application was developed allowing program staff to directly interact with the data and tune the parameters of the CUSUM algorithms using their expertise on the epidemiology of each disease. Using these parameters, a CUSUM detection system was applied prospectively and the results were compared to the outputs generated by EARS. The outcomes were the detection of outbreaks, the identification of the start of a known seasonal increase, and the prediction of the peak in activity. The CUSUM algorithms detected provincial outbreaks earlier than the EARS algorithm, identified the start of the influenza season in advance of traditional methods, and had fewer false positive alerts. Additionally, having staff involved in the creation of the algorithms improved their understanding of the algorithms and their use in practice. Using interactive web-based technology to tune CUSUM improved the sensitivity and specificity of the detection algorithms.
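
    A minimal one-sided CUSUM of the kind described accumulates the excess of observed-minus-expected counts over a slack k and alarms when the sum crosses a threshold h (a sketch; the k and h values here are illustrative, whereas the paper tunes such parameters per disease through the interactive application):

```python
def cusum_alerts(observed, expected, k=0.5, h=4.0):
    """Indices where the upper CUSUM statistic exceeds threshold h."""
    s, alerts = 0.0, []
    for i, (o, e) in enumerate(zip(observed, expected)):
        s = max(0.0, s + (o - e) - k)   # accumulate excess over slack k
        if s > h:
            alerts.append(i)
            s = 0.0                     # reset after signaling
    return alerts

# A sustained bump over a flat baseline triggers alerts partway through it.
obs = [5, 5, 6, 9, 10, 11, 5, 5]
exp = [5.0] * 8
alerts = cusum_alerts(obs, exp)
```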

  3. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise-reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  4. Ab initio molecular simulations with numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias

    2009-11-01

    We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLAPACK-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.

  5. Optimizing urine drug testing for monitoring medication compliance in pain management.

    PubMed

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management. Clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs, and testing algorithms utilized in the urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center. We gathered data on test volumes, positivity rates, and the frequency of false positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs, and adulterant panel. In addition, the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were <1%, so we eliminated this testing from our panel. We also lowered the screening cutoff for cocaine to meet the clinical needs of the pain management center. In addition, we changed our testing algorithm for 6-acetylmorphine, benzodiazepines, and methadone. For example, due to the high rate of false negative results using our immunoassay-based benzodiazepine screen, we removed the screening portion of the algorithm and now perform benzodiazepine confirmation up front in all specimens by liquid chromatography-tandem mass spectrometry. Conducting an interdisciplinary quality improvement project allowed us to optimize our testing panel for monitoring medication compliance in pain management and reduce cost. Wiley Periodicals, Inc.

  6. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, thereby eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  7. Assessment of the French National Health Insurance Information System as a tool for epidemiological surveillance of malaria.

    PubMed

    Delon, François; Mayet, Aurélie; Thellier, Marc; Kendjo, Eric; Michel, Rémy; Ollivier, Lénaïck; Chatellier, Gilles; Desjeux, Guillaume

    2017-05-01

    Epidemiological surveillance of malaria in France is based on a hospital laboratory sentinel surveillance network. There is no comprehensive population surveillance. The objective of this study was to assess the ability of the French National Health Insurance Information System to support nationwide malaria surveillance in continental France. A case identification algorithm was built in a 2-step process. First, inclusion rules giving priority to sensitivity were defined. Then, based on data description, exclusion rules to increase specificity were applied. To validate our results, we compared them to data from the French National Reference Center for Malaria on case counts, distribution within subgroups, and disease onset date trends. We built a reusable automatized tool. From July 1, 2013, to June 30, 2014, we identified 4077 incident malaria cases that occurred in continental France. Our algorithm provided data for hospitalized patients, patients treated by private physicians, and outpatients for the entire population. Our results were similar to those of the National Reference Center for Malaria for each of the outcome criteria. We provided a reliable algorithm for implementing epidemiological surveillance of malaria based on the French National Health Insurance Information System. Our method allowed us to work on the entire population living in continental France, including subpopulations poorly covered by existing surveillance methods. Traditional epidemiological surveillance and the approach presented in this paper are complementary, but a formal validation framework for case identification algorithms is necessary. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
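    The two-step structure described above (broad, sensitivity-first inclusion rules followed by specificity-restoring exclusion rules) can be sketched as a simple claims filter. The rules below are hypothetical stand-ins: the ICD-10 malaria codes (B50-B54) and antimalarial drug names are real, but the paper's actual French NHI criteria are not reproduced here.

```python
# Sketch of a two-step claims-based case-identification filter
# (hypothetical rules; not the paper's actual inclusion/exclusion criteria).

def identify_cases(records):
    """Apply broad inclusion rules first (favoring sensitivity),
    then exclusion rules (restoring specificity)."""
    # Step 1: inclusion -- any malaria-specific signal keeps the record.
    included = [r for r in records
                if r.get("icd10", "").startswith("B5")      # malaria codes B50-B54
                or r.get("drug") in {"artemether-lumefantrine",
                                     "atovaquone-proguanil"}]
    # Step 2: exclusion -- drop likely chemoprophylaxis (drug alone, no diagnosis).
    return [r for r in included
            if not (r.get("drug") == "atovaquone-proguanil"
                    and not r.get("icd10"))]

records = [
    {"id": 1, "icd10": "B50.9", "drug": None},                     # diagnosed case
    {"id": 2, "icd10": "", "drug": "atovaquone-proguanil"},        # prophylaxis -> excluded
    {"id": 3, "icd10": "B54", "drug": "artemether-lumefantrine"},  # treated case
]
print([r["id"] for r in identify_cases(records)])  # -> [1, 3]
```

The order matters: casting a wide net first and pruning afterwards mirrors the paper's sensitivity-then-specificity design.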

  8. Materials Discovery | Photovoltaic Research | NREL

    Science.gov Websites

    and specialized analysis algorithms. The Center for Next Generation of Materials by Design (CNGMD) is incorporating metastable materials into predictive design and developing theory to guide materials synthesis, addressing accuracy and relevance, metastability, and synthesizability in computational materials design.

  9. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios, along with the number of files and bytes migrating to tertiary storage, were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
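    The trace-driven setup can be illustrated with a minimal LRU disk-cache simulator in which online compression is modeled as a fixed size-reduction ratio. The toy trace, cache size, and 2:1 ratio below are invented for illustration, and the paper's size-based migration policy is not shown.

```python
from collections import OrderedDict

# Minimal trace-driven LRU disk-cache simulator (illustrative only).
# Online compression is modeled by shrinking each file by a fixed ratio,
# which lets more files fit in the same cache and raises the hit ratio.

def simulate_lru(trace, cache_bytes, compression_ratio=1.0):
    cache = OrderedDict()          # file_id -> stored (compressed) size
    used, hits = 0.0, 0
    for file_id, size in trace:
        stored = size * compression_ratio
        if file_id in cache:
            hits += 1
            cache.move_to_end(file_id)                    # most-recently used
            continue
        while used + stored > cache_bytes and cache:      # evict LRU files
            _, evicted = cache.popitem(last=False)
            used -= evicted
        cache[file_id] = stored
        used += stored
    return hits / len(trace)

trace = [("a", 40), ("b", 40), ("c", 40), ("a", 40), ("b", 40)]
print(simulate_lru(trace, cache_bytes=100))                         # -> 0.0
print(simulate_lru(trace, cache_bytes=100, compression_ratio=0.5))  # -> 0.4
```

With 2:1 compression all three files fit at once, so the hit ratio on this toy trace rises from 0 to 0.4.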

  10. [Digital signal processing of a novel neuron discharge model stimulation strategy for cochlear implants].

    PubMed

    Yang, Yiwei; Xu, Yuejin; Miu, Jichang; Zhou, Linghong; Xiao, Zhongju

    2012-10-01

    To apply the classic leaky integrate-and-fire model, based on the mechanism of physiological auditory stimulus generation, to the information coding of cochlear implants to improve auditory outcomes. The results of algorithm simulation in a digital signal processor (DSP) were imported into Matlab for comparative analysis. Compared with CIS coding, the membrane potential integrate-and-fire (MPIF) algorithm allowed more natural pulse discharge in a pseudo-random manner that better fits the physiological structures. The MPIF algorithm can effectively solve the problem of the dynamic structure of the delivered auditory information sequence issued in the auditory center, and allows integration of the stimulating pulses and time coding to ensure the coherence and relevance of the stimulating pulse timing.
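    The leaky integrate-and-fire model that the MPIF strategy builds on can be sketched in a few lines. The parameters below are illustrative, not taken from the paper, and this is not the cochlear-implant coder itself.

```python
# Minimal leaky integrate-and-fire (LIF) neuron (illustrative parameters).

def lif_spike_times(current, t_end=0.1, dt=1e-4,
                    tau=0.01, r_m=1.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dV/dt = (-V + R*I)/tau; spike and reset at threshold."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt * (-v + r_m * current) / tau   # leaky integration step
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes.append(round(t, 4))
            v = v_reset                        # membrane reset after firing
        t += dt
    return spikes

print(len(lif_spike_times(current=2.0)))   # regular firing, ~14 spikes in 100 ms
print(lif_spike_times(current=0.5))        # subthreshold input: no spikes -> []
```

Suprathreshold input produces a regular spike train whose rate grows with the input; subthreshold input decays to a steady state below threshold and never fires, which is the nonlinearity the coding strategy exploits.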

  11. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
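    A Laplacian-eigenmap-style embedding conveys the core idea of minimizing the graph Laplacian over a similarity graph of waveforms; note that the actual GLF criterion additionally maximizes a weighted variance, which this numpy sketch (with synthetic two-cluster "spike" data) omits.

```python
import numpy as np

# Laplacian-eigenmap-style feature extraction as a stand-in for GLF
# (sketch only: the weighted-variance term of GLF is not included).

def laplacian_features(X, n_features=2, sigma=1.0):
    """Embed rows of X (one waveform per row) via the graph Laplacian."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    W = np.exp(-sq / (2 * sigma ** 2))                    # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                             # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    return vecs[:, 1:1 + n_features]                      # skip constant eigenvector

rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.1, (10, 5))   # two synthetic "spike" clusters
cluster_b = rng.normal(3.0, 0.1, (10, 5))
Y = laplacian_features(np.vstack([cluster_a, cluster_b]), sigma=2.0)
print(Y.shape)  # (20, 2); the clusters separate along the first column
```

On well-separated data the first nontrivial (Fiedler) eigenvector is nearly constant within each cluster with opposite signs, so even a simple clustering stage separates them cleanly.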

  12. Multitarget-multisensor management for decentralized sensor networks

    NASA Astrophysics Data System (ADS)

    Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Hernandez, M. L.

    2006-05-01

    In this paper, we consider the problem of sensor resource management in decentralized tracking systems. Due to the availability of cheap sensors, it is possible to use a large number of sensors and a few fusion centers (FCs) to monitor a large surveillance region. Even though a large number of sensors are available, due to frequency, power and other physical limitations, only a few of them can be active at any one time. The problem is then to select sensor subsets that should be used by each FC at each sampling time in order to optimize the tracking performance subject to their operational constraints. In a recent paper, we proposed an algorithm to handle the above issues for joint detection and tracking, without using simplistic clustering techniques that are standard in the literature. However, in that paper, a hierarchical architecture with feedback at every sampling time was considered, and the sensor management was performed only at a central fusion center (CFC). However, in general, it is not possible to communicate with the CFC at every sampling time, and in many cases there may not even be a CFC. Sometimes, communication between CFC and local fusion centers might fail as well. Therefore performing sensor management only at the CFC is not viable in most networks. In this paper, we consider an architecture in which there is no CFC, each FC communicates only with the neighboring FCs, and communications are restricted. In this case, each FC has to decide which sensors are to be used by itself at each measurement time step. We propose an efficient algorithm to handle the above problem in real time. Simulation results illustrating the performance of the proposed algorithm are also presented.
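    The per-FC selection step can be caricatured as a greedy subset choice under a budget on active sensors. The gain numbers and the crude 0.8 redundancy discount below are invented; the paper optimizes a tracking-performance objective under communication and power constraints rather than this toy score.

```python
# Greedy sketch of sensor-subset selection at one fusion center
# (hypothetical gains; not the paper's optimization objective).

def select_sensors(gains, max_active):
    """Pick up to max_active sensors by marginal gain with diminishing returns."""
    chosen, total = [], 0.0
    remaining = dict(gains)
    while remaining and len(chosen) < max_active:
        s = max(remaining, key=remaining.get)   # best marginal gain
        chosen.append(s)
        total += remaining.pop(s)
        # crude diminishing-returns model: discount the sensors left over
        remaining = {k: v * 0.8 for k, v in remaining.items()}
    return chosen, total

gains = {"s1": 0.9, "s2": 0.7, "s3": 0.6, "s4": 0.2}
print(select_sensors(gains, max_active=2))  # -> picks 's1' then 's2'
```

Greedy selection is a common practical surrogate because evaluating every sensor subset is combinatorial, which is exactly why real-time algorithms such as the one proposed here are needed.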

  13. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
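    The evaluate-and-compare idea behind HMM-based intent prediction can be sketched with the scaled forward algorithm: score one observation sequence under each trained task model and predict the task whose model explains it best. The two hand-made models and the observation sequence below are toys, not the trained tele-operation HMMs.

```python
import numpy as np

# Forward-algorithm sketch of HMM-based task prediction (toy models).

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | model) for discrete emissions."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate through transitions, then emit
        log_p += np.log(alpha.sum())    # accumulate scaling factors
        alpha = alpha / alpha.sum()
    return log_p

A = np.array([[0.9, 0.1], [0.1, 0.9]])  # shared 2-state transition matrix
pi = np.array([0.5, 0.5])
models = {
    "reach": (pi, A, np.array([[0.9, 0.1], [0.9, 0.1]])),  # emits mostly symbol 0
    "grasp": (pi, A, np.array([[0.1, 0.9], [0.1, 0.9]])),  # emits mostly symbol 1
}
obs = [0, 0, 1, 0, 0]                   # observed operator actions so far
best = max(models, key=lambda m: forward_log_likelihood(*models[m], obs))
print(best)  # -> reach
```

Rescaling alpha at every step and summing the log scale factors avoids the numerical underflow that plagues naive forward recursions on long sequences.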

  14. Hyperspectral anomaly detection using Sony PlayStation 3

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton; Romano, João; Sepulveda, Rene

    2009-05-01

    We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run, in near real time, a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting HS-based algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible. The PS3 HADA is a unique parallel, random-sampling-based anomaly detection approach that does not require prior spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament Research, Development and Engineering Center, which aims at demonstrating the performance of innovative algorithmic approaches for applications requiring autonomous anomaly detection using passive sensors.
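    For reference, the classical baseline in hyperspectral anomaly detection is the RX (Reed-Xiaoli) detector, sketched below on a synthetic cube; the PS3 HADA is a different, random-sampling algorithm and is not reproduced here.

```python
import numpy as np

# RX (Reed-Xiaoli) anomaly detector: the classical baseline, not the PS3 HADA.

def rx_scores(cube):
    """Mahalanobis distance of each pixel spectrum to the global background."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(bands))
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d).reshape(h, w)

rng = np.random.default_rng(1)
cube = rng.normal(0, 1, (16, 16, 8))     # synthetic background, 8 bands
cube[8, 8] += 6.0                        # implant one anomalous pixel
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # anomaly at (8, 8)
```

Note that RX does estimate background statistics from the scene itself, which is precisely the dependence on clutter modeling that sampling-based approaches like the HADA try to relax.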

  15. Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.

    PubMed

    Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin

    2005-09-15

    We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by diagonalization of two matrices with dimension the size of the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space as a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while our orbitals are orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.

  16. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.

  17. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  18. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
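    The importance-sampling stage can be illustrated on a toy limit state with a known answer: shift the sampling density to the design point and reweight back to the standard normal. The limit state and design point below are invented for illustration; the paper's two-step design-point update and the subsequent RSM stage are not reproduced.

```python
import numpy as np

# Importance-sampling estimate of a small failure probability (toy limit state).

def failure_probability(g, design_point, n=20000, seed=0):
    """Sample around the design point and reweight back to N(0, I)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(design_point, 1.0, size=(n, len(design_point)))
    # weight = standard-normal density / shifted sampling density (log form)
    log_w = -0.5 * (u ** 2).sum(1) + 0.5 * ((u - design_point) ** 2).sum(1)
    return np.mean((g(u) <= 0) * np.exp(log_w))

beta = 3.0                                   # reliability index of the toy problem
g = lambda u: beta - u[:, 0]                 # failure when u0 >= beta
pf = failure_probability(g, design_point=np.array([beta, 0.0]))
print(pf)   # close to Phi(-3) ~ 1.35e-3
```

Centering the samples on the design point makes roughly half of them fall in the failure region, so a few thousand samples suffice where crude MCS would need millions for a probability of order 10^-3 at comparable accuracy.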

  19. MITK global tractography

    NASA Astrophysics Data System (ADS)

    Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.

    2012-02-01

    Fiber tracking algorithms yield valuable information for neurosurgery as well as automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK enriches the accessibility of tractography algorithms for the science community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing, and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.

  20. UWB Tracking Algorithms: AOA and TDOA

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, D.; Ngo, P.; Gross, J.; Refford, Melinda

    2006-01-01

    Ultra-Wideband (UWB) tracking prototype systems are currently under development at NASA Johnson Space Center for various applications in space exploration. For long-range applications, a two-cluster Angle of Arrival (AOA) tracking method is employed for implementation of the tracking system; for close-in applications, a Time Difference of Arrival (TDOA) positioning methodology is exploited. Both AOA and TDOA are chosen to utilize the achievable fine time resolution of UWB signals. This talk presents a brief introduction to the AOA and TDOA methodologies. Theoretical analysis of these two algorithms reveals how the relevant parameters impact the tracking resolution. For the AOA algorithm, simulations show that a tracking resolution of less than 0.5% of the range can be achieved with the currently achievable time resolution of UWB signals. For the TDOA algorithm used in close-in applications, simulations show that (sub-inch) high tracking resolution is achieved with a chosen tracking baseline configuration. The analytical and simulated results provide insightful guidance for the UWB tracking system design.
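    The TDOA principle can be sketched in 2-D: each time difference constrains the emitter to a hyperbola, and the position is the point whose predicted TDOAs best match the measurements. The brute-force grid search, receiver layout, and noise-free measurements below are illustrative only; real UWB systems use closed-form or iterative solvers.

```python
import numpy as np

# TDOA sketch: locate a 2-D emitter from time differences of arrival
# at four receivers via residual grid search (illustrative solver).

C = 0.299792458   # propagation speed, m/ns

def tdoa_locate(receivers, tdoas, span=10.0, step=0.05):
    """Search for the position whose predicted TDOAs fit the measurements best."""
    xs = np.arange(0.0, span, step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(receivers[:, 0] - x, receivers[:, 1] - y)
            pred = (d[1:] - d[0]) / C          # TDOAs relative to receiver 0
            err = np.sum((pred - tdoas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

receivers = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
truth = np.array([3.0, 4.0])
d = np.hypot(receivers[:, 0] - truth[0], receivers[:, 1] - truth[1])
tdoas = (d[1:] - d[0]) / C                     # noise-free measurements
print(tdoa_locate(receivers, tdoas))           # -> approximately (3.0, 4.0)
```

With UWB's sub-nanosecond time resolution, each nanosecond of TDOA error corresponds to roughly 30 cm of range difference, which is why fine time resolution translates directly into fine position resolution.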

  1. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  2. Healthwatch-2 System Overview

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Mosher, Marianne; Huff, Edward M.

    2004-01-01

    Healthwatch-2 (HW-2) is a research tool designed to facilitate the development and testing of in-flight health monitoring algorithms. HW-2 software is written in C/C++ and executes on an x86-based computer running the Linux operating system. The executive module has interfaces for collecting various signal data, such as vibration, torque, tachometer, and GPS. It is designed to perform in-flight time or frequency averaging based on specifications defined in a user-supplied configuration file. Averaged data are then passed to a user-supplied algorithm written as a Matlab function. This allows researchers a convenient method for testing in-flight algorithms. In addition to its in-flight capabilities, HW-2 software is also capable of reading archived flight data and processing it as if collected in-flight. This allows algorithms to be developed and tested in the laboratory before being flown. Currently HW-2 has passed its checkout phase and is collecting data on a Bell OH-58C helicopter operated by the U.S. Army at NASA Ames Research Center.

  3. Software-Implemented Fault Tolerance in Communications Systems

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1994-01-01

    Software-implemented fault tolerance (SIFT) is used in many computer-based command, control, and communications (C(3)) systems to provide the nearly continuous availability that they require. In the communications subsystem of Space Station Alpha, SIFT algorithms are used to detect and recover from failures in the data and command link between the Station and its ground support. The paper presents a review of these algorithms and discusses how such techniques can be applied to similar systems found in applications such as manufacturing control, military communications, and programmable devices such as pacemakers. With support from the Tracking and Communication Division of NASA's Johnson Space Center, researchers at the University of Wyoming are developing a testbed for evaluating the effectiveness of these algorithms prior to their deployment. This testbed will be capable of simulating a variety of C(3) system failures and recording the response of the Space Station SIFT algorithms to these failures. The design of this testbed and the applicability of the approach in other environments is described.

  4. Peak-Seeking Control For Reduced Fuel Consumption: Flight-Test Results For The Full-Scale Advanced Systems Testbed FA-18 Airplane

    NASA Technical Reports Server (NTRS)

    Brown, Nelson

    2013-01-01

    A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. This presentation also focuses on the design of the flight experiment and the practical challenges of conducting the experiment.
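    The descent loop can be sketched on a synthetic quadratic fuel-flow map. This is illustrative only: the performance function, optimum, and step sizes are invented, and the flight algorithm estimates the gradient with a time-varying Kalman filter rather than the central differences used here.

```python
import numpy as np

# Steepest-descent trim search on a synthetic fuel-flow map (sketch; the
# flight system estimates the gradient with a Kalman filter, not as below).

def fuel_flow(x):
    """Toy performance function: fuel flow vs. two trim-surface positions."""
    opt = np.array([1.5, -0.5])                # hypothetical optimal trim
    return 100.0 + 8.0 * np.sum((x - opt) ** 2)

def peak_seek(f, x0, step=0.05, h=0.01, iters=200):
    x = np.asarray(x0, float)
    for _ in range(iters):
        grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])     # central differences
        x = x - step * grad                            # descend toward minimum flow
    return x

trim = peak_seek(fuel_flow, [0.0, 0.0])
print(np.round(trim, 3))   # converges near the optimum [1.5, -0.5]
```

In flight the "function evaluations" are noisy fuel-flow measurements at perturbed surface positions, which is why a Kalman filter (rather than raw finite differences) is used to smooth the gradient estimate.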

  5. Protein Structure Determination by Assembling Super-Secondary Structure Motifs Using Pseudocontact Shifts.

    PubMed

    Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas

    2017-03-07

    Computational and nuclear magnetic resonance hybrid approaches provide efficient tools for 3D structure determination of small proteins, but currently available algorithms struggle to perform with larger proteins. Here we demonstrate a new computational algorithm that assembles the 3D structure of a protein from its constituent super-secondary structural motifs (Smotifs) with the help of pseudocontact shift (PCS) restraints for backbone amide protons, where the PCSs are produced from different metal centers. The algorithm, DINGO-PCS (3D assembly of Individual Smotifs to Near-native Geometry as Orchestrated by PCSs), employs the PCSs to recognize, orient, and assemble the constituent Smotifs of the target protein without any other experimental data or computational force fields. Using a universal Smotif database, the DINGO-PCS algorithm exhaustively enumerates any given Smotif. We benchmarked the program against ten different protein targets ranging from 100 to 220 residues with different topologies. For nine of these targets, the method was able to identify near-native Smotifs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Post-launch validation of Multispectral Thermal Imager (MTI) data and algorithms

    NASA Astrophysics Data System (ADS)

    Garrett, Alfred J.; Kurzeja, Robert J.; O'Steen, B. L.; Parker, Matthew J.; Pendergast, Malcolm M.; Villa-Aleman, Eliel

    1999-10-01

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL) and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern U.S., Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V&V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in Nevada. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V&V.

  7. Superior Rhythm Discrimination With the SmartShock Technology Algorithm - Results of the Implantable Defibrillator With Enhanced Features and Settings for Reduction of Inaccurate Detection (DEFENSE) Trial.

    PubMed

    Oginosawa, Yasushi; Kohno, Ritsuko; Honda, Toshihiro; Kikuchi, Kan; Nozoe, Masatsugu; Uchida, Takayuki; Minamiguchi, Hitoshi; Sonoda, Koichiro; Ogawa, Masahiro; Ideguchi, Takeshi; Kizaki, Yoshihisa; Nakamura, Toshihiro; Oba, Kageyuki; Higa, Satoshi; Yoshida, Keiki; Tsunoda, Soichi; Fujino, Yoshihisa; Abe, Haruhiko

    2017-08-25

    Shocks delivered by implanted anti-tachyarrhythmia devices, even when appropriate, lower quality of life and survival. The new SmartShock Technology® (SST) discrimination algorithm was developed to prevent the delivery of inappropriate shocks. This prospective, multicenter, observational study compared the rate of inaccurate detection of ventricular tachyarrhythmia using the SST vs. a conventional discrimination algorithm. Methods and Results: Recipients of implantable cardioverter defibrillators (ICD) or cardiac resynchronization therapy defibrillators (CRT-D) equipped with the SST algorithm were enrolled and followed up every 6 months. The tachycardia detection rate was set at ≥150 beats/min with the SST algorithm. The primary endpoint was the time to first inaccurate detection of ventricular tachycardia (VT) with the conventional vs. the SST discrimination algorithm, up to 2 years of follow-up. Between March 2012 and September 2013, 185 patients (mean age, 64.0±14.9 years; men, 74%; secondary prevention indication, 49.5%) were enrolled at 14 Japanese medical centers. Inaccurate detection was observed in 32 patients (17.6%) with the conventional vs. 19 patients (10.4%) with the SST algorithm. SST significantly lowered the rate of inaccurate detection by dual-chamber devices (HR, 0.50; 95% CI: 0.263-0.950; P=0.034). Compared with previous algorithms, the SST discrimination algorithm significantly lowered the rate of inaccurate detection of VT in recipients of dual-chamber ICD or CRT-D.

  8. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family, since finding a feasible solution is itself an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using 14- and 100-processor BBN Butterfly Plus computers. The computational results represent the largest instances ever solved to optimality on any type of computer.
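    For a sense of what "exact" means here, the Held-Karp dynamic program below solves a tiny ATSP instance optimally. It is only a reference point: its O(n² · 2ⁿ) cost is hopeless at the 7,500-city scale reported above, which the thesis reaches with specialized parallel branch-and-bound-style algorithms not reproduced here.

```python
from itertools import combinations

# Held-Karp dynamic program: exact ATSP on a small instance (reference only).

def held_karp(dist):
    n = len(dist)
    # best[(S, j)] = cheapest path from city 0 through set S, ending at city j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(best[(fs - {j}, k)] + dist[k][j]
                                    for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],      # asymmetric cost matrix: dist[i][j] != dist[j][i]
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # -> 21 (tour 0 -> 2 -> 3 -> 1 -> 0)
```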

  9. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a perceptual luminance algorithm based on color constancy. It performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement the Retinex algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper perform well for image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to estimate the light information, which should be removed from the intensity channel. We then subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
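
    The intensity-channel processing described above (Gaussian center-surround estimate of the illumination, subtracted in the log domain, scaled by α) can be sketched as follows. The surround scale sigma and the default alpha are illustrative placeholders, not the paper's tuned values:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: the center-surround 'light' estimate."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # blur along rows, then columns (kernel must be shorter than the image)
    blurred = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, blurred, kernel, mode="same")

def ssr_intensity(intensity, sigma=15.0, alpha=1.0, eps=1e-6):
    """Single-scale Retinex on the HSI intensity channel.

    Subtracts the log of the Gaussian-estimated illumination from the log
    image, leaving a scaled version of the reflection component.
    """
    log_reflect = (np.log(intensity + eps)
                   - np.log(gaussian_blur(intensity, sigma) + eps))
    out = alpha * log_reflect
    # rescale to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min() + eps)
```

MSR would average several such outputs over different sigma values; here only the single-scale case is shown.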

  10. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, the desired information must be computed quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  11. Road screening and distribution route multi-objective robust optimization for hazardous materials based on neural network and genetic algorithm.

    PubMed

    Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang

    2018-01-01

    Route optimization of hazardous materials transportation is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may pose a security risk if road screening is not completed before the distribution route is optimized. To address the road screening problem for hazardous materials transportation, a road screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN) by analyzing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is constructed for the hazardous materials transportation problem with a single distribution center, minimizing transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy to complete the selection operation, applies partial matching cross shift and single ortho swap methods to complete the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of hazardous materials transportation roads can be found quickly through the proposed road screening algorithm based on GA-LM-NN, whereas distribution route Pareto solutions with different levels of robustness can be found rapidly through the proposed multi-objective robust optimization model and algorithm.
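
    The last step of such a multi-objective search, constructing Pareto-optimal solutions over the two objectives (risk, time), amounts to non-dominated filtering. A minimal sketch on illustrative data, not the paper's exclusive-method implementation:

```python
def pareto_front(solutions):
    """Return the non-dominated (risk, time) pairs: a solution is dropped
    if some other solution is at least as good in both objectives and
    strictly better in at least one (both objectives are minimized)."""
    front = []
    for s in solutions:
        dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1]
                        and (o[0] < s[0] or o[1] < s[1])
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front
```

For example, among candidate routes [(3, 10), (5, 6), (4, 8), (6, 5), (7, 7), (3, 12)], the pairs (7, 7) and (3, 12) are dominated and the remaining four form the Pareto front.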

  12. The Griffiss Institute Summer Faculty Program

    DTIC Science & Technology

    2013-05-01

    can inherit the advantages of the static approach while overcoming its drawbacks. Our solution is centered on the following: (i) application-layer web...inverted pendulum balancing problem. In these challenging environments we show that our algorithm not only allows NEAT to scale to high-dimensional spaces

  13. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
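
    The alternating minimization at the heart of ITQ can be sketched in a few lines: fixing the rotation gives the binary codes by thresholding, and fixing the codes gives the rotation as an orthogonal Procrustes problem solved by SVD. This is a schematic reimplementation on zero-centered, PCA-projected data, not the authors' released code:

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Iterative quantization: find a rotation R of zero-centered data X
    (n samples x d dims, already PCA-projected) that reduces the
    quantization error ||B - X R||_F with B in {-1, +1}^(n x d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # random initial rotation via QR decomposition of a Gaussian matrix
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))
    for _ in range(n_iter):
        B = np.sign(X @ R)                  # fix R: threshold to binary codes
        B[B == 0] = 1
        U, _, Vt = np.linalg.svd(X.T @ B)   # fix B: orthogonal Procrustes,
        R = U @ Vt                          # R = argmax tr(R^T X^T B)
    B = np.sign(X @ R)
    B[B == 0] = 1
    return B, R
```

In practice the codes are stored as bits (B > 0) and compared with Hamming distance; the sketch keeps the ±1 form used in the objective.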

  14. Flight evaluation of a computer aided low-altitude helicopter flight guidance system

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.; Jones, Raymond D.; Clark, Raymond

    1993-01-01

    The Flight Systems Development branch of the U.S. Army's Avionics Research and Development Activity (AVRADA) and NASA Ames Research Center developed for flight testing a Computer Aided Low-Altitude Helicopter Flight (CALAHF) guidance system. The system includes a trajectory-generation algorithm which uses dynamic programming, and a helmet-mounted display (HMD) presentation of a pathway-in-the-sky, a phantom aircraft, and flight-path vector/predictor guidance symbology. The trajectory-generation algorithm uses knowledge of the global mission requirements, a digital terrain map, aircraft performance capabilities, and precision navigation information to determine a trajectory between mission waypoints that seeks valleys to minimize threat exposure. The system was developed and evaluated through extensive use of piloted simulation and has demonstrated a 'pilot centered' concept of automated and integrated navigation and terrain mission planning flight guidance. It has shown a significant improvement in pilot situational awareness and mission effectiveness, as well as a decrease in the training and proficiency time required for near-terrain, nighttime, adverse-weather operations.

  15. Electric field mill network products to improve detection of the lightning hazard

    NASA Technical Reports Server (NTRS)

    Maier, Launa M.

    1987-01-01

    An electric field mill network has been used at Kennedy Space Center for over 10 years as part of the thunderstorm detection system. Several algorithms are currently available to improve the informational output of the electric field mill data. The charge distributions of roughly 50 percent of all lightning can be modeled as if the lightning reduced the charged cloud by a point charge or a point dipole. Using these models, the spatial differences in the lightning-induced electric field changes, and a least-squares algorithm to obtain an optimum solution, the three-dimensional locations of the lightning charge centers can be determined. During the lifetime of a thunderstorm, dynamically induced charging, modeled as a current source, can be located spatially with measurements of Maxwell current density. The electric field mills can be used to calculate the Maxwell current density at times when it is equal to the displacement current density. These improvements will produce more accurate assessments of the potential electrical activity, identify active cells, and forecast thunderstorm termination.
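
    The monopole part of that inversion can be illustrated with a toy model: over flat conducting ground, neutralizing a point charge Q at height z produces a vertical field change of roughly 2kQz/d^3 at each station (the factor of 2 comes from the image charge). The sketch below, with an idealized geometry and a coarse grid search rather than KSC's production least-squares code, exploits the fact that Q enters linearly:

```python
import numpy as np

K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def field_change(stations, loc, q):
    """Vertical E-field change at ground stations (x, y pairs) from removing
    point charge q at loc = (x, y, z); flat conducting ground is modeled
    with an image charge, giving delta_E = 2*k*q*z / d^3."""
    dx = stations[:, 0] - loc[0]
    dy = stations[:, 1] - loc[1]
    d3 = (dx**2 + dy**2 + loc[2] ** 2) ** 1.5
    return 2.0 * K * q * loc[2] / d3

def locate_charge(stations, measured, grid):
    """Grid search over candidate (x, y, z); at each candidate the optimal
    charge follows from 1-D linear least squares, and the candidate with
    the smallest residual wins."""
    best = None
    for loc in grid:
        g = field_change(stations, loc, 1.0)  # response to a unit charge
        q = (g @ measured) / (g @ g)          # least-squares charge estimate
        resid = np.sum((measured - q * g) ** 2)
        if best is None or resid < best[0]:
            best = (resid, loc, q)
    return best[1], best[2]
```

A real implementation would refine the location continuously (e.g. Gauss-Newton) instead of stopping at grid resolution.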

  16. BLAYER User Guide

    NASA Technical Reports Server (NTRS)

    Saunders, David A.; Prabhu, Dinesh K.

    2018-01-01

    A software utility employed for post-processing computational fluid dynamics solutions about atmospheric entry vehicles is described as a supplement to the documentation within the source code. This BLAYER application and its ancillary utilities are in the public domain at https://sourceforge.net/projects/cfdutilities/. BLAYER was developed at NASA Ames Research Center in support of the DPLR (Data Parallel Line Relaxation) flow solver. Its underlying algorithm has since been incorporated by others into the LAURA and US3D flow solvers at NASA Langley Research Center and the University of Minnesota, respectively. The essence of the algorithm is to locate the boundary layer edge by seeking the peak curvature in a total enthalpy profile. Turning that insight into a practical tool suited to a wide range of possible profiles has led to a hybrid two-stage method. The traditional method, locating (say) 99.5% of free-stream total enthalpy, remains an option, though it may be less robust. Details are provided and multiple examples are presented.
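
    The peak-curvature criterion can be illustrated on a synthetic total-enthalpy profile. This sketch uses simple finite differences on a toy exponential profile, not BLAYER's actual curve fits or its two-stage hybrid:

```python
import numpy as np

def edge_by_peak_curvature(y, h):
    """Locate the boundary-layer edge as the index of peak curvature of the
    profile h(y), kappa = |h''| / (1 + h'^2)^(3/2), via finite differences."""
    dh = np.gradient(h, y)
    d2h = np.gradient(dh, y)
    kappa = np.abs(d2h) / (1.0 + dh**2) ** 1.5
    return int(np.argmax(kappa))

# synthetic normalized profile: total enthalpy rises across a layer of
# thickness ~delta, then flattens toward its free-stream value of 1
y = np.linspace(0.0, 1.0, 400)
delta = 0.1
h = 1.0 - np.exp(-y / delta)
i_edge = edge_by_peak_curvature(y, h)
```

For this profile the curvature peaks where the enthalpy has recovered to roughly 93% of free stream, i.e. near the knee where the profile bends toward flat, which is the behavior the edge criterion relies on.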

  17. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining the relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.

  18. The MUSIC algorithm for impedance tomography of small inclusions from discrete data

    NASA Astrophysics Data System (ADS)

    Lechleiter, A.

    2015-09-01

    We consider a point-electrode model for electrical impedance tomography and show that current-to-voltage measurements from finitely many electrodes are sufficient to characterize the positions of a finite number of point-like inclusions. More precisely, we consider an asymptotic expansion with respect to the size of the small inclusions of the relative Neumann-to-Dirichlet operator in the framework of the point electrode model. This operator is naturally finite-dimensional and models difference measurements by finitely many small electrodes of the electric potential with and without the small inclusions. Moreover, its leading-order term explicitly characterizes the centers of the small inclusions if the (finite) number of point electrodes is large enough. This characterization is based on finite-dimensional test vectors and leads naturally to a MUSIC algorithm for imaging the inclusion centers. We show both the feasibility and limitations of this imaging technique via two-dimensional numerical experiments, considering in particular the influence of the number of point electrodes on the algorithm’s images.
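
    The noise-subspace projection behind MUSIC can be sketched with a simplified scalar test-vector kernel standing in for the paper's Neumann-to-Dirichlet asymptotics: the pseudospectrum blows up exactly where the test vector lies in the signal subspace of the data matrix, i.e. at the inclusion centers. Everything below (kernel, electrode layout, rank-two data model) is an illustrative assumption:

```python
import numpy as np

def music_pseudospectrum(A, test_vectors, n_inclusions):
    """MUSIC: project test vectors onto the noise subspace of the symmetric
    data matrix A; 1 / ||P_noise g(z)||^2 peaks where g(z) lies in the
    signal subspace, i.e. at the inclusion centers."""
    _, vecs = np.linalg.eigh(A)            # eigenvalues in ascending order
    noise = vecs[:, :-n_inclusions]        # span of the near-zero eigenvalues
    proj = noise.T @ test_vectors          # (M - n_inclusions) x n_grid
    return 1.0 / (np.sum(proj**2, axis=0) + 1e-15)

# M point electrodes on the unit circle; simplified scalar kernel g(z)
M = 16
angles = 2 * np.pi * np.arange(M) / M
electrodes = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def g(z):
    return 1.0 / np.sum((electrodes - z) ** 2, axis=1)

# sampling grid of candidate centers and two true inclusions on the grid
grid = [np.array([x, y]) for x in np.linspace(-0.6, 0.6, 13)
                         for y in np.linspace(-0.6, 0.6, 13)]
inclusions = [np.array([0.3, -0.1]), np.array([-0.2, 0.4])]
A = sum(np.outer(g(z), g(z)) for z in inclusions)  # rank-2 data matrix
spectrum = music_pseudospectrum(A, np.stack([g(z) for z in grid], axis=1), 2)
```

The paper's point about the number of electrodes shows up here too: with too few electrodes the signal subspace cannot be separated from the test vectors and the peaks smear out.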

  19. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms.

  20. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones mainly account for the short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies on anticyclones are limited, even though they are also associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, and the little research that has been done is essentially confined to the manual analysis of synoptic charts. To construct a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied to the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of transient activity has been widely proven. It has been extensively applied to cyclone studies worldwide, including the Mediterranean, though its use for anticyclone tracking has so far been limited to the Southern Hemisphere. In this study the performance of the tracking algorithm under different data resolutions and different choices of parameter settings in the scheme is examined. Our focus is on the appropriate modification of the algorithm in order to efficiently capture the individual characteristics of the anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of detected anticyclonic centers and the resulting tracks largely depend upon the data resolution and the search radius.
We also find that different scale anticyclones and secondary centers that lie within larger anticyclone structures can be adequately represented; this is important, since the extensions of major anticyclonic systems affect the Mediterranean basin throughout the year. Acknowledgement: This research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State. Some funding from the Australian Research Council is also acknowledged.

  1. CNES-NASA Studies of the Mars Sample Return Orbiter Aerocapture Phase

    NASA Technical Reports Server (NTRS)

    Fraysse, H.; Powell, R.; Rousseau, S.; Striepe, S.

    2000-01-01

    A Mars Sample Return (MSR) mission has been proposed as a joint CNES (Centre National d'Etudes Spatiales) and NASA effort in the ongoing Mars Exploration Program. The MSR mission is designed to return the first samples of Martian soil to Earth. The primary elements of the mission are a lander, rover, ascent vehicle, orbiter, and an Earth entry vehicle. The Orbiter has been allocated only 2700 kg on the launch phase to perform its part of the mission. This mass restriction has led to the decision to use an aerocapture maneuver at Mars for the orbiter. Aerocapture replaces the initial propulsive capture maneuver with a single atmospheric pass. This atmospheric pass will result in the proper apoapsis, but a periapsis raise maneuver is required at the first apoapsis. The use of aerocapture reduces the total mass requirement by approx. 45% for the same payload. This mission will be the first to use the aerocapture technique. Because the spacecraft is flying through the atmosphere, guidance algorithms must be developed that will autonomously provide the proper commands to reach the desired orbit while not violating any of the design parameters (e.g. maximum deceleration, maximum heating rate, etc.). The guidance algorithm must be robust enough to account for uncertainties in delivery states, atmospheric conditions, mass properties, control system performance, and aerodynamics. To study this very critical phase of the mission, a joint CNES-NASA technical working group has been formed. This group is composed of atmospheric trajectory specialists from CNES, NASA Langley Research Center and NASA Johnson Space Center. This working group is tasked with developing and testing guidance algorithms, as well as cross-validating CNES and NASA flight simulators for the Mars atmospheric entry phase of this mission. The final result will be a recommendation to CNES on the algorithm to use, and an evaluation of the flight risks associated with the algorithm. 
This paper will describe the aerocapture phase of the MSR mission, the main principles of the guidance algorithms that are under development, the atmospheric entry simulators developed for the evaluations, the process for the evaluations, and preliminary results from the evaluations.

  2. Algorithm for Wavefront Sensing Using an Extended Scene

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph; Ohara, Catherine

    2008-01-01

    A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image could be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted si(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated a reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and si(x,y). 4. Multiply the two transforms to obtain a cross-correlation function Ci(u,v) in the Fourier domain. Then let the phase of Ci(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform, Si(u,v), of the full NxN cell si(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, cumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
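
    The core of steps 3 through 5 (cross-correlating two cells in the Fourier domain and fitting slopes to the phase of the result) can be sketched as follows. This is an illustrative reimplementation for a single cell pair with a circular shift, not the flight software, and it skips the iteration loop of step 7:

```python
import numpy as np

def shift_from_phase_slopes(ref, cell, k_max=3):
    """Estimate the (dx, dy) shift between two N x N cells by least-squares
    fitting slopes to the phase of their cross-spectrum over the low
    frequencies |u|, |v| <= k_max, where the phase remains unwrapped."""
    n = ref.shape[0]
    # cross-correlation in the Fourier domain: F(ref) * conj(F(cell))
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cell))
    freqs = np.fft.fftfreq(n, d=1.0 / n)  # integer-valued frequencies
    rows, cols, phases = [], [], []
    for i, u in enumerate(freqs):
        for j, v in enumerate(freqs):
            if abs(u) <= k_max and abs(v) <= k_max and (u, v) != (0, 0):
                rows.append(u)
                cols.append(v)
                phases.append(np.angle(cross[i, j]))
    # phi(u, v) ~ (2*pi/n) * (u*dx + v*dy): a plane through the origin
    A = (2.0 * np.pi / n) * np.column_stack([rows, cols])
    shift, *_ = np.linalg.lstsq(A, np.array(phases), rcond=None)
    return shift  # (dx, dy) along axis 0 and axis 1
```

Restricting the fit to low frequencies keeps each phase sample inside the principal branch for modest shifts; larger shifts are what the cumulative iteration of step 7 handles.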

  3. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm uses normalized mutual information as the similarity metric, performs affine registration combined with multiresolution B-spline registration, and then fuses the resulting labels using the label-fusion strategy in Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel: it describes the relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance from the sample point to the landmarks. We then adopt the random forest (RF) classifier in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA.
    Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  4. Overview of NASA's MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) snow-cover Earth System Data Records

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.; Roman, Miguel O.

    2017-01-01

    Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6) and VIIRS Collection 1 (C1) represent the state-of-the-art global snow cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow cover algorithms for C1. 
    Both information content and usability were improved by including the Normalized Difference Snow Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16-year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.
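
    The NDSI carried in these products contrasts a visible band, where snow is bright, with a shortwave-infrared band, where snow is strongly absorbing (for MODIS, bands 4 and 6). A minimal sketch follows; the 0.4 threshold is the classic global screening value used for illustration here, whereas the products described above report the full NDSI range so users can apply their own criteria:

```python
import numpy as np

def ndsi(green, swir, eps=1e-9):
    """Normalized Difference Snow Index from visible (green) and
    shortwave-infrared reflectances: snow is bright in the visible and
    dark in the SWIR, so NDSI is high over snow."""
    return (green - swir) / (green + swir + eps)

def snow_mask(green, swir, threshold=0.4):
    """Binary snow map from an illustrative global NDSI threshold."""
    return ndsi(green, swir) > threshold
```

Operational algorithms add further screens (thermal, cloud, surface type) on top of the index before declaring snow.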

  5. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    NASA Technical Reports Server (NTRS)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that has some common characteristics to a NASA testbed at Stennis Space Center was used to verify our proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI has been derived for the tank system. Third, a new and general FDI procedure has been designed to distinguish process faults and sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  6. A rotorcraft flight database for validation of vision-based ranging algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1992-01-01

    A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.

  7. Discrimination of herbicide-resistant kochia with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Nugent, Paul W.; Shaw, Joseph A.; Jha, Prashant; Scherrer, Bryan; Donelick, Andrew; Kumar, Vipan

    2018-01-01

    A hyperspectral imager was used to differentiate herbicide-resistant from herbicide-susceptible biotypes of the agronomic weed kochia in different crops in the field at the Southern Agricultural Research Center in Huntley, Montana. Controlled greenhouse experiments showed that the imager captured enough information to classify plants as a crop, herbicide-susceptible kochia, or herbicide-resistant kochia. The current analysis is developing an algorithm that will work in less controlled outdoor situations. In overcast conditions, the algorithm correctly identified dicamba-resistant kochia, glyphosate-resistant kochia, and glyphosate- and dicamba-susceptible kochia with 67%, 76%, and 80% success rates, respectively.

  8. The CCSDS Lossless Data Compression Algorithm for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Day, John H. (Technical Monitor)

    2001-01-01

    In the late 80's, when the author started working at the Goddard Space Flight Center (GSFC) for the National Aeronautics and Space Administration (NASA), several scientists there were in the process of formulating the next generation of Earth viewing science instruments, the Moderate Resolution Imaging Spectroradiometer (MODIS). The instrument would have over thirty spectral bands and would transmit enormous data through the communications channel. This was when the author was assigned the task of investigating lossless compression algorithms for space implementation to compress science data in order to reduce the requirement on bandwidth and storage.

  9. GPS Modeling and Analysis. Summary of Research: GPS Satellite Axial Ratio Predictions

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; Reeh, Lisa

    2002-01-01

    This report outlines the algorithms developed at the Colorado Center for Astrodynamics Research to model yaw and predict the axial ratio as measured from a ground station. The algorithms are implemented in a collection of Matlab functions and scripts that read certain user input, such as ground station coordinates, the UTC time, and the desired GPS (Global Positioning System) satellites, and compute the above-mentioned parameters. The position information for the GPS satellites is obtained from Yuma almanac files corresponding to the prescribed date. The results are displayed graphically through time histories and azimuth-elevation plots.

  10. Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data.

    PubMed

    Horwitz, Leora I; Grady, Jacqueline N; Cohen, Dorothy B; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi K; Masica, Andrew L; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G; Ross, Joseph S; Drye, Elizabeth E; Krumholz, Harlan M; Bernheim, Susannah M

    2015-10-01

    It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. To develop an algorithm to identify planned readmissions, describe its performance characteristics, and identify improvements. Consensus-driven algorithm development and chart review validation study at 7 acute-care hospitals in 2 health systems. For development, all discharges qualifying for the publicly reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sample of same-hospital readmissions that were characterized as unplanned. We calculated weighted sensitivity and specificity, and positive and negative predictive values of the algorithm (version 2.1), compared with gold-standard chart review. In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation, we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall, 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6%, and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity of 49.8% (57.1% at large hospitals), weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5%. Positive predictive value was poor for the 2 most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. © 2015 Society of Hospital Medicine.
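    The validation arithmetic in this abstract reduces to a confusion matrix of algorithm labels against gold-standard chart review. A minimal sketch (the counts below are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion counts."""
    sensitivity = tp / (tp + fn)  # planned readmissions correctly flagged
    specificity = tn / (tn + fp)  # unplanned readmissions correctly left alone
    ppv = tp / (tp + fp)          # how often a "planned" flag is right
    npv = tn / (tn + fn)          # how often an "unplanned" label is right
    return sensitivity, specificity, ppv, npv

# illustrative counts only
sens, spec, ppv, npv = diagnostic_metrics(tp=45, fp=42, fn=55, tn=492)
```

    The study additionally weights these by sampling fractions, since planned readmissions were reviewed exhaustively but unplanned ones were sampled; that weighting is omitted here.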

  11. Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Henderson, Bradley G.; Chylek, Petr

    2003-11-01

    We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice: once using a single-view algorithm applied to the near-nadir image, and once using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend, which is interpreted as calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.
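    Reading "RMS error about the mean" as the standard deviation of the retrieval errors after removing their mean bias (an assumption about the authors' exact definition), the comparison against AERONET can be sketched as:

```python
import math

def aod_error_stats(retrieved, aeronet):
    """Bias and RMS error about the mean for AOD retrievals vs. AERONET."""
    errs = [r - a for r, a in zip(retrieved, aeronet)]
    bias = sum(errs) / len(errs)  # positive bias -> systematic overestimate
    rms_about_mean = math.sqrt(
        sum((e - bias) ** 2 for e in errs) / len(errs))  # scatter after bias removal
    return bias, rms_about_mean
```

    On this reading, removing the calibration-drift trend mentioned in the abstract shrinks the scatter term without necessarily changing the bias term.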

  12. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
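    One common definition of the centered root mean square error, assumed here, removes each series' own mean before differencing, so variability errors are measured independently of any constant offset:

```python
import math

def centered_rmse(homogenized, truth):
    """Centered RMSE: each series has its own mean removed before
    comparison, so a constant shift between the two series scores zero
    and only anomaly (variability) errors are penalized."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    diffs = [(h - mh) - (t - mt) for h, t in zip(homogenized, truth)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

    A homogenized series offset from the truth by a constant therefore scores zero, while one with a spurious break or trend scores positive.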

  13. An effective hybrid self-adapting differential evolution algorithm for the joint replenishment and location-inventory problem in a three-level supply chain.

    PubMed

    Wang, Lin; Qu, Hui; Chen, Tao; Yan, Fang-Ping

    2013-01-01

    Integrating different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have proven effective on similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially at large scale.
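    As a baseline for what HSDE builds on, here is a minimal classic DE/rand/1/bin sketch on a toy objective. The paper's HSDE additionally hybridizes the search and self-adapts F and CR; those refinements, and all parameter values below, are illustrative assumptions:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.6, CR=0.9, gens=150, seed=1):
    """Classic DE/rand/1/bin minimizer over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # mutation: three distinct donors, none equal to the target i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

# usage on a 2-D sphere test function
best, val = differential_evolution(lambda x: x[0] ** 2 + x[1] ** 2,
                                   [(-5.0, 5.0), (-5.0, 5.0)])
```

    Self-adaptation, roughly, replaces the fixed F and CR with per-individual values that evolve alongside the solutions.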

  14. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
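    The winner-only update that the abstract identifies as a weakness of LVQ/SHCM can be sketched as a single LVQ1 step (a minimal sketch, not the paper's GLVQ/FLVQ rules, which update every node):

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update, in place: find the nearest prototype to x, then
    move it toward x if its label matches y, away from x otherwise.
    All other prototypes are left untouched -- the behavior criticized
    in the abstract."""
    win = min(range(len(prototypes)),
              key=lambda k: sum((p - xi) ** 2
                                for p, xi in zip(prototypes[k], x)))
    sign = 1.0 if labels[win] == y else -1.0
    prototypes[win] = [p + sign * lr * (xi - p)
                       for p, xi in zip(prototypes[win], x)]
    return win
```

    GLVQ/FLVQ instead derive per-node update weights from an objective function, so every prototype moves for each input and no neighborhood schedule is needed.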

  15. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

    We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(mi/me) c/vTe) larger than the explicit CFL. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. Work supported by the LANL LDRD program and the DOE-SC ASCR office.

  16. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    Integrating different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have proven effective on similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially at large scale. PMID:24453822

  17. AntiClustAl: Multiple Sequence Alignment by antipole clustering and linear approximate 1-median computation.

    PubMed

    Di Pietro, C; Di Pietro, V; Emmanuele, G; Ferro, A; Maugeri, T; Modica, E; Pigola, G; Pulvirenti, A; Purrello, M; Ragusa, M; Scalia, M; Shasha, D; Travali, S; Zimmitti, V

    2003-01-01

    In this paper we present a new Multiple Sequence Alignment (MSA) algorithm called AntiClustAl. The method makes use of the commonly used idea of aligning homologous sequences belonging to classes generated by some clustering algorithm, and then continuing the alignment process in a bottom-up way along a suitable tree structure. The final result is then read at the root of the tree. Multiple sequence alignment in each cluster makes use of progressive alignment with the 1-median (center) of the cluster. The 1-median of a set S of sequences is the element of S which minimizes the average distance from any other sequence in S. Its exact computation requires quadratic time. The basic idea of our proposed algorithm is to make use of a simple and natural algorithmic technique based on randomized tournaments, which has been successfully applied to large-size search problems in general metric spaces. In particular, a clustering algorithm called Antipole tree and an approximate linear 1-median computation are used. Our algorithm, compared with Clustal W, a widely used tool for MSA, shows better running times with fully comparable alignment quality. A successful biological application showing high amino acid conservation during the evolution of Xenopus laevis SOD2 is also cited.
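    The approximate 1-median by randomized tournaments can be sketched as follows, using Hamming distance as a stand-in for the alignment-based distance (the group size and distance function here are illustrative assumptions, not the paper's choices):

```python
import random

def hamming(s, t):
    """Toy distance for equal-length strings."""
    return sum(a != b for a, b in zip(s, t))

def approx_1_median(seqs, dist=hamming, group=3, seed=0):
    """Randomized-tournament 1-median: shuffle the candidates, split them
    into small groups, keep each group's exact (local) 1-median, and
    repeat until one sequence remains. Each round touches every survivor
    only within its small group, giving near-linear total work versus
    the quadratic cost of the exact 1-median."""
    rng = random.Random(seed)
    cand = list(seqs)
    while len(cand) > 1:
        rng.shuffle(cand)
        winners = []
        for i in range(0, len(cand), group):
            g = cand[i:i + group]
            winners.append(min(g, key=lambda s: sum(dist(s, t) for t in g)))
        cand = winners
    return cand[0]
```

    With three sequences there is a single tournament, so the result coincides with the exact 1-median; on larger sets the tournament winner is only an approximation.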

  18. Prediction of protein long-range contacts using an ensemble of genetic algorithm classifiers with sequence profile centers.

    PubMed

    Chen, Peng; Li, Jinyan

    2010-05-17

    Prediction of long-range inter-residue contacts is an important topic in bioinformatics research. It is helpful for determining protein structures, understanding protein folding, and therefore advancing the annotation of protein functions. In this paper, we propose a novel ensemble of genetic algorithm classifiers (GaCs) to address the long-range contact prediction problem. Our method is based on a key idea called sequence profile centers (SPCs). Each SPC is the average of the sequence profiles of residue pairs belonging to the same contact class or non-contact class. GaCs train on multiple but different pairs of long-range contact data (positive data) and long-range non-contact data (negative data). The negative data sets, having roughly the same sizes as the positive ones, are constructed by random sampling over the original imbalanced negative data. As a result, about 21.5% of long-range contacts are correctly predicted. We also found that the ensemble of GaCs indeed makes an accuracy improvement of around 5.6% over a single GaC. Classifiers using sequence profile centers may advance long-range contact prediction. In line with this approach, key structural features in proteins could be determined with high efficiency and accuracy.
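    The sequence-profile-center idea reduces to averaging profile vectors per class and assigning a residue pair to the nearer center. A minimal sketch (the paper's GaCs additionally evolve the classifiers with a genetic algorithm, which is omitted here):

```python
def profile_center(profiles):
    """Average a list of equal-length sequence-profile vectors (an SPC)."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def classify(profile, contact_center, noncontact_center):
    """Assign a residue-pair profile to the nearer class center."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return ("contact"
            if d2(profile, contact_center) <= d2(profile, noncontact_center)
            else "non-contact")
```

    In this reading, the ensemble averages such decisions over GaCs trained on different balanced negative samples, which is what mitigates the class imbalance.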

  19. Multi-sensor Efforts to Detect Oil slicks at the Ocean Surface — An Applied Science Project

    NASA Astrophysics Data System (ADS)

    Gallegos, S. C.; Pichel, W. G.; Hu, Y.; Garcia-Pineda, O. G.; Kukhtarev, N.; Lewis, D.

    2012-12-01

    In 2008, the Naval Research Laboratory at Stennis Space Center (NRL-SSC), NASA Langley Research Center (LaRC), and the NOAA Center for Satellite Applications and Research (STAR), with the support of the NASA Applied Science Program, developed the concept for an operational oil detection system to support NOAA's mission of oil spill monitoring and response. Due to the current lack of a spaceborne sensor specifically designed for oil detection, this project relied on data and algorithms for the Synthetic Aperture Radar (SAR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). The NOAA Satellite Analysis Branch (NOAA/SAB) was the transition point for those algorithms. Part of the research also included the evaluation of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) capabilities for detection of surface and subsurface oil. In April 2010, while this research was being conducted in the Gulf of Mexico, the Deepwater Horizon (DWH) oil spill, the largest accidental marine oil spill in the history of the petroleum industry, impacted our study area. This incident provided opportunities to expand our efforts to the field, the laboratory, and the data of other sensors such as the Hyperspectral Imager for the Coastal Zone (HICO). We summarize the results of our initial effort and describe in detail those efforts carried out during the DWH oil spill.

  20. Locating Structural Centers: A Density-Based Clustering Method for Community Detection

    PubMed Central

    Liu, Gongshen; Li, Jianhua; Nees, Jan P.

    2017-01-01

    Uncovering underlying community structures in complex networks has received considerable attention because of its importance in understanding structural attributes and group characteristics of networks. The algorithmic identification of such structures is a significant challenge. Local expanding methods have proven to be efficient and effective in community detection, but most methods are sensitive to initial seeds and built-in parameters. In this paper, we present a local expansion method by density-based clustering, which aims to uncover the intrinsic network communities by locating the structural centers of communities based on a proposed structural centrality. The structural centrality takes into account the local density of nodes and the relative distance between nodes. The proposed algorithm expands a community from the structural center to the border with a single local search procedure. The local expanding procedure follows a heuristic strategy that allows it to find complete community structures. Moreover, it can identify different node roles (cores and outliers) in communities by defining a border region. The experiments involve both real-world and artificial networks, and give a comparative view to evaluate the proposed method. The results of these experiments show that the proposed method performs more efficiently than current state-of-the-art methods, with comparable clustering performance. PMID:28046030
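    Reading the structural centrality as a density-peaks-style score (an assumed interpretation: local density times distance to the nearest denser node), locating centers can be sketched on point data as:

```python
def structural_centers(points, dc=1.0, k=2):
    """Pick the k highest-scoring structural centers.
    rho   = number of neighbors within radius dc (local density)
    delta = distance to the nearest point of strictly higher density
            (for the densest points: distance to the farthest point)
    Centers score high on both, i.e. on rho * delta."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and d(points[i], points[j]) < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [d(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else
                     max(d(points[i], points[j]) for j in range(n) if j != i))
    scores = [rho[i] * delta[i] for i in range(n)]
    return sorted(range(n), key=scores.__getitem__, reverse=True)[:k]
```

    On a graph, the same idea would replace Euclidean distance with shortest-path or similarity-based distance; this point-cloud version is only illustrative.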

  1. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things

    PubMed Central

    Yi, Meng; Chen, Qingkui; Xiong, Neal N.

    2016-01-01

    This paper considers the distributed access and control problem of a massive wireless sensor network data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal group-migration scheduling algorithm based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes: it enhances the accessibility of service requests, reduces network delay, and achieves higher load-balancing capacity and a higher resource utilization rate. PMID:27827878

  2. Comparison and validation of injury risk classifiers for advanced automated crash notification systems.

    PubMed

    Kusano, Kristofer; Gabler, Hampton C

    2014-01-01

    The odds of death for a seriously injured crash victim are drastically reduced if he or she receives care at a trauma center. Advanced automated crash notification (AACN) algorithms are postcrash safety systems that use data measured by the vehicles during the crash to predict the likelihood of occupants being seriously injured. The accuracy of these models is crucial to the success of an AACN. The objective of this study was to compare the predictive performance of competing injury risk models and algorithms: logistic regression, random forest, AdaBoost, naïve Bayes, support vector machine, and k-nearest neighbors classification. This study compared machine learning algorithms to the widely adopted logistic regression modeling approach. Machine learning algorithms have not been commonly studied in the motor vehicle injury literature. Machine learning algorithms may have higher predictive power than logistic regression, despite the drawback of lacking the ability to perform statistical inference. To evaluate the performance of these algorithms, data on 16,398 vehicles involved in non-rollover collisions were extracted from the NASS-CDS. Vehicles with any occupants having an Injury Severity Score (ISS) of 15 or greater were defined as those requiring victims to be treated at a trauma center. The performance of each model was evaluated using cross-validation. Cross-validation assesses how a model will perform in the future given new data not used for model training. The crash ΔV (change in velocity during the crash), damage side (struck side of the vehicle), seat belt use, vehicle body type, number of events, occupant age, and occupant sex were used as predictors in each model. Logistic regression slightly outperformed the machine learning algorithms based on sensitivity and specificity of the models. Previous studies on AACN risk curves used the same data to train and test the power of the models, and as a result had higher sensitivity compared to the cross-validated results from this study. Future studies should account for future data, for example by using cross-validation, or they risk presenting optimistic predictions of field performance. Past algorithms have been criticized for relying on age and sex, which are difficult to measure by vehicle sensors, and for inaccuracies in classifying damage side. The models with accurate damage side and including age/sex did outperform models with less accurate damage side and without age/sex, but the differences were small, suggesting that the success of AACN is not reliant on these predictors.
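    The cross-validation discipline the authors advocate, holding every sample out exactly once so performance is measured on data unseen during fitting, can be sketched as a shuffled k-fold splitter:

```python
import random

def kfold(n, k=5, seed=0):
    """Shuffled k-fold split over n samples. Returns k (train, test)
    pairs of index lists; every index appears in exactly one test fold,
    and never in the matching training fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [([j for j in idx if j not in fold], fold) for fold in folds]
```

    Fitting on each training fold and scoring sensitivity/specificity on the held-out fold, then averaging across folds, avoids the optimistic bias of testing on the training data that the abstract criticizes.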

  3. Groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1989-01-01

    By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
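    The Huchra and Geller objective group finder is a friends-of-friends percolation: galaxies closer than a linking length join the same group. A toy fixed-linking-length sketch with union-find (the published algorithm additionally scales the linking parameters with survey depth and uses separate sky/velocity criteria, all omitted here):

```python
def friends_of_friends(points, link):
    """Group points so that any pair closer than `link` shares a group,
    directly or through a chain of neighbors. Returns groups with three
    or more members, as in the catalog described above."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
            if dist < link:
                union(i, j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= 3]
```

    The chaining behavior is the defining feature: two points farther apart than the linking length still end up grouped if a bridge of closer points connects them.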

  4. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger

    2007-12-01

    Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (Center for Art and Media) in Karlsruhe.

  5. Building of Reusable Reverse Logistics Model and its Optimization Considering the Decision of Backorder or Next Arrival of Goods

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol

    This paper deals with building a reusable reverse logistics model that considers the decision between backorder and next arrival of goods. An optimization method is proposed to minimize the transportation cost and the volume of backorders or next arrivals of goods caused by Just-in-Time delivery at the final delivery stage between the manufacturer and the processing center. Sub-optimal delivery routes are determined through optimization using a priority-based genetic algorithm and a hybrid genetic algorithm. Based on a case study of a distilling and sales company in Busan, Korea, the new reusable reverse logistics model for empty bottles is built and the effectiveness of the proposed method is verified.

  6. Development of the L-1011 four-dimensional flight management system

    NASA Technical Reports Server (NTRS)

    Lee, H. P.; Leffler, M. F.

    1984-01-01

    The development of 4-D guidance and control algorithms for the L-1011 Flight Management System is described. Four-D flight management is a concept by which an aircraft's flight is optimized along the 3-D path within the constraints of today's ATC environment, while its arrival time is controlled to fit into the air traffic flow without incurring or causing delays. The methods developed herein were designed to be compatible with the time-based en route metering techniques that were recently developed by the Dallas/Fort Worth and Denver Air Route Traffic Control Centers. The ensuing development of the 4-D guidance algorithms, the necessary control laws, and the operational procedures is discussed. Results of computer simulation evaluation of the guidance algorithms and control laws are presented, along with a description of the software development procedures utilized.

  7. Comparing Methods for Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Lai, Chok Fung

    2011-01-01

    This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to reduce the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.

  8. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  9. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions for building a practical working system are investigated.
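    The position/velocity estimation step can be illustrated with a 1-D constant-velocity Kalman filter (a linear sketch; the paper uses an extended Kalman filter on full 3-D world coordinates, and all noise values below are illustrative):

```python
def kalman_cv(zs, dt=1.0, q=0.01, r=0.5):
    """Track position and velocity from noisy position measurements zs
    with a constant-velocity model. q: (simplified) process noise added
    to the covariance diagonal; r: measurement noise variance."""
    x, v = zs[0], 0.0                 # state estimate [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    for z in zs[1:]:
        # predict: x' = x + dt*v, v' = v; P' = F P F^T + Q
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the position measurement z (H = [1, 0])
        S = P[0][0] + r                       # innovation variance
        k0, k1 = P[0][0] / S, P[1][0] / S     # Kalman gain
        y = z - x                             # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v
```

    Even though only position is measured, the filter recovers velocity from the correlation the motion model induces between the two state components.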

  10. Development of Algorithms for Control of Humidity in Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.

    2003-01-01

    Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and de-humidifier. Algorithms were based upon proportional-integral-differential (PID) and Fuzzy Logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC; Ultimate Ethernet brain and assorted input/output modules, OPTO-22), which performed the monitoring and control logic processing, as well as the physical control of the devices that affected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have potential to be used not only with autonomous PLCs but could also be implemented within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
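    A minimal sketch of the scheme described: vapor pressure computed from temperature and relative humidity (using the Tetens saturation formula, one common choice; the report does not specify which) as the process variable, and a PID step producing time-proportioned duty cycles for the humidifier and de-humidifier. The gains are illustrative, not the report's tuning:

```python
import math

def vapor_pressure_kpa(temp_c, rh_pct):
    """Actual vapor pressure (kPa) from air temperature and relative
    humidity, via the Tetens saturation-vapor-pressure formula."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return es * rh_pct / 100.0

def pid_step(state, setpoint_vp, measured_vp, dt, kp=2.0, ki=0.1, kd=0.5):
    """One PID step on the vapor-pressure error. Returns duty cycles in
    [0, 1] for the humidifier (positive control effort) and the
    de-humidifier (negative control effort) over the next time window."""
    err = setpoint_vp - measured_vp
    state["integral"] += err * dt
    deriv = (err - state["prev_err"]) / dt
    state["prev_err"] = err
    out = kp * err + ki * state["integral"] + kd * deriv
    out = max(-1.0, min(1.0, out))       # clamp to +/-100% duty
    humidifier_duty = max(0.0, out)      # time-proportioned outputs:
    dehumidifier_duty = max(0.0, -out)   # fraction of window the device is on
    return humidifier_duty, dehumidifier_duty
```

    Time-proportioning turns the continuous PID output into on/off actuation: a duty of 0.3 means the humidifier runs for 30% of each control window.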

  11. Speech enhancement based on modified phase-opponency detectors

    NASA Astrophysics Data System (ADS)

    Deshmukh, Om D.; Espy-Wilson, Carol Y.

    2005-09-01

    A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th Meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. Refinements in the design of the MPO filters were recently made that take advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical-noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral-subtraction-based methods. [Work supported by NSF BCS0236707.]

  12. A multi-dimensional, energy- and charge-conserving, nonlinearly implicit, electromagnetic Vlasov–Darwin particle-in-cell algorithm

    DOE PAGES

    Chen, G.; Chacón, L.

    2015-08-11

    For decades, the Vlasov–Darwin model has been recognized as attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for the Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. The algorithm conserves total energy, local charge, and canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.
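    The time-centered discretization at the heart of such schemes can be illustrated on a toy problem: an implicit-midpoint (Crank–Nicolson) particle push in a linear electrostatic field E(x) = -x, solved by Picard iteration, conserves the discrete energy exactly, unlike an explicit push. This sketch omits the field solve, sub-cycling, orbit averaging and preconditioning of the full algorithm.

    ```python
    # Toy time-centered (implicit midpoint) particle push for a unit-mass,
    # unit-charge particle in the field E(x) = -x. The implicit update is
    # solved by Picard (fixed-point) iteration, which converges quickly
    # for small dt; for this linear oscillator the scheme conserves the
    # discrete energy x^2 + v^2 exactly.
    def implicit_midpoint_push(x, v, dt, steps):
        for _ in range(steps):
            xn, vn = x, v
            for _ in range(50):                 # Picard iteration to convergence
                xm, vm = 0.5 * (x + xn), 0.5 * (v + vn)   # midpoint values
                xn = x + dt * vm
                vn = v + dt * (-xm)             # acceleration = E(x) = -x
            x, v = xn, vn
        return x, v
    ```

    Running many steps leaves the energy unchanged to rounding error, which is the discrete analogue of the exact conservation properties claimed for the full algorithm.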

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
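    The two-objective center-selection step can be sketched as follows: each previously evaluated point is scored by (expensive function value, negative minimum distance to the other evaluated points), both to be minimized, and the first non-dominated front supplies candidate centers. This is an illustrative reimplementation, not the authors' SOP code.

    ```python
    import math

    def min_distance(i, points):
        # minimum Euclidean distance from point i to any other evaluated point
        return min(math.dist(points[i], points[j])
                   for j in range(len(points)) if j != i)

    def dominates(a, b):
        # a dominates b if a is no worse in every objective and better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def first_front(points, fvals):
        # objectives: (function value, -min distance), both minimized, so the
        # front trades off good function values against good space-filling
        objs = [(fvals[i], -min_distance(i, points)) for i in range(len(points))]
        return [i for i in range(len(points))
                if not any(dominates(objs[j], objs[i])
                           for j in range(len(points)) if j != i)]
    ```

    Points with the best function values and points far from everything else both survive to the first front, which is exactly the exploitation/exploration balance the selection is after.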

  14. Global Climate Monitoring with the EOS PM-Platform's Advanced Microwave Scanning Radiometer (AMSR-E)

    NASA Technical Reports Server (NTRS)

    Spencer, Roy W.

    2002-01-01

    The Advanced Microwave Scanning Radiometer (AMSR-E) is being built by NASDA to fly on NASA's PM Platform (now called Aqua) in December 2000. This is in addition to a copy of AMSR that will be launched on Japan's ADEOS-II satellite in 2001. The AMSRs improve upon the window-frequency radiometer heritage of the SSM/I and SMMR instruments. Major improvements over those instruments include channels spanning the 6.9 GHz to 89 GHz frequency range and higher spatial resolution from a 1.6 m reflector (AMSR-E) and a 2.0 m reflector (ADEOS-II AMSR). The ADEOS-II AMSR also will have 50.3 and 52.8 GHz channels, providing sensitivity to lower-tropospheric temperature. NASA funds an AMSR-E Science Team to provide algorithms for the routine production of a number of standard geophysical products. These products will be generated by the AMSR-E Science Investigator-led Processing System (SIPS) at the Global Hydrology Resource Center (GHRC) in Huntsville, Alabama. While there is a separate NASDA-sponsored activity to develop algorithms and produce products from AMSR, as well as a joint (NASDA-NASA) AMSR Science Team activity, here I will review only the AMSR-E Team's algorithms and how they benefit from the new capabilities that AMSR-E will provide. The US Team's products will be archived at the National Snow and Ice Data Center (NSIDC).

  15. Evaluation of rapid HIV test kits on whole blood and development of rapid testing algorithm for voluntary testing and counseling centers in Ethiopia.

    PubMed

    Tegbaru, Belete; Messele, Tsehaynesh; Wolday, Dawit; Meles, Hailu; Tesema, Desalegn; Birhanu, Hiwot; Tesfaye, Girma; Bond, Kyle B; Martin, Robert; Rayfield, Mark A; Wuhib, Tadesse; Fekadu, Makonnen

    2004-10-01

    Five simple and rapid HIV antibody detection assays, viz. Determine, Capillus, Oraquick, Unigold and Hemastrip, were evaluated to examine their performance and to develop an alternative rapid-test-based testing algorithm for voluntary counseling and testing (VCT) in Ethiopia. All the kits were tested on whole blood, plasma and serum. The evaluation had three phases: primary lab review, piloting at point of service, and implementation. This report includes the results of the first two phases. A total of 2,693 specimens (both whole blood and plasma) were included in the evaluation. Results were compared to a double enzyme-linked immunosorbent assay (ELISA) system. Discordant EIA results were resolved using Western blot. The assays had very good sensitivities and specificities, 99-100%, at the two different phases of the evaluation. A 98-100% result agreement was obtained from those tested at VCT centers and the National Referral Laboratory for AIDS (NRLA) in the quality-control phase of the evaluation. A testing strategy yielding 100% [95% CI; 98.9-100.0] sensitivity was achieved by the sequential use of the three rapid test kits. Direct cost comparison showed that the serial testing algorithm reduces the cost of testing by over 30% compared to parallel testing in the current situation. Determine, Capillus/Oraquick (presence/absence of refrigeration) and Unigold were recommended as the screening, confirmation and tiebreaker tests, respectively.
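    The recommended serial strategy can be written out as a small decision function; the encoding (booleans for reactive/non-reactive, an "indeterminate" label for a discordant pair awaiting the tiebreaker) is an assumption of this sketch.

    ```python
    # Illustrative encoding of the serial testing strategy recommended in
    # the abstract: Determine as the screening test, Capillus or Oraquick
    # as the confirmatory test, and Unigold as the tiebreaker. The cost
    # saving of serial testing comes from stopping after a non-reactive
    # screening result.
    def serial_hiv_algorithm(determine, confirm, tiebreaker=None):
        """Each argument is True (reactive) or False (non-reactive)."""
        if not determine:
            return "negative"          # screening non-reactive: stop, report negative
        if confirm:
            return "positive"          # screening and confirmation both reactive
        if tiebreaker is None:
            return "indeterminate"     # discordant pair, tiebreaker still needed
        return "positive" if tiebreaker else "negative"
    ```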

  16. NASA Remote Sensing Data in Earth Sciences: Processing, Archiving, Distribution, Applications at the GES DISC

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.

    2005-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.

  17. An algorithm for enhanced formation flying of satellites in low earth orbit

    NASA Astrophysics Data System (ADS)

    Folta, David C.; Quinn, David A.

    1998-01-01

    With scientific objectives for Earth observation programs becoming more ambitious and spacecraft becoming more autonomous, the need for innovative technical approaches to achieving and maintaining formations of spacecraft has come to the forefront. The trend to develop small low-cost spacecraft has led many scientists to recognize the advantage of flying several spacecraft in formation to achieve the correlated instrument measurements formerly possible only by flying many instruments on a single large platform. Yet formation flying imposes additional complications on orbit maintenance, especially when each spacecraft has its own orbit requirements. However, advances in automation and technology proposed by the Goddard Space Flight Center (GSFC) allow more of the burden in maneuver planning and execution to be placed onboard the spacecraft, mitigating some of the associated operational concerns. The purpose of this paper is to present the algorithm developed by GSFC's Guidance, Navigation, and Control Center (GNCC) for formation flying of the low-Earth-orbiting spacecraft that are part of the New Millennium Program (NMP). This system will be implemented as closed-loop flight code onboard the NMP Earth Orbiter-1 (EO-1) spacecraft. Results of this development can be used to determine the appropriateness of formation flying for a particular case as well as its operational impacts. Simulation results using this algorithm integrated into an autonomous `fuzzy logic' control system called AutoCon™ are presented.

  18. Exploring a Physically Based Tool for Lightning Cessation: A Preliminary Study

    NASA Technical Reports Server (NTRS)

    Schultz, Elise V.; Petersen, Walter A.; Carey, Lawrence D.; Deierling, Wiebke

    2010-01-01

    The University of Alabama in Huntsville (UA Huntsville) and NASA's Marshall Space Flight Center are collaborating with the 45th Weather Squadron (45WS) at Cape Canaveral Air Force Station (CCAFS) to enable improved nowcasting of lightning cessation. The project centers on use of dual-polarimetric radar capabilities, and in particular, the new C-band dual-polarimetric weather radar acquired by the 45WS. Special emphasis is placed on the development of a physically based operational algorithm to predict lightning cessation. While previous studies have developed statistically based lightning cessation algorithms, we believe that dual-polarimetric radar variables offer the possibility to improve existing algorithms through the inclusion of physically meaningful trends reflecting interactions between in-cloud electric fields and microphysics. Specifically, decades of polarimetric radar research using propagation differential phase has demonstrated the presence of distinct phase and ice crystal alignment signatures in the presence of strong electric fields associated with lightning. One question yet to be addressed is: To what extent can these ice-crystal alignment signatures be used to nowcast the cessation of lightning activity in a given storm? Accordingly, data from the UA Huntsville Advanced Radar for Meteorological and Operational Research (ARMOR) along with the North Alabama Lightning Mapping Array are used in this study to investigate the radar signatures present before and after lightning cessation. A summary of preliminary results will be presented.

  19. Exploring a Physically Based Tool for Lightning Cessation: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Schultz, Elise V.; Petersen, Walter A.; Carey, Lawrence D.; Buechler, Dennis E.; Gatlin, Patrick N.

    2010-01-01

    The University of Alabama in Huntsville (UA Huntsville) and NASA's Marshall Space Flight Center are collaborating with the 45th Weather Squadron (45WS) at Cape Canaveral Air Force Station (CCAFS) to enable improved nowcasting of lightning cessation. The project centers on use of dual-polarimetric radar capabilities, and in particular, the new C-band dual-polarimetric weather radar acquired by the 45WS. Special emphasis is placed on the development of a physically based operational algorithm to predict lightning cessation. While previous studies have developed statistically based lightning cessation algorithms, we believe that dual-polarimetric radar variables offer the possibility to improve existing algorithms through the inclusion of physically meaningful trends reflecting interactions between in-cloud electric fields and microphysics. Specifically, decades of polarimetric radar research using propagation differential phase has demonstrated the presence of distinct phase and ice crystal alignment signatures in the presence of strong electric fields associated with lightning. One question yet to be addressed is: To what extent can these ice-crystal alignment signatures be used to nowcast the cessation of lightning activity in a given storm? Accordingly, data from the UA Huntsville Advanced Radar for Meteorological and Operational Research (ARMOR) along with the North Alabama Lightning Mapping Array are used in this study to investigate the radar signatures present before and after lightning cessation. A summary of preliminary results will be presented.

  20. Limited angle C-arm tomosynthesis reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it, based on a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For the reconstructed images, 3D mesh plots and 2D line profiles of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact and portable and can avoid moving patients, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology, making its evaluation for such applications important.
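    The simplest of the four methods, unfiltered back projection, can be sketched for a point object over the same ±20° range; the parallel-beam 2D geometry here is a simplifying assumption relative to the real C-arm setup.

    ```python
    import numpy as np

    # Minimal unfiltered back-projection (BP) sketch for a limited angular
    # range: a point object at the origin is forward-projected over
    # +/-20 degrees (each view is a delta at detector coordinate s = 0),
    # and each view is smeared back across the image grid.
    def backproject_point(n=33, angles_deg=np.linspace(-20, 20, 25)):
        half = n // 2
        xs = np.arange(n) - half
        img = np.zeros((n, n))
        for a in np.deg2rad(angles_deg):
            for iy, y in enumerate(xs):
                for ix, x in enumerate(xs):
                    s = x * np.cos(a) + y * np.sin(a)   # detector coordinate of this pixel
                    if abs(s) < 0.5:                    # pixel lies in the ray's detector bin
                        img[iy, ix] += 1.0
        return img / len(angles_deg)
    ```

    Running this shows the classic limited-angle behavior: the point is recovered sharply across the scan direction but smeared along the depth direction, which is why slice thickness matters in tomosynthesis.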

  1. The First National Student Conference: NASA University Research Centers at Minority Institutions

    NASA Technical Reports Server (NTRS)

    Daso, Endwell O. (Editor); Mebane, Stacie (Editor)

    1997-01-01

    The conference includes contributions from 13 minority universities with NASA University Research Centers. Topics discussed include: leadership, survival strategies, life support systems, food systems, simulated hypergravity, chromium diffusion doping, radiation effects on dc-dc converters, metal oxide glasses, crystal growth of BiI3, science and communication on wheels, semiconductor thin films, numerical solution of random algebraic equations, fuzzy logic control, spatial resolution of satellite images, programming language development, nitric oxide in the thermosphere and mesosphere, high performance polyimides, crossover control in genetic algorithms, hyperthermal ion scattering, etc.

  2. Iterative User-Centered Design of a Next Generation Patient Monitoring System for Emergency Medical Response

    PubMed Central

    Gao, Tia; Kim, Matthew I.; White, David; Alm, Alexander M.

    2006-01-01

    We have developed a system for real-time patient monitoring during large-scale disasters. Our system is designed with scalable algorithms to monitor large numbers of patients, an intuitive interface to support the overwhelmed responders, and ad-hoc mesh networking capabilities to maintain connectivity to patients in the chaotic settings. This paper describes an iterative approach to user-centered design adopted to guide development of our system. This system is a part of the Advanced Health and Disaster Aid Network (AID-N) architecture. PMID:17238348

  3. Sensors Locate Radio Interference

    NASA Technical Reports Server (NTRS)

    2009-01-01

    After receiving a NASA Small Business Innovation Research (SBIR) contract from Kennedy Space Center, Soneticom Inc., based in West Melbourne, Florida, created algorithms for time difference of arrival and radio interferometry, which it used in its Lynx Location System (LLS) to locate electromagnetic interference that can disrupt radio communications. Soneticom is collaborating with the Federal Aviation Administration (FAA) to install and test the LLS at its field test center in New Jersey in preparation for deploying the LLS at commercial airports. The software collects data from each sensor in order to compute the location of the interfering emitter.
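    Time-difference-of-arrival localization of an emitter, as mentioned above, can be sketched with a brute-force least-squares grid search; real systems use closed-form or iterative solvers, and the sensor layout, units, and grid here are assumptions.

    ```python
    import math

    C = 299.792458  # propagation speed of radio waves, meters per microsecond

    # Toy TDOA localization: given arrival-time differences (in microseconds)
    # at several sensors relative to sensor 0, grid-search the emitter
    # position (in meters) that best explains the measured differences.
    def locate(sensors, tdoas_us, grid=range(0, 101)):
        def residual(p):
            d0 = math.dist(p, sensors[0])
            # each TDOA predicts a range difference C * t relative to sensor 0
            return sum((math.dist(p, s) - d0 - C * t) ** 2
                       for s, t in zip(sensors[1:], tdoas_us))
        return min(((x, y) for x in grid for y in grid), key=residual)
    ```

    Each TDOA constrains the emitter to a hyperbola; with three or more differences the intersection pins down the position, which the grid search recovers as the minimum-residual cell.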

  4. GMTI Direction of Arrival Measurements from Multiple Phase Centers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.

    2015-03-01

    Ground Moving Target Indicator (GMTI) radar attempts to detect and locate targets with unknown motion. Very slow-moving targets are difficult to locate in the presence of surrounding clutter. This necessitates multiple antenna phase centers (or equivalent) to offer independent Direction of Arrival (DOA) measurements. DOA accuracy and precision generally remain dependent on target Signal-to-Noise Ratio (SNR), Clutter-to-Noise Ratio (CNR), scene topography, interfering signals, and a number of antenna parameters. This is true even for adaptive techniques like Space-Time Adaptive Processing (STAP) algorithms.
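    The basic two-phase-center DOA measurement can be sketched with the standard interferometer relation dphi = 2*pi*d*sin(theta)/lambda, which is unambiguous while |dphi| < pi (i.e., for baselines up to half a wavelength at broadside); the numbers in the test are assumptions for illustration.

    ```python
    import math

    # Two-phase-center interferometric DOA sketch: a target at angle theta
    # (radians from boresight) seen by phase centers separated by baseline d
    # produces a phase difference dphi = 2*pi*d*sin(theta)/lambda.
    def phase_from_doa(theta, d, wavelength):
        return 2.0 * math.pi * d * math.sin(theta) / wavelength

    # Inverting the relation recovers the direction of arrival from the
    # measured phase difference (valid while |dphi| < pi, i.e. no ambiguity).
    def doa_from_phase(dphi, d, wavelength):
        return math.asin(dphi * wavelength / (2.0 * math.pi * d))
    ```

    In practice the measured phase is noisy, which is why DOA precision inherits the SNR/CNR dependence the abstract describes.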

  5. Anomaly Detection and Life Pattern Estimation for the Elderly Based on Categorization of Accumulated Data

    NASA Astrophysics Data System (ADS)

    Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa

    2011-06-01

    We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system for such people, we deploy pyroelectric sensors in the house and measure the person's activities continuously in order to grasp the person's life pattern. The data are transferred successively to the operation center and displayed precisely to the nurses there, who decide whether the data indicate an anomaly. In the system, people whose life features resemble each other are categorized into the same group. Anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. This algorithm is based on an "anomaly score" that is computed from the activeness of the person, which is approximately proportional to the frequency of sensor responses in a minute. The "anomaly score" is calculated as the difference between the present activeness and the long-term average of past activeness. Thus, the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities, such as rising and going out. The estimates are shown to the nurses together with the residents' "anomaly scores." The nurses can understand the residents' health conditions by combining these two pieces of information.
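    The "anomaly score" described above reduces to a simple computation; the threshold value here is an assumption for illustration.

    ```python
    # Sketch of the abstract's anomaly score: activeness is the per-minute
    # sensor response frequency, and the score is the difference between
    # the current activeness and its long-term average over past activeness.
    def anomaly_score(current_activeness, past_activeness):
        baseline = sum(past_activeness) / len(past_activeness)
        return current_activeness - baseline

    # An anomaly event is flagged when the score's magnitude exceeds a
    # threshold (the value 5.0 is an assumed placeholder).
    def is_anomaly(score, threshold=5.0):
        return abs(score) > threshold
    ```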

  6. Global Soil Moisture from the Aquarius/SAC-D Satellite: Description and Initial Assessment

    NASA Technical Reports Server (NTRS)

    Bindlish, Rajat; Jackson, Thomas; Cosh, Michael; Zhao, Tianjie; O'Neill, Peggy

    2015-01-01

    Aquarius satellite observations over land offer a new resource for measuring soil moisture from space. Although Aquarius was designed for ocean salinity mapping, our objective in this investigation is to exploit the large amount of land observations that Aquarius acquires and extend the mission scope to include the retrieval of surface soil moisture. The soil moisture retrieval algorithm development focused on using only the radiometer data because of the extensive heritage of passive microwave retrieval of soil moisture. The single channel algorithm (SCA) was implemented using the Aquarius observations to estimate surface soil moisture. Aquarius radiometer observations from three beams (after bias/gain modification) along with the National Centers for Environmental Prediction model forecast surface temperatures were then used to retrieve soil moisture. Ancillary data inputs required for using the SCA are vegetation water content, land surface temperature, and several soil and vegetation parameters based on land cover classes. The resulting global spatial patterns of soil moisture were consistent with the precipitation climatology and with soil moisture from other satellite missions (Advanced Microwave Scanning Radiometer for the Earth Observing System and Soil Moisture Ocean Salinity). Initial assessments were performed using in situ observations from the U.S. Department of Agriculture Little Washita and Little River watershed soil moisture networks. Results showed good performance by the algorithm for these land surface conditions for the period of August 2011-June 2013 (RMSE = 0.031 m³/m³, bias = -0.007 m³/m³, and R = 0.855). This radiometer-only soil moisture product will serve as a baseline for continuing research on both active and combined passive-active soil moisture algorithms. The products are routinely available through the National Aeronautics and Space Administration data archive at the National Snow and Ice Data Center.
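    A heavily simplified single channel algorithm (SCA) chain can be sketched as: brightness temperature → emissivity (given surface temperature) → smooth-surface reflectivity → dielectric constant via the nadir Fresnel relation → soil moisture via the Topp et al. (1980) polynomial. Ignoring vegetation, surface roughness, and incidence angle is an assumption of this sketch; the real algorithm corrects for all three.

    ```python
    import math

    # Simplified SCA-style retrieval (bare, smooth soil at nadir assumed):
    #   emissivity  e = TB / Ts
    #   reflectivity r = 1 - e, then invert r = ((sqrt(eps)-1)/(sqrt(eps)+1))^2
    #   soil moisture from eps via the Topp et al. (1980) polynomial.
    def retrieve_soil_moisture(tb_k, surface_temp_k):
        emissivity = tb_k / surface_temp_k
        r = 1.0 - emissivity                          # smooth-surface reflectivity
        sqrt_eps = (1.0 + math.sqrt(r)) / (1.0 - math.sqrt(r))
        eps = sqrt_eps ** 2                           # soil dielectric constant
        # Topp polynomial: dielectric constant -> volumetric soil moisture (m^3/m^3)
        return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3
    ```

    A round trip from an assumed dielectric constant through the forward model and back illustrates the inversion; real L-band retrievals must first strip the vegetation and roughness contributions from the measured brightness temperature.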

  7. Automatic Detection of Diabetic Retinopathy and Age-Related Macular Degeneration in Digital Fundus Images

    PubMed Central

    Barriga, E. Simon; Murray, Victor; Nemeth, Sheila; Crammer, Robert; Bauman, Wendall; Zamora, Gilberto; Pattichis, Marios S.; Soliz, Peter

    2011-01-01

    Purpose. To describe and evaluate the performance of an algorithm that automatically classifies images with pathologic features commonly found in diabetic retinopathy (DR) and age-related macular degeneration (AMD). Methods. Retinal digital photographs (N = 2247) of three fields of view (FOV) were obtained of the eyes of 822 patients at two centers: The Retina Institute of South Texas (RIST, San Antonio, TX) and The University of Texas Health Science Center San Antonio (UTHSCSA). Ground truth was provided for the presence of pathologic conditions, including microaneurysms, hemorrhages, exudates, neovascularization in the optic disc and elsewhere, drusen, abnormal pigmentation, and geographic atrophy. The algorithm was used to report on the presence or absence of disease. A detection threshold was applied to obtain different values of sensitivity and specificity with respect to ground truth and to construct a receiver operating characteristic (ROC) curve. Results. The system achieved an average area under the ROC curve (AUC) of 0.89 for detection of DR and of 0.92 for detection of sight-threatening DR (STDR). With a fixed specificity of 0.50, the system's sensitivity ranged from 0.92 for all DR cases to 1.00 for clinically significant macular edema (CSME). Conclusions. A computer-aided algorithm was trained to detect different types of pathologic retinal conditions. The cases of hard exudates within 1 disc diameter (DD) of the fovea (surrogate for CSME) were detected with very high accuracy (sensitivity = 1, specificity = 0.50), whereas mild nonproliferative DR was the most challenging condition (sensitivity = 0.92, specificity = 0.50). The algorithm was also tested on images with signs of AMD, achieving a performance of AUC of 0.84 (sensitivity = 0.94, specificity = 0.50). PMID:21666234

  8. Automatic detection of diabetic retinopathy and age-related macular degeneration in digital fundus images.

    PubMed

    Agurto, Carla; Barriga, E Simon; Murray, Victor; Nemeth, Sheila; Crammer, Robert; Bauman, Wendall; Zamora, Gilberto; Pattichis, Marios S; Soliz, Peter

    2011-07-29

    To describe and evaluate the performance of an algorithm that automatically classifies images with pathologic features commonly found in diabetic retinopathy (DR) and age-related macular degeneration (AMD). Retinal digital photographs (N = 2247) of three fields of view (FOV) were obtained of the eyes of 822 patients at two centers: The Retina Institute of South Texas (RIST, San Antonio, TX) and The University of Texas Health Science Center San Antonio (UTHSCSA). Ground truth was provided for the presence of pathologic conditions, including microaneurysms, hemorrhages, exudates, neovascularization in the optic disc and elsewhere, drusen, abnormal pigmentation, and geographic atrophy. The algorithm was used to report on the presence or absence of disease. A detection threshold was applied to obtain different values of sensitivity and specificity with respect to ground truth and to construct a receiver operating characteristic (ROC) curve. The system achieved an average area under the ROC curve (AUC) of 0.89 for detection of DR and of 0.92 for detection of sight-threatening DR (STDR). With a fixed specificity of 0.50, the system's sensitivity ranged from 0.92 for all DR cases to 1.00 for clinically significant macular edema (CSME). A computer-aided algorithm was trained to detect different types of pathologic retinal conditions. The cases of hard exudates within 1 disc diameter (DD) of the fovea (surrogate for CSME) were detected with very high accuracy (sensitivity = 1, specificity = 0.50), whereas mild nonproliferative DR was the most challenging condition (sensitivity = 0.92, specificity = 0.50). The algorithm was also tested on images with signs of AMD, achieving a performance of AUC of 0.84 (sensitivity = 0.94, specificity = 0.50).
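    Both versions of this record evaluate the detector by sweeping a threshold over the algorithm's output score to build an ROC curve and summarize it by the area under the curve (AUC); that evaluation step can be reimplemented minimally with the rank-sum formulation.

    ```python
    # AUC via the rank-sum (Mann-Whitney U) formulation: the AUC equals the
    # probability that a randomly chosen positive case scores higher than a
    # randomly chosen negative case, with ties counted as half.
    def roc_auc(scores, labels):
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    This gives the same number as integrating the empirical ROC curve, without having to enumerate thresholds explicitly.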

  9. Ultrasonic data compression via parameter estimation.

    PubMed

    Cardoso, Guilherme; Saniie, Jafar

    2005-02-01

    Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. The precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1 - 5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.
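    The five-parameter echo model behind the quoted compression ratio (1 - 5N/J) can be sketched directly; the Gaussian-envelope (Gabor) form and parameter names are standard for ultrasonic echo modeling, though the paper's modified-CWT estimator itself is not reproduced here.

    ```python
    import numpy as np

    # Gabor echo model: each echo is fully described by 5 parameters
    # (bandwidth factor alpha, time-of-arrival tau, center frequency fc,
    # phase phi, amplitude beta), which is where the 5N in the compression
    # ratio comes from.
    def gabor_echo(t, alpha, tau, fc, phi, beta):
        return beta * np.exp(-alpha * (t - tau) ** 2) * np.cos(
            2.0 * np.pi * fc * (t - tau) + phi)

    # Storing 5 parameters per echo instead of J raw samples gives the
    # paper's compression ratio 1 - 5N/J.
    def compression_ratio(num_echoes, num_samples):
        return 1.0 - 5.0 * num_echoes / num_samples
    ```

    A signal reconstructed as a sum of such echoes needs only the parameter list, which is also what makes the representation useful for denoising.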

  10. Development of a clinical algorithm for treating urethral strictures based on a large retrospective single-center cohort

    PubMed Central

    Tolkach, Yuri; Herrmann, Thomas; Merseburger, Axel; Burchardt, Martin; Wolters, Mathias; Huusmann, Stefan; Kramer, Mario; Kuczyk, Markus; Imkamp, Florian

    2017-01-01

    Aim: To analyze clinical data from male patients treated with urethrotomy and to develop a clinical decision algorithm. Materials and methods: Two large cohorts of male patients with urethral strictures were included in this retrospective study, a historical cohort (1985-1995, n=491) and a modern cohort (1996-2006, n=470). All patients were treated with repeated internal urethrotomies (up to 9 sessions). Clinical outcomes were analyzed and systemized as a clinical decision algorithm. Results: The overall recurrence rates after the first urethrotomy were 32.4% and 23% in the historical and modern cohorts, respectively. In many patients the second procedure was also effective, with a third procedure feasible in selected patients. Strictures with a length ≤ 2 cm should be treated according to their initial length. In patients with strictures ≤ 1 cm, a second session could be recommended in all patients, but not for penile strictures, strictures related to transurethral operations, or patients who were 31-50 years of age. A third session could be effective in selected cases of idiopathic bulbar strictures. For strictures with a length of 1-2 cm, a second operation is possible for solitary low-grade bulbar strictures, given that the age is > 50 years and the etiology is not post-transurethral resection of the prostate. For penile strictures of 1-2 cm, urethrotomy could be attempted in solitary but not in high-grade strictures. Conclusions: We present data on the treatment of urethral strictures with urethrotomy from a single center. Based on the analysis, a clinical decision algorithm is suggested, which could be a reliable basis for everyday clinical practice. PMID:28529689
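    Part of the reported decision logic (the second-urethrotomy rules for strictures ≤ 1 cm) can be encoded as a small function; the argument encoding is an assumption of this sketch, and the full published algorithm covers more cases.

    ```python
    # Illustrative encoding of the abstract's rules for recommending a
    # second urethrotomy in strictures <= 1 cm: recommended in general,
    # but not for penile strictures, strictures related to transurethral
    # operations, or patients aged 31-50.
    def second_urethrotomy_recommended(length_cm, location, etiology, age):
        if length_cm > 1.0:
            return None  # outside this rule set; the 1-2 cm rules apply instead
        if location == "penile":
            return False
        if etiology == "transurethral-operation":
            return False
        if 31 <= age <= 50:
            return False
        return True
    ```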

  11. Investigation on location dependent detectability in cone beam CT images with uniform and anatomical backgrounds

    NASA Astrophysics Data System (ADS)

    Han, Minah; Baek, Jongduk

    2017-03-01

    We investigate location-dependent lesion detectability of cone beam computed tomography images for different background types (i.e., uniform and anatomical), image planes (i.e., transverse and longitudinal), and slice thicknesses. Anatomical backgrounds are generated using a power-law spectrum of breast anatomy, 1/f^3. A spherical object with a 5 mm diameter is used as the signal. CT projection data are acquired by forward projection of the uniform and anatomical backgrounds with and without the signal. Then, the projection data are reconstructed using the FDK algorithm. Detectability is evaluated by a channelized Hotelling observer with dense difference-of-Gaussian channels. For the uniform background, off-centered images yield higher detectability than iso-centered images in the transverse plane, while in the longitudinal plane the detectability of iso-centered and off-centered images is similar. For the anatomical background, off-centered images yield higher detectability in the transverse plane, while iso-centered images yield higher detectability in the longitudinal plane when the slice thickness is smaller than 1.9 mm. The optimal slice thickness is 3.8 mm for all tasks, and the transverse plane at the off-center (iso-center and off-center) produces the highest detectability for the uniform (anatomical) background.
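    A channelized Hotelling observer of the kind used for this evaluation can be sketched as follows (a generic implementation of the channelized d' figure of merit; the channel matrix and data shapes in the usage below are illustrative assumptions, not the study's setup):

    ```python
    import numpy as np

    def cho_detectability(signal_present, signal_absent, channels):
        """Channelized Hotelling observer detectability (d') sketch.
        signal_present / signal_absent: (n_images, n_pixels) arrays of image data;
        channels: (n_pixels, n_channels) channel matrix (e.g. difference-of-Gaussian
        channels flattened to pixel space)."""
        v1 = signal_present @ channels            # channel outputs, signal present
        v0 = signal_absent @ channels             # channel outputs, signal absent
        dv = v1.mean(axis=0) - v0.mean(axis=0)    # mean channel-output difference
        # average intra-class covariance of the channel outputs
        S = np.atleast_2d(0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False)))
        return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
    ```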

  12. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching.
The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel-level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid region-of-interest images is achieved with bi-cubic interpolation. Correlation matching with a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); the technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used. This is achieved at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between computational speed and the desired sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene).
The results are planned to be submitted to the Optics Express journal.
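The parabola peak-finding step discussed above can be sketched in one dimension (following the standard three-point vertex formula; the function name and the test signal are illustrative):

```python
import numpy as np

def parabola_subpixel_peak(corr):
    """1-D parabola peak interpolation, a sketch of the sub-pixel peak-finding
    step: fit a parabola through the correlation maximum and its two
    neighbours and return the vertex position."""
    k = int(np.argmax(corr))
    k = min(max(k, 1), len(corr) - 2)   # keep the 3-point stencil in bounds
    ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
    # vertex of the parabola through (k-1, ym), (k, y0), (k+1, yp)
    return k + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
```

For a smooth, well-sampled peak the residual bias (the "pixel locking" discussed above) is small; it grows as the peak becomes narrower relative to the pixel grid.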

  13. Nonlinear Fourier algorithm applied to solving equations of gravitational gas dynamics

    NASA Technical Reports Server (NTRS)

    Kolosov, B. I.

    1979-01-01

    Two-dimensional gas flow problems were reduced to an approximating system of ordinary differential equations, which were solved by a standard procedure of the Runge-Kutta type. A theorem on the existence of stationary conical shock waves with the cone vertex at the gravitating center was proved.

  14. Planning Paths Through Singularities in the Center of Mass Space

    NASA Technical Reports Server (NTRS)

    Doggett, William R.; Messner, William C.; Juang, Jer-Nan

    1998-01-01

    The center of mass space is a convenient space for planning motions that minimize reaction forces at the robot's base or optimize the stability of a mechanism. A unique problem associated with path planning in the center of mass space is the potential existence of multiple center of mass images for a single Cartesian obstacle, since a single center of mass location can correspond to multiple robot joint configurations. The existence of multiple images results in a need either to maintain multiple center of mass obstacle maps or to update obstacle locations when the robot passes through a singularity, such as when it moves from an elbow-up to an elbow-down configuration. To illustrate the concepts presented in this paper, a path is planned for an example task requiring motion through multiple center of mass space maps. The objective of the path planning algorithm is to locate the bang-bang acceleration profile that minimizes the robot's base reactions in the presence of a single Cartesian obstacle. To simplify the presentation, only non-redundant robots are considered and joint non-linearities are neglected.

  15. Passive Microwave Algorithms for Sea Ice Concentration: A Comparison of Two Techniques

    NASA Technical Reports Server (NTRS)

    Comiso, Josefino C.; Cavalieri, Donald J.; Parkinson, Claire L.; Gloersen, Per

    1997-01-01

    The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both of which were developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic but larger disagreements in the seasonal regions and in summer. In some areas in the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%, whereas in other areas it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and the Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR), and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among results suggest the need for further validation studies.
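    For reference, the NASA Team algorithm is built on ratios of brightness temperatures such as the following (a minimal sketch; the operational algorithm's tie points, weather filters, and concentration equations are omitted):

    ```python
    def polarization_ratio(tb_19v, tb_19h):
        """Polarization ratio between the 19 GHz vertical and horizontal
        channels, one of the channel combinations used by the NASA Team
        algorithm (sketch only; tie points and weather filters omitted)."""
        return (tb_19v - tb_19h) / (tb_19v + tb_19h)

    def gradient_ratio(tb_37v, tb_19v):
        """Spectral gradient ratio between the 37 and 19 GHz vertical
        channels; brightness temperatures in kelvin."""
        return (tb_37v - tb_19v) / (tb_37v + tb_19v)
    ```

    Open water is strongly polarized at 19 GHz while consolidated ice is not, so the polarization ratio decreases as ice concentration increases; the gradient ratio helps separate ice types.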

  16. A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Summary Background Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in the digitized tissue microarray (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells usually present significant challenges for traditional segmentation algorithms. Objectives In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature and evaluated the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results The method was tested with 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU. The parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is shown to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139

  17. Reference set design for relational modeling of fuzzy systems

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    One of the keys to the successful relational modeling of fuzzy systems is the proper design of fuzzy reference sets, as has been discussed throughout the literature. In the framework of modeling a stochastic system, we analyze the problem numerically. First, we briefly describe the relational model and present the performance of the modeling in the most trivial case, in which the reference sets are triangle-shaped. Next, we present a known fuzzy reference set generator algorithm (FRSGA) which is based on the fuzzy c-means (Fc-M) clustering algorithm. In the second section of this chapter we improve the previous FRSGA by adding a constraint to the Fc-M algorithm (modified Fc-M or MFc-M): two cluster centers are forced to coincide with the domain limits. This is needed to obtain properly shaped extreme linguistic reference values. We apply this algorithm to uniformly discretized domains of the variables involved. The fuzziness of the reference sets produced by both Fc-M and MFc-M is determined by a parameter, which in our experiments is modified iteratively. Each time, a new model is created and its performance analyzed. For certain algorithm parameter values, both algorithms have shortcomings. To eliminate the drawbacks of these two approaches, we develop a completely new generator algorithm for reference sets, which we call Polyline. This algorithm and its performance are described in the last section. In all three cases, the modeling is performed for a variety of operators used in the inference engine and two defuzzification methods. Therefore, our results depend neither on the system model order nor on the experimental setup.
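    The MFc-M idea of pinning the extreme cluster centers to the domain limits can be sketched as a small modification of fuzzy c-means (an illustrative 1-D implementation; the paper's exact update rules and parameters may differ):

    ```python
    import numpy as np

    def constrained_fcm(x, n_clusters, m=2.0, n_iter=50):
        """Fuzzy c-means on a 1-D domain with the two extreme cluster centers
        pinned to the domain limits, mimicking the modified Fc-M (MFc-M) idea
        described above. A sketch, not the paper's implementation."""
        lo, hi = x.min(), x.max()
        centers = np.linspace(lo, hi, n_clusters)
        for _ in range(n_iter):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / (d ** (2.0 / (m - 1.0)))
            u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
            w = u ** m
            centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
            centers[0], centers[-1] = lo, hi           # constraint: pin extremes
            centers.sort()
        return centers
    ```

    The pinned endpoints guarantee that the leftmost and rightmost linguistic reference values peak exactly at the domain limits, which the unconstrained Fc-M does not.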

  18. Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network

    NASA Astrophysics Data System (ADS)

    Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) into optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, a promising networking architecture for interconnecting data centers, is one of the most widely used scenarios for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of optical service function (OSF) and a multi-layer SFC model. OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; an OSF can also be considered a special SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an auxiliary graph. Finally, we compare our algorithm with the Baseline algorithm in simulation. Simulation results show that our algorithm achieves superior performance to the Baseline algorithm under low-traffic-load conditions.

  19. Initial Results from Radiometer and Polarized Radar-Based Icing Algorithms Compared to In-Situ Data

    NASA Technical Reports Server (NTRS)

    Serke, David; Reehorst, Andrew L.; King, Michael

    2015-01-01

    In early 2015, a field campaign was conducted at the NASA Glenn Research Center in Cleveland, Ohio, USA. The purpose of the campaign was to test several prototype algorithms meant to detect the location and severity of in-flight icing (or icing aloft, as opposed to ground icing) within the terminal airspace. Terminal airspace for this project was defined as within 25 kilometers horizontal distance of the terminal, which in this instance is Hopkins International Airport in Cleveland. Two new and improved algorithms that utilize ground-based remote sensing instrumentation were developed and operated during the field campaign. The first is the 'NASA Icing Remote Sensing System', or NIRSS. The second algorithm is the 'Radar Icing Algorithm', or RadIA. In addition to these algorithms, which were derived from ground-based remote sensors, in-situ icing measurements of the profiles of super-cooled liquid water (SLW), collected with vibrating wire sondes attached to weather balloons, produced a comprehensive database for comparison. Key fields from the SLW-sondes include air temperature, humidity, and liquid water content, cataloged by time and 3-D location. This work gives an overview of the NIRSS and RadIA products, and results are compared to in-situ SLW-sonde data from one icing case study. The location and quantity of super-cooled liquid as measured by the in-situ probes provide a measure of the utility of these prototype hazard-sensing algorithms.

  20. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to that of images produced by the Feldkamp algorithm, which is used in conventional cone-beam CT. The real image of the head phantom exhibited image quality comparable to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.
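    The ramp-filtering step of the three-step pipeline can be sketched in the frequency domain (a minimal illustration; production implementations zero-pad and apodize the filter, and the offset-FOV projection weights are not shown):

    ```python
    import numpy as np

    def ramp_filter_row(projection_row):
        """Frequency-domain ramp filtering of one detector row, the middle
        step of the weight / ramp-filter / backproject pipeline described
        above. Minimal sketch: multiply the row's spectrum by |f|."""
        n = len(projection_row)
        freqs = np.fft.fftfreq(n)                  # normalized frequencies
        spectrum = np.fft.fft(projection_row)
        return np.real(np.fft.ifft(spectrum * np.abs(freqs)))
    ```

    Because the ramp is zero at DC, a constant projection row filters to zero, which is the behavior that removes the blur inherent to plain backprojection.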

  1. Robust Segmentation of Overlapping Cells in Histopathology Specimens Using Parallel Seed Detection and Repulsive Level Set

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559

  2. Administrative Algorithms to identify Avascular necrosis of bone among patients undergoing upper or lower extremity magnetic resonance imaging: a validation study.

    PubMed

    Barbhaiya, Medha; Dong, Yan; Sparks, Jeffrey A; Losina, Elena; Costenbader, Karen H; Katz, Jeffrey N

    2017-06-19

    Studies of the epidemiology and outcomes of avascular necrosis (AVN) require accurate case-finding methods. The aim of this study was to evaluate performance characteristics of a claims-based algorithm designed to identify AVN cases in administrative data. Using a centralized patient registry from a US academic medical center, we identified all adults aged ≥18 years who underwent magnetic resonance imaging (MRI) of an upper/lower extremity joint during the 1.5-year study period. A radiologist report confirming AVN on MRI served as the gold standard. We examined the sensitivity, specificity, positive predictive value (PPV) and positive likelihood ratio (LR+) of four algorithms (A-D) using International Classification of Diseases, 9th edition (ICD-9) codes for AVN. The algorithms ranged from least stringent (Algorithm A, requiring ≥1 ICD-9 code for AVN [733.4X]) to most stringent (Algorithm D, requiring ≥3 ICD-9 codes, each at least 30 days apart). Among 8200 patients who underwent MRI, 83 (1.0% [95% CI 0.78-1.22]) had AVN by gold standard. Algorithm A yielded the highest sensitivity (81.9%, 95% CI 72.0-89.5), with PPV of 66.0% (95% CI 56.0-75.1). The PPV of algorithm D increased to 82.2% (95% CI 67.9-92.0), although sensitivity decreased to 44.6% (95% CI 33.7-55.9). All four algorithms had specificities >99%. An algorithm that uses a single billing code to screen for AVN among those who had MRI has the highest sensitivity and is best suited for studies in which further medical record review confirming AVN is feasible. Algorithms using multiple billing codes are recommended for use in administrative databases when further AVN validation is not feasible.
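    The reported performance measures follow directly from a 2x2 confusion table; a sketch with hypothetical counts (chosen only to be consistent with the quoted percentages, not taken from the study's raw data):

    ```python
    def algorithm_metrics(tp, fp, fn, tn):
        """Standard diagnostic-accuracy measures from confusion-table counts:
        true/false positives (tp, fp) and false/true negatives (fn, tn)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)                          # positive predictive value
        lr_plus = sensitivity / (1.0 - specificity)   # positive likelihood ratio
        return sensitivity, specificity, ppv, lr_plus
    ```

    With illustrative counts of 68 true positives, 35 false positives, 15 false negatives, and 8082 true negatives (83 cases among 8200 patients), this reproduces roughly the Algorithm A figures quoted above: sensitivity ~81.9%, PPV ~66.0%, specificity >99%.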

  3. Algorithm for quantum-mechanical finite-nuclear-mass variational calculations of atoms with two p electrons using all-electron explicitly correlated Gaussian basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharkey, Keeper L.; Pavanello, Michele; Bubin, Sergiy

    2009-12-15

    A new algorithm for calculating the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions for quantum-mechanical calculations of atoms with two p electrons or a single d electron has been derived and implemented. The Hamiltonian used in the approach was obtained by rigorously separating the center-of-mass motion, and it explicitly depends on the finite mass of the nucleus. The approach was employed to perform test calculations on the isotopes of the carbon atom in their ground electronic states and to determine the finite-nuclear-mass corrections for these states.

  4. An Interface Tracking Algorithm for the Porous Medium Equation.

    DTIC Science & Technology

    1983-03-01

    (The scanned abstract of this report is OCR-garbled beyond reliable recovery; the record identifies it as a Mathematics Research Center technical summary report by E. DiBenedetto et al., University of Wisconsin-Madison, March 1983, on an interface tracking algorithm for the porous medium equation.)

  5. Runway Incursion Prevention for General Aviation Operations

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Prinzel, Lawrence J., III

    2006-01-01

    A Runway Incursion Prevention System (RIPS) and additional incursion detection algorithm were adapted for general aviation operations and evaluated in a simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in the fall of 2005. RIPS has been designed to enhance surface situation awareness and provide cockpit alerts of potential runway conflicts in order to prevent runway incidents while also improving operational capability. The purpose of the study was to evaluate the airborne incursion detection algorithms and associated alerting and airport surface display concepts for general aviation operations. This paper gives an overview of the system, simulation study, and test results.

  6. Runway Incursion Prevention System for General Aviation Operations

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Prinzel III, Lawrence J.

    2006-01-01

    A Runway Incursion Prevention System (RIPS) and additional incursion detection algorithm were adapted for general aviation operations and evaluated in a simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in the fall of 2005. RIPS has been designed to enhance surface situation awareness and provide cockpit alerts of potential runway conflicts in order to prevent runway incidents while also improving operational capability. The purpose of the study was to evaluate the airborne incursion detection algorithms and associated alerting and airport surface display concepts for general aviation operations. This paper gives an overview of the system, simulation study, and test results.

  7. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve prediction accuracy, a Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, a GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  8. Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.

    2008-01-01

    bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed to integrate measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models and to provide a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).
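    The measurement-update step at the core of such filters can be sketched in its linear form (a generic Kalman update, not bd Systems' implementation; the matrix shapes in the usage below are illustrative):

    ```python
    import numpy as np

    def kalman_update(x, P, z, H, R):
        """One measurement update of a Kalman filter in linearized form:
        blend the predicted state x (covariance P) with a sensor measurement
        z (model H, noise covariance R) according to their uncertainties."""
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```

    In an EKF, H and the prediction step come from linearizing the orbital-mechanics and sensor models about the current estimate; the update itself has this same form.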

  9. Functional Equivalence Acceptance Testing of FUN3D for Entry Descent and Landing Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Wood, William A.; Kleb, William L.; Alter, Stephen J.; Glass, Christopher E.; Padilla, Jose F.; Hammond, Dana P.; White, Jeffery A.

    2013-01-01

    The functional equivalence of the unstructured grid code FUN3D to the structured grid code LAURA (Langley Aerothermodynamic Upwind Relaxation Algorithm) is documented for applications of interest to the Entry, Descent, and Landing (EDL) community. Examples from an existing suite of regression tests are used to demonstrate the functional equivalence, encompassing various thermochemical models and vehicle configurations. Algorithm modifications required for the node-based unstructured grid code (FUN3D) to reproduce the functionality of the cell-centered structured code (LAURA) are also documented. Challenges associated with computation on tetrahedral grids versus computation on structured-grid-derived hexahedral systems are discussed.

  10. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate the analysis and design process by leveraging existing tools such as NASTRAN, ZAERO, and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  11. A cryptologic based trust center for medical images.

    PubMed

    Wong, S T

    1996-01-01

    To investigate practical solutions that can integrate cryptographic techniques and picture archiving and communication systems (PACS) to improve the security of medical images. The PACS at the University of California San Francisco Medical Center consolidate images and associated data from various scanners into a centralized data archive and transmit them to remote display stations for review and consultation purposes. The purpose of this study is to investigate the model of a digital trust center that integrates cryptographic algorithms and protocols seamlessly into such a digital radiology environment to improve the security of medical images. The timing performance of encryption, decryption, and transmission of the cryptographic protocols over 81 volumetric PACS datasets has been measured. Lossless data compression is also applied before the encryption. The transmission performance is measured against three types of networks of different bandwidths: narrow-band Integrated Services Digital Network, Ethernet, and OC-3c Asynchronous Transfer Mode. The proposed digital trust center provides a cryptosystem solution to protect the confidentiality and to determine the authenticity of digital images in hospitals. The results of this study indicate that diagnostic images such as x-rays and magnetic resonance images could be routinely encrypted in PACS. However, applying encryption in teleradiology and PACS is a tradeoff between communications performance and security measures. Many people are uncertain about how to integrate cryptographic algorithms coherently into existing operations of the clinical enterprise. This paper describes a centralized cryptosystem architecture to ensure image data authenticity in a digital radiology department. The system performance has been evaluated in a hospital-integrated PACS environment.
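    The compress-then-protect pipeline described above can be sketched with standard-library primitives (an illustrative stand-in: an HMAC provides the authenticity check here, since the record does not specify the trust center's exact cipher suite, and confidentiality encryption is omitted):

    ```python
    import hashlib
    import hmac
    import zlib

    def seal_image(pixel_bytes, key):
        """Sketch of the pipeline: lossless compression of the image bytes
        before cryptographic protection, with a keyed digest for authenticity."""
        compressed = zlib.compress(pixel_bytes, level=9)
        tag = hmac.new(key, compressed, hashlib.sha256).digest()
        return compressed, tag

    def verify_image(compressed, tag, key):
        """Recompute the HMAC and decompress only if the image is authentic."""
        expected = hmac.new(key, compressed, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("image failed authenticity check")
        return zlib.decompress(compressed)
    ```

    Compressing before protecting mirrors the study's design: it shrinks the payload that must be processed cryptographically and transmitted, which is exactly the performance/security trade-off the abstract discusses.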

  12. A cryptologic based trust center for medical images.

    PubMed Central

    Wong, S T

    1996-01-01

    OBJECTIVE: To investigate practical solutions that can integrate cryptographic techniques and picture archiving and communication systems (PACS) to improve the security of medical images. DESIGN: The PACS at the University of California San Francisco Medical Center consolidate images and associated data from various scanners into a centralized data archive and transmit them to remote display stations for review and consultation purposes. The purpose of this study is to investigate the model of a digital trust center that integrates cryptographic algorithms and protocols seamlessly into such a digital radiology environment to improve the security of medical images. MEASUREMENTS: The timing performance of encryption, decryption, and transmission of the cryptographic protocols over 81 volumetric PACS datasets has been measured. Lossless data compression is also applied before the encryption. The transmission performance is measured against three types of networks of different bandwidths: narrow-band Integrated Services Digital Network, Ethernet, and OC-3c Asynchronous Transfer Mode. RESULTS: The proposed digital trust center provides a cryptosystem solution to protect the confidentiality and to determine the authenticity of digital images in hospitals. The results of this study indicate that diagnostic images such as x-rays and magnetic resonance images could be routinely encrypted in PACS. However, applying encryption in teleradiology and PACS is a tradeoff between communications performance and security measures. CONCLUSION: Many people are uncertain about how to integrate cryptographic algorithms coherently into existing operations of the clinical enterprise. This paper describes a centralized cryptosystem architecture to ensure image data authenticity in a digital radiology department. The system performance has been evaluated in a hospital-integrated PACS environment. PMID:8930857

  13. Alterations in knee contact forces and centers in stance phase of gait: A detailed lower extremity musculoskeletal model.

    PubMed

    Marouane, H; Shirazi-Adl, A; Adouni, M

    2016-01-25

    Evaluation of the contact forces and contact centers of the tibiofemoral (TF) joint in gait has crucial biomechanical and pathological consequences; however, it faces difficulties and limitations in in vitro cadaver and in vivo imaging studies. The goal is to estimate total contact forces (CF) and the locations of contact centers (CC) on the medial and lateral plateaus using results computed by a validated finite element model simulating the stance phase of gait for normal subjects as well as subjects with osteoarthritis, varus-valgus malalignment, and altered posterior tibial slope. Using the foregoing contact results, six methods commonly used in the literature are also applied to estimate and compare CC locations at six periods of the stance phase (0%, 5%, 25%, 50%, 75%, and 100%). TF joint contact forces are greater on the lateral plateau very early in stance and on the medial plateau thereafter, during the 25-100% stance periods. Large excursions in CC location (>17 mm) are computed, especially on the medial plateau in the mediolateral direction. The various reported models estimate quite different CCs, with much greater variations (~15 mm) in the mediolateral direction on both plateaus. Compared with our accurately computed CCs taken as the gold standard, the centroid-of-contact-area algorithm yielded the smallest differences (except in the mediolateral direction on the medial plateau, at ~5 mm), whereas the contact-point and weighted-center-of-proximity algorithms produced the greatest overall differences. Large movements in CC location should be considered when attempting to estimate TF compartmental contact forces in gait. Copyright © 2015 Elsevier Ltd. All rights reserved.
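    The distinction between a centroid-of-contact-area estimate and a pressure-weighted contact-center estimate, representative of the kinds of methods compared above, can be illustrated with a minimal sketch over hypothetical nodal pressures (not the paper's finite element data):

```python
def contact_centers(pressure, coords):
    """Two simple contact-center estimates from nodal contact data.

    pressure: contact pressure at each node (0 means no contact)
    coords:   (x, y) plateau coordinates of each node
    """
    in_contact = [(p, xy) for p, xy in zip(pressure, coords) if p > 0.0]
    n = len(in_contact)
    # Centroid of contact area: unweighted mean over contacting nodes.
    cx = sum(xy[0] for _, xy in in_contact) / n
    cy = sum(xy[1] for _, xy in in_contact) / n
    # Pressure-weighted center: each node weighted by its pressure.
    total = sum(p for p, _ in in_contact)
    wx = sum(p * xy[0] for p, xy in in_contact) / total
    wy = sum(p * xy[1] for p, xy in in_contact) / total
    return (cx, cy), (wx, wy)
```

    When the pressure distribution is skewed, the two estimates diverge, which is one source of the mediolateral discrepancies reported above.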

  14. Approach to Managing MeaSURES Data at the GSFC Earth Science Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Vollmer, Bruce; Kempler, Steven J.; Ramapriyan, Hampapuram K.

    2009-01-01

    A major need stated by the NASA Earth science research strategy is to develop long-term, consistent, and calibrated data and products that are valid across multiple missions and satellite sensors (NASA solicitation for Making Earth System Data Records for Use in Research Environments (MEaSUREs), 2006-2010). Selected projects create long-term records of a given parameter, called Earth Science Data Records (ESDRs), based on mature algorithms that bring together continuous multi-sensor data. ESDRs and their associated algorithms, vetted by the appropriate community, are delivered to a NASA-affiliated data center for archiving, stewardship, and distribution. See http://measures-projects.gsfc.nasa.gov/ for more details. This presentation describes the approach of the NASA GSFC Earth Science Data and Information Services Center (GES DISC) to managing the MEaSUREs ESDR datasets assigned to it (energy/water cycle and atmospheric composition ESDRs). GES DISC will draw on its experience to integrate existing, proven, reusable data management components to accommodate the new ESDRs. Components include a data archive system (S4PA), a data discovery and access system (Mirador), and various web services for data access. In addition, if determined to be useful to the user community, the Giovanni data exploration tool will be made available for ESDRs. The GES DISC data integration methodology to be used for the MEaSUREs datasets is presented. The goals of this presentation are to share an approach to ESDR integration and to initiate discussions among data centers, data managers, and data providers aimed at gaining efficiencies in data management for MEaSUREs projects.

  15. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. Images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared with those obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW imagery. Subsequent development and evaluation of algorithms was done using video image sequences, which have better spatial and temporal resolution than PMMW images. Algorithms for reliable recognition of runways and accurate estimation of the spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  16. Successful treatment algorithm for evaluation of early pregnancy after in vitro fertilization.

    PubMed

    Cookingham, Lisa Marii; Goossen, Rachel P; Sparks, Amy E T; Van Voorhis, Bradley J; Duran, Eyup Hakan

    2015-10-01

    To evaluate a prospectively implemented clinical algorithm for early identification of ectopic pregnancy (EP) and heterotopic pregnancy (HP) after assisted reproductive technology (ART). Analysis of prospectively collected data. Academic medical center. All ART-conceived pregnancies between January 1995 and June 2013. Early pregnancy monitoring via a clinical algorithm in which all pregnancies were screened using human chorionic gonadotropin (hCG) levels and reported symptoms, with subsequent early ultrasound evaluation if hCG levels were abnormal or if the patient reported pain or vaginal bleeding. Algorithmic efficiency for the diagnosis of EP and HP and their subsequent clinical outcomes, with a binary forward stepwise logistic regression model built to determine predictors of early pregnancy failure. Of the 3,904 pregnancies included, the incidences of EP and HP were 0.77% and 0.46%, respectively. The algorithm selected 96.7% and 83.3% of pregnancies ultimately diagnosed with EP and HP, respectively, for early ultrasound evaluation, leading to earlier treatment and resolution. Logistic regression revealed that the first hCG level, second hCG level, hCG slope, age, pain, and vaginal bleeding were all independent predictors of early pregnancy failure after ART. Our clinical algorithm for early pregnancy evaluation after ART is effective for identification of and prompt intervention for EP and HP without significant overdiagnosis or misdiagnosis, and it avoids the potentially catastrophic morbidity associated with delayed diagnosis. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
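    The screening branch of such an algorithm — flag a pregnancy for early ultrasound when the hCG rise is abnormal or symptoms are reported — can be sketched as follows. The expected_rise threshold here is a hypothetical illustration, not the study's actual criterion:

```python
def flag_for_early_ultrasound(hcg1, hcg2, pain, bleeding,
                              expected_rise=1.66):
    """Return True if the pregnancy should get early ultrasound evaluation.

    hcg1, hcg2:    serial serum hCG measurements (mIU/mL)
    pain/bleeding: patient-reported symptoms
    expected_rise: minimum normal hCG ratio between draws (hypothetical
                   value; the study defines its own abnormality criteria)
    """
    abnormal_rise = hcg2 < hcg1 * expected_rise  # slower-than-expected rise
    return abnormal_rise or pain or bleeding
```

    Symptoms alone trigger evaluation even when the hCG trend looks normal, which is how a screen like this can catch heterotopic pregnancies with reassuring labs.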

  17. Direct endoscopic video registration for sinus surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel; Taylor, Russell H.; Ishii, Masaru; Hager, Gregory D.

    2009-02-01

    Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to preoperative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) [1] and the z-buffer algorithm [2]. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm uses only the polygons of the isosurface visible from the current camera location during each iteration, to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance we compare it to registration via Optotrak and present closest point-to-surface distance error. We show our algorithm has a mean closest-distance error of 0.2268 mm.
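    The trimming idea behind TrICP can be sketched in a simplified, translation-only form. The paper's algorithm additionally solves rotation and scale and restricts candidate matches to z-buffer-visible polygons; all of that is omitted in this sketch:

```python
def dist2(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def tricp_translation(cloud, surface, trim=0.7, iters=20):
    """Trimmed-ICP flavor that estimates only a 3D translation.

    Each iteration matches every cloud point to its closest surface
    point, keeps only the best `trim` fraction of matches (rejecting
    outliers, the key TrICP idea), and re-solves the least-squares
    translation over the kept pairs.
    """
    t = [0.0, 0.0, 0.0]
    for _ in range(iters):
        pairs = []
        for p in cloud:
            moved = [p[i] + t[i] for i in range(3)]
            q = min(surface, key=lambda s: dist2(moved, s))
            pairs.append((dist2(moved, q), p, q))
        pairs.sort(key=lambda r: r[0])
        kept = pairs[:max(1, int(trim * len(pairs)))]  # drop worst matches
        # Least-squares translation for the kept correspondences.
        for i in range(3):
            t[i] = sum(q[i] - p[i] for _, p, q in kept) / len(kept)
    return t
```

    Restricting the surface to visible polygons, as the paper does, shrinks the closest-point search the same way trimming shrinks the residual set.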

  18. Static vs. dynamic decoding algorithms in a non-invasive body-machine interface

    PubMed Central

    Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B.; Abdollahi, Farnaz; Pedersen, Jessica; Mussa-Ivaldi, Ferdinando A.

    2017-01-01

    In this study, we consider a non-invasive body-machine interface (BMI) that captures body motions still available to people with spinal cord injury (SCI) and maps them into a set of signals for controlling a computer user interface while engaging a sustained level of mobility and exercise. We compare the effectiveness of two decoding algorithms that transform a high-dimensional body-signal vector into a lower-dimensional control vector in 6 subjects with high-level SCI and 8 controls. One algorithm is based on a static map from the current body signals to the current value of the control vector, set through principal component analysis (PCA); the other is based on a dynamic map from a segment of body signals to the value and temporal derivatives of the control vector, set through a Kalman filter. SCI and control participants performed straighter and smoother cursor movements with the Kalman algorithm during center-out reaching, but their movements were faster and more precise when using PCA. All participants were able to use the BMI's continuous, two-dimensional control to type on a virtual keyboard and play pong, and performance with both algorithms was comparable. However, seven of the eight control participants preferred PCA as their method of virtual wheelchair control. The unsupervised PCA algorithm was easier to train and seemed sufficient to achieve a higher degree of learnability and perceived ease of use. PMID:28092564
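    The static PCA decoder is conceptually a fixed linear map applied to each incoming body-signal sample. A minimal sketch, assuming the projection weights and signal mean have already been obtained from PCA on calibration data (both hypothetical inputs here):

```python
def static_decoder(weights, mean):
    """Build a static decoder: control = W (signal - mean).

    weights: rows of the projection matrix W (e.g. the top principal
             components from a calibration session)
    mean:    per-channel mean of the calibration body signals
    """
    def decode(signal):
        centered = [s - m for s, m in zip(signal, mean)]
        return [sum(w[i] * centered[i] for i in range(len(centered)))
                for w in weights]
    return decode
```

    Because the map depends only on the current sample, it has no temporal smoothing; the Kalman decoder in the study adds exactly that, at the cost of a more involved calibration.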

  19. A fast and accurate online sequential learning algorithm for feedforward networks.

    PubMed

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the ideas of the extreme learning machine (ELM) of Huang et al., developed for batch learning, which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification, and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
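    The description above maps directly onto a recursive least-squares update of the output weights. A minimal NumPy sketch for additive sigmoid nodes, simplifying details of the published method (no regularization, and the initial batch must contain at least as many samples as hidden nodes):

```python
import numpy as np

class OSELM:
    """Minimal OS-ELM sketch for additive sigmoid hidden nodes.

    Hidden-layer parameters are fixed random draws; only the output
    weights beta are updated as data chunks arrive.
    """

    def __init__(self, n_inputs, n_hidden, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.W = rng.uniform(-1, 1, (n_hidden, n_inputs))  # input weights
        self.b = rng.uniform(-1, 1, n_hidden)              # hidden biases
        self.beta = None                                   # output weights
        self.P = None                                      # RLS covariance

    def _h(self, X):
        # Hidden-layer output matrix H for a batch of inputs.
        return 1.0 / (1.0 + np.exp(-(X @ self.W.T + self.b)))

    def init_batch(self, X0, T0):
        # Initialization phase: ordinary least squares on the first chunk.
        H = self._h(X0)                       # needs >= n_hidden samples
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ T0

    def update(self, X, T):
        # Sequential phase: recursive least-squares update per chunk.
        H = self._h(X)
        PHt = self.P @ H.T
        K = np.linalg.inv(np.eye(len(X)) + H @ PHt)
        self.P = self.P - PHt @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```

    After the sequential updates, beta equals the batch least-squares solution over all data seen so far, which is why OS-ELM matches batch ELM accuracy while never revisiting old chunks.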

  20. Using electronic medical records to increase the efficiency of catheter-associated urinary tract infection surveillance for National Health and Safety Network reporting.

    PubMed

    Shepard, John; Hadhazy, Eric; Frederick, John; Nicol, Spencer; Gade, Padmaja; Cardon, Andrew; Wilson, Jorge; Vetteth, Yohan; Madison, Sasha

    2014-03-01

    Streamlining health care-associated infection surveillance is essential for health care facilities owing to continuing increases in reporting requirements. Stanford Hospital, a 583-bed adult tertiary care center, used its electronic medical record (EMR) to develop an electronic algorithm to reduce the time required to conduct catheter-associated urinary tract infection (CAUTI) surveillance in adults. The algorithm applies inclusion and exclusion criteria, using the National Healthcare Safety Network definitions, to identify patients with a CAUTI. The algorithm was validated by trained infection preventionists through complete chart review of a random sample of cultures collected during the study period, September 1, 2012, to February 28, 2013. During the study period, a total of 6,379 positive urine cultures were identified. The Stanford Hospital electronic CAUTI algorithm identified 6,101 of these positive cultures (95.64%) as not a CAUTI, 191 (2.99%) as a possible CAUTI requiring further validation, and 87 (1.36%) as a definite CAUTI. Overall, use of the algorithm reduced CAUTI surveillance requirements at Stanford Hospital by 97.01%. The electronic algorithm proved effective in increasing the efficiency of CAUTI surveillance. The data suggest that CAUTI surveillance using the National Healthcare Safety Network definitions can be fully automated. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. All rights reserved.
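    The three-way triage the abstract reports (not a CAUTI / possible, needs review / definite) can be sketched as a rule filter over EMR fields. The field names and thresholds below are hypothetical illustrations of NHSN-style criteria, not Stanford's actual logic:

```python
def classify_culture(culture):
    """Triage a positive urine culture for CAUTI surveillance.

    `culture` is a dict of EMR-derived fields; names and thresholds
    here are illustrative, not the hospital's actual rule set.
    """
    if not culture["catheter_in_place_over_2_days"]:
        return "not a CAUTI"              # exclusion: no qualifying catheter
    if culture["colony_count_cfu_ml"] < 1e5:
        return "not a CAUTI"              # exclusion: below colony threshold
    if not culture["symptoms_documented"]:
        return "possible CAUTI - review"  # inconclusive: chart review needed
    if culture["no_other_infection_source"]:
        return "definite CAUTI"
    return "possible CAUTI - review"
```

    The efficiency gain comes from the first two exclusion rules, which in the study ruled out about 96% of positive cultures without any manual chart review.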
