Sample records for incomplete factorization algorithms

  1. Task Parallel Incomplete Cholesky Factorization using 2D Partitioned-Block Layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Rajamanickam, Sivasankaran; Stelle, George Widgery

    We introduce a task-parallel algorithm for sparse incomplete Cholesky factorization that utilizes a 2D sparse partitioned-block layout of a matrix. Our factorization algorithm follows the idea of algorithms-by-blocks by using the block layout. The algorithm-by-blocks approach induces a task graph for the factorization. These tasks are interrelated through their data dependences in the factorization algorithm. To process the tasks on various manycore architectures in a portable manner, we also present a portable tasking API that incorporates different tasking backends and device-specific features using an open-source framework for manycore platforms, i.e., Kokkos. A performance evaluation is presented on both Intel Sandy Bridge and Xeon Phi platforms for matrices from the University of Florida sparse matrix collection to illustrate the merits of the proposed task-based factorization. Experimental results demonstrate that our task-parallel implementation delivers about a 26.6x speedup (geometric mean) over single-threaded incomplete Cholesky-by-blocks and a 19.2x speedup over serial Cholesky performance, which carries no tasking overhead, using 56 threads on the Intel Xeon Phi processor for sparse matrices arising from various application problems.
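
    As a concrete reference for the no-fill kernel, here is a minimal dense IC(0) sketch in Python. It is not the blocked, task-parallel algorithm of the record above; the function name ic0, the dense NumPy storage, and the tridiagonal test matrix are illustrative assumptions.

    ```python
    import numpy as np

    def ic0(A):
        """No-fill incomplete Cholesky: L keeps only the sparsity pattern of tril(A)."""
        n = A.shape[0]
        pattern = np.tril(A) != 0.0
        L = np.tril(A).astype(float)
        for k in range(n):
            L[k, k] = np.sqrt(L[k, k])
            for i in range(k + 1, n):
                if pattern[i, k]:
                    L[i, k] /= L[k, k]
            for j in range(k + 1, n):
                for i in range(j, n):
                    if pattern[i, j] and pattern[i, k] and pattern[j, k]:
                        L[i, j] -= L[i, k] * L[j, k]
        return L

    # A tridiagonal SPD matrix produces no fill, so IC(0) coincides with the exact factor here.
    A = np.diag(np.full(6, 2.0)) + np.diag(np.full(5, -1.0), -1) + np.diag(np.full(5, -1.0), 1)
    L = ic0(A)
    print(np.linalg.norm(A - L @ L.T))
    ```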

  2. An incomplete assembly with thresholding algorithm for systems of reaction-diffusion equations in three space dimensions: IAT for reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2003-07-01

    Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied.
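
    The row-wise dropping rule described above (a drop tolerance plus a cap on stored nonzeros per row, as in ILUT) can be sketched as follows. The function name threshold_rows, the SciPy CSR storage, and the scaling of the tolerance by the row 2-norm are assumptions for illustration, not the paper's exact rule.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    def threshold_rows(A, p, tol):
        """Per row: drop entries below tol * ||row||_2, then keep at most the p largest magnitudes."""
        A = csr_matrix(A)
        rows, cols, vals = [], [], []
        for i in range(A.shape[0]):
            start, end = A.indptr[i], A.indptr[i + 1]
            idx, v = A.indices[start:end], A.data[start:end]
            keep = np.abs(v) >= tol * np.linalg.norm(v)
            idx, v = idx[keep], v[keep]
            if v.size > p:                               # retain only the p largest magnitudes
                top = np.argsort(np.abs(v))[-p:]
                idx, v = idx[top], v[top]
            rows.extend([i] * v.size); cols.extend(idx); vals.extend(v)
        return csr_matrix((vals, (rows, cols)), shape=A.shape)

    # Hypothetical usage: sparsify a dense random matrix to at most 3 stored entries per row.
    rng = np.random.default_rng(0)
    print(threshold_rows(rng.random((8, 8)), p=3, tol=0.1).nnz)
    ```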

  3. Incomplete Sparse Approximate Inverses for Parallel Preconditioning

    DOE PAGES

    Anzt, Hartwig; Huckle, Thomas K.; Bräckle, Jürgen; ...

    2017-10-28

    In this study, we propose a new preconditioning method that can be seen as a generalization of block-Jacobi methods, or as a simplification of the sparse approximate inverse (SAI) preconditioners. The “Incomplete Sparse Approximate Inverses” (ISAI) are particularly efficient in the solution of sparse triangular linear systems of equations, which arise, for example, in the context of incomplete factorization preconditioning. ISAI preconditioners can be generated via an algorithm providing fine-grained parallelism, which makes them attractive for hardware with a high concurrency level. Finally, in a study covering a large number of matrices, we identify the ISAI preconditioner as an attractive alternative to exact triangular solves in the context of incomplete factorization preconditioning.
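
    As a serial illustration of the ISAI idea for a lower-triangular factor, the sketch below builds an approximate inverse M with the sparsity pattern of L itself, solving one small dense system per row; the function name isai_lower and the pattern choice are assumptions, and the published algorithm computes these rows in a fine-grained parallel fashion.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    def isai_lower(L):
        """ISAI-style approximate inverse of lower-triangular L, restricted to the pattern of L:
        row i of M satisfies (M L)[i, S_i] = e_i[S_i], where S_i is the nonzero set of row i."""
        L = csr_matrix(L)
        rows, cols, vals = [], [], []
        for i in range(L.shape[0]):
            S = L.indices[L.indptr[i]:L.indptr[i + 1]]   # allowed nonzero columns in row i of M
            Lss = L[S][:, S].toarray()                   # small dense subsystem
            e = (S == i).astype(float)                   # i-th unit vector restricted to S
            x = np.linalg.solve(Lss.T, e)
            rows.extend([i] * S.size); cols.extend(S); vals.extend(x)
        return csr_matrix((vals, (rows, cols)), shape=L.shape)

    # Hypothetical usage on a small lower-triangular factor.
    L = csr_matrix(np.array([[2.0, 0, 0, 0],
                             [-1, 2, 0, 0],
                             [0, -1, 2, 0],
                             [0.5, 0, -1, 2]]))
    print(np.round((isai_lower(L) @ L).toarray(), 3))    # identity on the pattern of M
    ```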

  4. Phase Diversity and Polarization Augmented Techniques for Active Imaging

    DTIC Science & Technology

    2007-03-01

    [No abstract text in this record; the excerpt consists of index entries from the report, e.g., Cholesky factorization, the EM algorithm with complete and incomplete data, coherent and incoherent image models, and atmospheric turbulence parameters.]

  5. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Clift, Simon S.; Tang, Wei-Pai

    1994-01-01

    We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM (reverse Cuthill-McKee) is shown to generally improve the quality of incomplete factorization preconditioners.
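
    For reference, plain RCM reordering (not the paper's weighted-graph variant) is available in SciPy. The sketch below scrambles a model 2D Laplacian and shows RCM recovering a small bandwidth; the test matrix and the bandwidth metric are illustrative choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # Hypothetical 2D 5-point Laplacian standing in for a discretized PDE operator.
    n = 20
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()

    rng = np.random.default_rng(0)
    p = rng.permutation(A.shape[0])
    A_scrambled = A[p][:, p]                             # destroy the natural ordering

    perm = reverse_cuthill_mckee(A_scrambled, symmetric_mode=True)
    A_rcm = A_scrambled[perm][:, perm]

    def bandwidth(M):
        i, j = M.nonzero()
        return int(np.abs(i - j).max())

    print("scrambled:", bandwidth(A_scrambled), "after RCM:", bandwidth(A_rcm))
    ```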

  6. General relaxation schemes in multigrid algorithms for higher order singularity methods

    NASA Technical Reports Server (NTRS)

    Oskam, B.; Fray, J. M. J.

    1981-01-01

    Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with similar results for equations of the second kind, are a novel item. Application of the MD algorithm shows convergence to the level of truncation error of a second-order accurate panel method.

  7. Development and validation of an algorithm to complete colonoscopy using standard endoscopes in patients with prior incomplete colonoscopy

    PubMed Central

    Rogers, Melinda C.; Gawron, Andrew; Grande, David; Keswani, Rajesh N.

    2017-01-01

    Background and study aims: Incomplete colonoscopy may occur as a result of colon angulation (adhesions or diverticulosis), endoscope looping, or both. Specialty endoscopes/devices have been shown to successfully complete prior incomplete colonoscopies, but may not be widely available. Radiographic or other image-based evaluations have been shown to be effective but may miss small or flat lesions, and colonoscopy is often still indicated if a large lesion is identified. The purpose of this study was to develop and validate an algorithm to determine the optimum endoscope to ensure completion of the examination in patients with prior incomplete colonoscopy. Patients and methods: This was a prospective cohort study of 175 patients with prior incomplete colonoscopy who were referred to a single endoscopist at a single academic medical center over a 3-year period from 2012 through 2015. Colonoscopy outcomes from the initial 50 patients were used to develop an algorithm to determine the optimal standard endoscope and technique to achieve cecal intubation. The algorithm was validated on the subsequent 125 patients. Results: The overall repeat colonoscopy success rate using a standard endoscope was 94%. The initial standard endoscope specified by the algorithm was used and completed the colonoscopy in 90% of patients. Conclusions: This study identifies an effective strategy for completing colonoscopy in patients with prior incomplete examination, using widely available standard endoscopes and an algorithm based on patient characteristics and reasons for prior incomplete colonoscopy. PMID:28924595

  8. Unsymmetric ordering using a constrained Markowitz scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amestoy, Patrick R.; Li, Xiaoye S.; Pralet, Stephane

    2005-01-18

    We present a family of ordering algorithms that can be used as a preprocessing step prior to performing sparse LU factorization. The ordering algorithms simultaneously achieve the objectives of selecting numerically good pivots and preserving the sparsity. We describe the algorithmic properties and challenges in their implementation. By mixing the two objectives we show that we can reduce the amount of fill-in in the factors and reduce the number of numerical problems during factorization. On a set of large unsymmetric real problems, we obtained median reductions of 12% in the factorization time, of 13% in the size of the LU factors, of 20% in the number of operations performed during the factorization phase, and of 11% in the memory needed by the multifrontal solver MA41-UNS. A byproduct of this ordering strategy is an incomplete LU-factored matrix that can be used as a preconditioner in an iterative solver.

  9. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

    Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
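
    The envelope and mean-square-wavefront metrics targeted by these orderings can be computed directly from the matrix pattern; below is a minimal sketch using standard row-oriented definitions (the function name and the 2D Laplacian test matrix are assumptions).

    ```python
    import numpy as np
    import scipy.sparse as sp

    def envelope_and_msw(A):
        """Envelope size and mean-square wavefront of a symmetric sparse matrix
        under its current ordering (row-oriented definitions)."""
        A = sp.csr_matrix(A)
        n = A.shape[0]
        first = np.empty(n, dtype=int)
        for i in range(n):
            cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
            first[i] = min(int(cols.min()), i) if cols.size else i
        envelope = int(np.sum(np.arange(n) - first))
        # wavefront at step i: rows j >= i whose first nonzero column is already <= i
        wave = np.array([np.sum(first[i:] <= i) for i in range(n)], dtype=float)
        return envelope, float(np.mean(wave ** 2))

    # Hypothetical usage on a small 2D Laplacian in its natural ordering.
    n = 10
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))
    print(envelope_and_msw(A))
    ```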

  10. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account the uncertainty of the latent factors or of the missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large amount of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.

  11. Radiation detector device for rejecting and excluding incomplete charge collection events

    DOEpatents

    Bolotnikov, Aleksey E.; De Geronimo, Gianluigi; Vernon, Emerson; Yang, Ge; Camarda, Giuseppe; Cui, Yonggang; Hossain, Anwar; Kim, Ki Hyun; James, Ralph B.

    2016-05-10

    A radiation detector device is provided that is capable of distinguishing between full charge collection (FCC) events and incomplete charge collection (ICC) events based upon a correlation value comparison algorithm that compares correlation values calculated for individually sensed radiation detection events with a calibrated FCC event correlation function. The calibrated FCC event correlation function serves as a reference curve utilized by a correlation value comparison algorithm to determine whether a sensed radiation detection event fits the profile of the FCC event correlation function within the noise tolerances of the radiation detector device. If the radiation detection event is determined to be an ICC event, then the spectrum for the ICC event is rejected and excluded from inclusion in the radiation detector device spectral analyses. The radiation detector device also can calculate a performance factor to determine the efficacy of distinguishing between FCC and ICC events.

  12. N-terminal pro-B-type natriuretic peptide diagnostic algorithm versus American Heart Association algorithm for Kawasaki disease.

    PubMed

    Dionne, Audrey; Meloche-Dumas, Léamarie; Desjardins, Laurent; Turgeon, Jean; Saint-Cyr, Claire; Autmizguine, Julie; Spigelblatt, Linda; Fournier, Anne; Dahdah, Nagib

    2017-03-01

    Diagnosis of Kawasaki disease (KD) can be challenging in the absence of a confirmatory test or pathognomonic finding, especially when clinical criteria are incomplete. We recently proposed serum N-terminal pro-B-type natriuretic peptide (NT-proBNP) as an adjunctive diagnostic test. We retrospectively tested a new algorithm to help KD diagnosis based on NT-proBNP, coronary artery dilation (CAD) at onset, and abnormal serum albumin or C-reactive protein (CRP). The goal was to assess the performance of the algorithm and compare its performance with that of the 2004 American Heart Association (AHA)/American Academy of Pediatrics (AAP) algorithm. The algorithm was tested on 124 KD patients with NT-proBNP measured on admission at the present institutions between 2007 and 2013. Age at diagnosis was 3.4 ± 3.0 years, with a median of five diagnostic criteria, and 55 of the 124 patients (44%) had incomplete KD. Coronary artery complications occurred in 64 patients (52%), with aneurysm in 14 (11%). Using this algorithm, 120/124 (97%) were to be treated, based on high NT-proBNP alone for 79 (64%), on onset CAD for 14 (11%), and on high CRP or low albumin for 27 (22%). Using the AHA/AAP algorithm, 22/47 (47%) of the eligible patients with incomplete KD would not have been referred for treatment, compared with 3/55 (5%) with the NT-proBNP algorithm (P < 0.001). This NT-proBNP-based algorithm is efficient for identifying and treating patients with KD, including those with incomplete KD. This study paves the way for a prospective validation trial of the algorithm. © 2016 Japan Pediatric Society.

  13. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students related to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  14. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study aims to address and test a new improved algorithm applied to incomplete projection data to generate a high-quality reconstruction image by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher contrast for boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress the noise and to improve the gray distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data: the SNRs and AGs of the images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of the images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust for limited-view reconstruction, making the image edges clear and improving the gray distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.

  15. Preconditioned conjugate gradient methods for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.

    1990-01-01

    The compressible Navier-Stokes equations are solved for a variety of two-dimensional inviscid and viscous problems by preconditioned conjugate gradient-like algorithms. Roe's flux difference splitting technique is used to discretize the inviscid fluxes. The viscous terms are discretized by using central differences. An algebraic turbulence model is also incorporated. The system of linear equations which arises out of the linearization of a fully implicit scheme is solved iteratively by the well-known methods of GMRES (Generalized Minimum Residual technique) and Chebyshev iteration. Incomplete LU factorization and block diagonal factorization are used as preconditioners. The resulting algorithm is competitive with the best current schemes, but has wide applications in parallel computing and unstructured mesh computations.
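
    The ILU-preconditioned GMRES combination described above is straightforward to reproduce in SciPy; the minimal sketch below uses a convection-diffusion model matrix as a stand-in for the linearized implicit operator, and the drop tolerance and fill factor are illustrative assumptions.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical nonsymmetric convection-diffusion matrix.
    n = 40
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    C = 0.3 * sp.diags([-1, 1], [-1, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T + C) + sp.kron(T, sp.identity(n))).tocsc()
    b = np.ones(A.shape[0])

    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factorization
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner applies the ILU solve
    x, info = spla.gmres(A, b, M=M, restart=30)
    print(info, np.linalg.norm(A @ x - b))               # info == 0 signals convergence
    ```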

  16. Inferring duplications, losses, transfers and incomplete lineage sorting with nonbinary species trees.

    PubMed

    Stolzer, Maureen; Lai, Han; Xu, Minli; Sathaye, Deepa; Vernot, Benjamin; Durand, Dannie

    2012-09-15

    Gene duplication (D), transfer (T), loss (L) and incomplete lineage sorting (I) are crucial to the evolution of gene families and the emergence of novel functions. The history of these events can be inferred via comparison of gene and species trees, a process called reconciliation, yet current reconciliation algorithms model only a subset of these evolutionary processes. We present an algorithm to reconcile a binary gene tree with a nonbinary species tree under a DTLI parsimony criterion. This is the first reconciliation algorithm to capture all four evolutionary processes driving tree incongruence and the first to reconcile non-binary species trees with a transfer model. Our algorithm infers all optimal solutions and reports complete, temporally feasible event histories, giving the gene and species lineages in which each event occurred. It is fixed-parameter tractable, with polytime complexity when the maximum species outdegree is fixed. Application of our algorithms to prokaryotic and eukaryotic data shows that use of an incomplete event model has a substantial impact on the events inferred and the resulting biological conclusions. Our algorithms have been implemented in Notung, a freely available phylogenetic reconciliation software package, available at http://www.cs.cmu.edu/~durand/Notung. mstolzer@andrew.cmu.edu.

  17. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in incomplete factorization preconditionings is the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
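
    Polynomial preconditioning, mentioned above as an alternative that avoids triangular solves, can be sketched as a truncated Neumann series built from a Jacobi splitting; the function name, the number of terms, and the Laplacian test problem are assumptions for illustration.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def neumann_preconditioner(A, m=4):
        """Polynomial preconditioner M^{-1} ≈ sum_{k<m} (I - D^{-1}A)^k D^{-1},
        applied with matrix-vector products only (no triangular solves)."""
        Dinv = sp.diags(1.0 / A.diagonal())
        def apply(r):
            z = Dinv @ r
            v = z.copy()
            for _ in range(m - 1):
                v = z + v - Dinv @ (A @ v)               # Horner step: v <- z + (I - D^{-1}A) v
            return v
        return spla.LinearOperator(A.shape, matvec=apply)

    # Hypothetical usage with conjugate gradients on a 2D Laplacian.
    n = 40
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
    b = np.ones(A.shape[0])
    x, info = spla.cg(A, b, M=neumann_preconditioner(A, m=4))
    print(info, np.linalg.norm(A @ x - b))
    ```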

  18. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

    Capabilities and complexity of manufacturing systems are increasing as they strive toward an integrated manufacturing environment. Availability of alternative process plans is a key factor for integration of design, process planning and scheduling. This paper describes an algorithm for generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search algorithm starts. This algorithm is applicable to large process plan networks and can also search wide areas of the network based on user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.

  19. Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoke; Sun, Penggang; Wang, Yu

    2018-04-01

    Many networks derived from society and nature are temporal and incomplete. The temporal link prediction problem in networks is to predict links at time T + 1 based on a given temporal network from time 1 to T, which is essential to important applications. Current algorithms either predict the temporal links by collapsing the dynamic networks or by collapsing features derived from each network, and are criticized for ignoring the connections among slices. To overcome this issue, we propose a novel graph regularized nonnegative matrix factorization algorithm (GrNMF) for the temporal link prediction problem without collapsing the dynamic networks. To obtain the features for each network from 1 to t, GrNMF factorizes the matrix associated with each network while setting the remaining networks as regularization, which provides a better way to characterize the topological information of temporal links. Then, the GrNMF algorithm collapses the feature matrices to predict temporal links. Compared with state-of-the-art methods, the proposed algorithm exhibits significantly improved accuracy by avoiding the collapse of temporal networks. Experimental results on a number of artificial and real temporal networks illustrate that the proposed method is not only more accurate but also more robust than state-of-the-art approaches.
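
    The graph-regularized factorization at the core of GrNMF can be illustrated with the classical multiplicative updates for graph-regularized NMF (in the style of Cai et al.); the sketch below is not the paper's temporal formulation, and the function gnmf, the regularization weight, and the chain-graph example are assumptions.

    ```python
    import numpy as np

    def gnmf(X, W, k, lam=0.1, n_iter=200, eps=1e-9):
        """Graph-regularized NMF: X ≈ U @ V.T with a Tr(V^T L V) penalty, L = D - W."""
        rng = np.random.default_rng(0)
        m, n = X.shape
        U, V = rng.random((m, k)), rng.random((n, k))
        D = np.diag(W.sum(axis=1))
        for _ in range(n_iter):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V

    # Hypothetical usage: a nonnegative matrix whose 12 columns lie on a chain graph.
    rng = np.random.default_rng(1)
    X = rng.random((30, 12))
    W = np.diag(np.ones(11), 1) + np.diag(np.ones(11), -1)
    U, V = gnmf(X, W, k=3, lam=0.5)
    print(np.linalg.norm(X - U @ V.T))
    ```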

  20. Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

    DTIC Science & Technology

    2015-07-01

    Authors listed in the report: Guy Van den Broeck, Karthika Mohan, Arthur Choi, Adnan ... [The remainder of the record excerpt consists of report front-matter boilerplate and reference-list fragments; no abstract text is available.]

  1. An evidence-based treatment algorithm for colorectal polyp cancers: results from the Scottish Screen-detected Polyp Cancer Study (SSPoCS).

    PubMed

    Richards, C H; Ventham, N T; Mansouri, D; Wilson, M; Ramsay, G; Mackay, C D; Parnaby, C N; Smith, D; On, J; Speake, D; McFarlane, G; Neo, Y N; Aitken, E; Forrest, C; Knight, K; McKay, A; Nair, H; Mulholland, C; Robertson, J H; Carey, F A; Steele, R J C

    2018-02-01

    Colorectal polyp cancers present clinicians with a treatment dilemma. Decisions regarding whether to offer segmental resection or endoscopic surveillance are often taken without reference to good quality evidence. The aim of this study was to develop a treatment algorithm for patients with screen-detected polyp cancers. This national cohort study included all patients with a polyp cancer identified through the Scottish Bowel Screening Programme between 2000 and 2012. Multivariate regression analysis was used to assess the impact of clinical, endoscopic and pathological variables on the rate of adverse events (residual tumour in patients undergoing segmental resection or cancer-related death or disease recurrence in any patient). These data were used to develop a clinically relevant treatment algorithm. 485 patients with polyp cancers were included. 186/485 (38%) underwent segmental resection and residual tumour was identified in 41/186 (22%). The only factor associated with an increased risk of residual tumour in the bowel wall was incomplete excision of the original polyp (OR 5.61, p=0.001), while only lymphovascular invasion was associated with an increased risk of lymph node metastases (OR 5.95, p=0.002). When patients undergoing segmental resection or endoscopic surveillance were considered together, the risk of adverse events was significantly higher in patients with incomplete excision (OR 10.23, p<0.001) or lymphovascular invasion (OR 2.65, p=0.023). A policy of surveillance is adequate for the majority of patients with screen-detected colorectal polyp cancers. Consideration of segmental resection should be reserved for those with incomplete excision or evidence of lymphovascular invasion. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) Game Engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout the game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.

  3. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.

  4. Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

    NASA Astrophysics Data System (ADS)

    Lee, Kyunghoon

    To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. 
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by a factor of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data across the entire data set. (Abstract shortened by UMI.)
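
    The gappy POD repair step discussed above reduces to a least-squares fit of the POD coefficients on the observed entries only; the following minimal sketch illustrates it, with the function name gappy_pod_fill, the synthetic rank-5 snapshot data, and the missingness pattern all being assumptions.

    ```python
    import numpy as np

    def gappy_pod_fill(snapshot, basis, mask):
        """Gappy POD repair of one snapshot: fit POD coefficients by least squares
        on the observed entries, then fill the gaps from the basis."""
        coeff, *_ = np.linalg.lstsq(basis[mask], snapshot[mask], rcond=None)
        repaired = snapshot.copy()
        repaired[~mask] = basis[~mask] @ coeff           # overwrite only the missing entries
        return repaired

    # Hypothetical usage: build a POD basis from complete snapshots, then repair a gappy one.
    rng = np.random.default_rng(0)
    snapshots = rng.random((200, 5)) @ rng.random((5, 40))   # 40 snapshots of dimension 200, rank 5
    basis = np.linalg.svd(snapshots, full_matrices=False)[0][:, :5]
    truth = snapshots[:, 0].copy()
    mask = rng.random(200) > 0.3                             # roughly 30% of entries missing
    gappy = truth.copy(); gappy[~mask] = np.nan
    print(np.linalg.norm(gappy_pod_fill(gappy, basis, mask) - truth))
    ```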

  5. Classification and data acquisition with incomplete data

    NASA Astrophysics Data System (ADS)

    Williams, David P.

    In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform ( e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data. We also address the closely related problem of active data acquisition, which develops a strategy to acquire missing features and labels that will most benefit the classification task. We first address the general problem of classification with incomplete data, maintaining the view that all data (i.e., information) is valuable. We employ a logistic regression framework within which we formulate a supervised classification algorithm for incomplete data. This principled, yet flexible, framework permits several interesting extensions that allow all available data to be utilized. One extension incorporates labeling error, which permits the usage of potentially imperfectly labeled data in learning a classifier. A second major extension converts the proposed algorithm to a semi-supervised approach by utilizing unlabeled data via graph-based regularization. Finally, the classification algorithm is extended to the case in which (image) data---from which features are extracted---are available from multiple resolutions. Taken together, this family of incomplete-data classification algorithms exploits all available data in a principled manner by avoiding explicit imputation. Instead, missing data is integrated out analytically with the aid of an estimated conditional density function (conditioned on the observed features). This feat is accomplished by invoking only mild assumptions. We also address the problem of active data acquisition by determining which missing data should be acquired to most improve performance. 
Specifically, we examine this data acquisition task when the data to be acquired can be either labels or features. The proposed approach is based on a criterion that accounts for the expected benefit of the acquisition. This approach, which is applicable for any general missing data problem, exploits the incomplete-data classification framework introduced in the first part of this dissertation. This data acquisition approach allows for the acquisition of both labels and features. Moreover, several types of feature acquisition are permitted, including the acquisition of individual or multiple features for individual or multiple data points, which may be either labeled or unlabeled. Furthermore, if different types of data acquisition are feasible for a given application, the algorithm will automatically determine the most beneficial type of data to acquire. Experimental results on both benchmark machine learning data sets and real (i.e., measured) remote-sensing data demonstrate the advantages of the proposed incomplete-data classification and active data acquisition algorithms.

  6. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets

    PubMed Central

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision making economists/engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm showing how some missing input information items (III) can be generated using mainly decision tree topologies and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (such as fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662

  7. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    PubMed

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision making economists/engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions which can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm showing how some missing input information items (III) can be generated using mainly decision tree topologies and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (such as fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

  8. Image-processing algorithms for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Incompletely closed glumes, germ and disease are three characteristics of hybrid rice seed. Image-processing algorithms developed to detect these seed characteristics were presented in this paper. The rice seed used for this study involved five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou and IIyou. The algorithms were implemented with a 5×600 image set, a 4×400 image set and another 5×600 image set, respectively. The image sets included black-background images, white-background images and both-side images of rice seed. Results show that the algorithm for inspecting seeds with incompletely closed glumes, based on the Radon transform, achieved an accuracy of 96% for normal seeds, 92% for seeds with fine fissures and 87% for seeds with unclosed glumes; the algorithm for inspecting germinated seeds on the panicle, based on PCA and an ANN, achieved an average accuracy of 98% for normal seeds and 88% for germinated seeds on the panicle; and the algorithm for inspecting diseased seeds based on color features achieved an accuracy of 92% for normal and healthy seeds, 95% for spot-diseased seeds and 83% for severely diseased seeds.

  9. The U.S. Geological Survey Modular Ground-Water Model - PCGN: A Preconditioned Conjugate Gradient Solver with Improved Nonlinear Control

    USGS Publications Warehouse

    Naff, Richard L.; Banta, Edward R.

    2008-01-01

    The preconditioned conjugate gradient with improved nonlinear control (PCGN) package provides additional means by which the solution of nonlinear ground-water flow problems can be controlled as compared to existing solver packages for MODFLOW. Picard iteration is used to solve nonlinear ground-water flow equations by iteratively solving a linear approximation of the nonlinear equations. The linear solution is provided by means of the preconditioned conjugate gradient algorithm where preconditioning is provided by the modified incomplete Cholesky algorithm. The incomplete Cholesky scheme incorporates two levels of fill, 0 and 1, in which the pivots can be modified so that the row sums of the preconditioning matrix and the original matrix are approximately equal. A relaxation factor is used to implement the modified pivots, which determines the degree of modification allowed. The effects of fill level and degree of pivot modification are briefly explored by means of a synthetic, heterogeneous finite-difference matrix; results are reported in the final section of this report. The preconditioned conjugate gradient method is coupled with Picard iteration so as to efficiently solve the nonlinear equations associated with many ground-water flow problems. The description of this coupling of the linear solver with Picard iteration is a primary concern of this document.
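
    A level-0 version of the modified incomplete Cholesky idea (dropped fill folded into the diagonal and scaled by a relaxation factor, in the spirit of the row-sum criterion above) can be sketched as follows. This is a dense illustrative sketch rather than the PCGN implementation; the function name mic0, the relaxation value, and the shifted-Laplacian test matrix are assumptions.

    ```python
    import numpy as np

    def mic0(A, relax=1.0):
        """Modified incomplete Cholesky with zero fill (dense sketch): updates falling outside
        the sparsity pattern are not stored but lumped onto the diagonal of the affected row,
        scaled by `relax` (relax=0 recovers plain IC(0)). Production codes use sparse storage
        and may need safeguards against non-positive pivots."""
        n = A.shape[0]
        pattern = np.tril(A) != 0.0
        L = np.tril(A).astype(float)
        for k in range(n):
            L[k, k] = np.sqrt(L[k, k])
            for i in range(k + 1, n):
                if pattern[i, k]:
                    L[i, k] /= L[k, k]
            for j in range(k + 1, n):
                for i in range(j, n):
                    update = L[i, k] * L[j, k]
                    if update == 0.0:
                        continue
                    if pattern[i, j]:
                        L[i, j] -= update
                    else:
                        L[i, i] -= relax * update        # lump the dropped fill onto the diagonal
        return L

    # Hypothetical usage on a diagonally shifted 2D Laplacian (the shift keeps pivots safely positive).
    n = 5
    T = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n)) + 2.0 * np.eye(n * n)
    L = mic0(A, relax=0.97)
    print(np.linalg.norm(A - L @ L.T))                   # nonzero: fill has been dropped or lumped
    ```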

  10. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.

  11. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S_N) or spherical-harmonics (P_N) solve to accelerate convergence of a high-order S_N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.

  12. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
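
    The final surface-fitting step described above can be reproduced with SciPy's thin-plate-spline radial basis interpolator; the sketch below fits a smoothing TPS surface z = f(x, y) through synthetic "fissure particle" points, where the point cloud, smoothing value, and evaluation grid are assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical scattered fissure-particle locations with noisy heights.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0.0, 100.0, size=(200, 2))
    z = 0.002 * (xy[:, 0] - 50.0) ** 2 + 0.05 * xy[:, 1] + rng.normal(0.0, 0.5, 200)

    tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)
    gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
    surface = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    print(surface.shape)                                 # interpolated boundary surface on the grid
    ```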

  13. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  14. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  15. Invariant protection of high-voltage electric motors of technological complexes at industrial enterprises at partial single-phase ground faults

    NASA Astrophysics Data System (ADS)

    Abramovich, B. N.; Sychev, Yu A.; Pelenev, D. N.

    2018-03-01

    Development results for the invariant protection of high-voltage motors against incomplete single-phase ground faults are presented in the article. It is established that existing current protections have low selectivity because of an inadmissible decrease in input signals when a short circuit occurs through a transient resistance at the fault location. A structural-functional scheme and an algorithm of protective actions are developed in which the zero-sequence current signals of the protected connections are automatically corrected according to the degree of incompleteness of the ground fault. It is revealed that automatic correction of the zero-sequence currents makes it possible to keep the sensitivity factor of the protection invariant under variations of the transient resistance at the fault location. Application of invariant protection makes it possible to minimize damage in 6-10 kV electrical installations of industrial enterprises caused by interruption of consumers' power supply and system breakdown, owing to timely localization of emergency ground-fault modes.

  16. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
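
    For reference, a minimal dense Soft-Impute sketch follows: missing entries are repeatedly replaced by values from a soft-thresholded SVD. The dense SVD, the fixed regularization value, and the synthetic low-rank test matrix are simplifying assumptions; the paper's contribution is making this iteration scale to very large matrices.

    ```python
    import numpy as np

    def soft_impute(X, mask, lam, n_iter=200):
        """Soft-Impute sketch: alternate between filling gaps from the current estimate
        and soft-thresholding the singular values of the completed matrix."""
        Z = np.where(mask, X, 0.0)
        for _ in range(n_iter):
            filled = np.where(mask, X, Z)                # keep observed entries, fill gaps from Z
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            Z = (U * np.maximum(s - lam, 0.0)) @ Vt      # soft-threshold the singular values
        return Z

    # Hypothetical usage: recover a low-rank matrix with about 40% of its entries missing.
    rng = np.random.default_rng(0)
    X_true = rng.random((60, 3)) @ rng.random((3, 40))
    mask = rng.random(X_true.shape) > 0.4
    X_obs = np.where(mask, X_true, np.nan)
    Z = soft_impute(X_obs, mask, lam=0.5)
    print(np.linalg.norm((Z - X_true)[~mask]) / np.linalg.norm(X_true[~mask]))   # error on held-out entries
    ```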

  17. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  18. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  19. Formulations and algorithms for problems on rock mass and support deformation during mining

    NASA Astrophysics Data System (ADS)

    Seryakov, VM

    2018-03-01

    The analysis of problem formulations used to calculate the stress-strain state of mine support and the surrounding rock mass in rock mechanics shows that such formulations incompletely describe the mechanical features of joint deformation in the rock mass–support system. The present paper proposes an algorithm that takes into account the actual conditions of rock mass and support interaction, together with an implementation method that ensures efficient calculation of stresses in the rocks and the support.

  20. An algorithm-based topographical biomaterials library to instruct cell fate

    PubMed Central

    Unadkat, Hemant V.; Hulsman, Marc; Cornelissen, Kamiel; Papenburg, Bernke J.; Truckenmüller, Roman K.; Carpenter, Anne E.; Wessling, Matthias; Post, Gerhard F.; Uetz, Marc; Reinders, Marcel J. T.; Stamatialis, Dimitrios; van Blitterswijk, Clemens A.; de Boer, Jan

    2011-01-01

    It is increasingly recognized that material surface topography is able to evoke specific cellular responses, endowing materials with instructive properties that were formerly reserved for growth factors. This opens the window to improve upon, in a cost-effective manner, biological performance of any surface used in the human body. Unfortunately, the interplay between surface topographies and cell behavior is complex and still incompletely understood. Rational approaches to search for bioactive surfaces will therefore omit previously unperceived interactions. Hence, in the present study, we use mathematical algorithms to design nonbiased, random surface features and produce chips of poly(lactic acid) with 2,176 different topographies. With human mesenchymal stromal cells (hMSCs) grown on the chips and using high-content imaging, we reveal unique, formerly unknown, surface topographies that are able to induce MSC proliferation or osteogenic differentiation. Moreover, we correlate parameters of the mathematical algorithms to cellular responses, which yield novel design criteria for these particular parameters. In conclusion, we demonstrate that randomized libraries of surface topographies can be broadly applied to unravel the interplay between cells and surface topography and to find improved material surfaces. PMID:21949368

  1. Interferometric tomography of continuous fields with incomplete projections

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Sun, Hogwei

    1988-01-01

    Interferometric tomography in the presence of an opaque object is investigated. The developed iterative algorithm does not need to augment the missing information. It is based on the successive reconstruction of the difference field, the difference between the object field to be reconstructed and its estimate, only in the defined region. The application of the algorithm results in stable convergence.

  2. A Scalable Distribution Network Risk Evaluation Framework via Symbolic Dynamics

    PubMed Central

    Yuan, Kai; Liu, Jian; Liu, Kaipei; Tan, Tianyuan

    2015-01-01

    Background Evaluations of electric power distribution network risks must address the problems of incomplete information and changing dynamics. A risk evaluation framework should be adaptable to a specific situation and an evolving understanding of risk. Methods This study investigates the use of symbolic dynamics to abstract raw data. After introducing symbolic dynamics operators, Kolmogorov-Sinai entropy and Kullback-Leibler relative entropy are used to quantitatively evaluate relationships between risk sub-factors and main factors. For layered risk indicators, where the factors are categorized into four main factors – device, structure, load and special operation – a merging algorithm using operators to calculate the risk factors is discussed. Finally, an example from the Sanya Power Company is given to demonstrate the feasibility of the proposed method. Conclusion Distribution networks are exposed and can be affected by many things. The topology and the operating mode of a distribution network are dynamic, so the faults and their consequences are probabilistic. PMID:25789859

  3. Fast animation of lightning using an adaptive mesh.

    PubMed

    Kim, Theodore; Lin, Ming C

    2007-01-01

    We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
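
    The sketch below illustrates the kind of incomplete-factorization-preconditioned conjugate-gradient Poisson solve this abstract refers to. It is a hedged stand-in: SciPy ships an incomplete LU routine (spilu) rather than an incomplete Cholesky factorization, the grid is a plain uniform 5-point Laplacian rather than the paper's octree discretization, and Eisenstat's trick is not shown.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical 2D Poisson setup standing in for the lightning potential solve.
    n = 64                                   # n x n interior grid points
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # standard 5-point Laplacian
    b = np.ones(A.shape[0])

    # Incomplete factorization preconditioner. SciPy provides ILU rather than
    # incomplete Cholesky, so spilu is used here purely as a stand-in.
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, ilu.solve)

    x, info = spla.cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned info={info}",
          "residual:", np.linalg.norm(b - A @ x))
    ```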

  4. Bell's palsy. A prospective, longitudinal, descriptive, and observational analysis of prognosis factors for recovery in Mexican patients.

    PubMed

    Sánchez-Chapul, Laura; Reyes-Cadena, Susana; Andrade-Cabrera, José Luis; Carrillo-Soto, Irma A; León-Hernández, Saúl R; Paniagua-Pérez, Rogelio; Olivera-Díaz, Hiram; Baños-Mendoza, Teresa; Flores-Mondragón, Gabriela; Hernández-Campos, Norma A

    2011-01-01

    To determine the prognostic factors in Mexican patients with Bell's palsy. We designed a prospective, longitudinal, descriptive, and observational analysis. Two hundred and fifty-one patients diagnosed with Bell's palsy at the National Institute of Rehabilitation were included. We studied the sociodemographic characteristics, seasonal occurrence, sidedness, symptoms, and therapeutic options to determine the prognostic factors for their recovery. Thirty-nine percent of patients had a complete recovery and 41.5% had an incomplete recovery. Marital status, gender, etiology, symptoms, sidedness, House-Brackmann grade, and treatments did not represent significant prognostic factors for recovery. Age > 40 years (OR = 2.4, 95% CI 1.3-4.3, p = 0.002) and lack of physical therapy (OR = 6.4, 95% CI 1.4-29.6, p = 0.006) were significant prognostic factors for incomplete recovery. Familial palsy proved to be a protective prognostic factor against an incomplete recovery (OR = 0.54, 95% CI 0.28-1.01, p = 0.039). This protective factor was only significant in female patients (OR = 0.41, p = 0.22) but not in male patients (OR = 1.0, p = 0.61). The proportion of cases with incomplete recovery was high. Age > 40 years and lack of physical therapy were the only significant prognostic factors for an incomplete recovery.

  5. Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data

    NASA Astrophysics Data System (ADS)

    Luo, Zhen

    In the attempt to estimate the temperature history of the earth using surface observations, various biases can exist. An important source of bias is the incompleteness of sampling over both time and space. There have been a few methods proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they have ignored some other significant biases. In this dissertation, a smoothing spline ANOVA approach, which is a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. Besides that, an advantage of this method is that we can get various components of the estimated temperature history with a limited amount of information stored. This method can also be used for detecting erroneous observations in the data base. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and the techniques to speed up the convergence of the backfitting algorithm such as collapsing and successive over-relaxation.
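
    The following toy sketch shows the backfitting (Gauss-Seidel) idea for an additive model. A crude running-mean smoother stands in for the smoothing-spline ANOVA components, and the data are synthetic; this is not the dissertation's tensor-product implementation.

    ```python
    import numpy as np

    def running_mean_smoother(x, r, window=15):
        """Crude local-average smoother used as a stand-in for a smoothing spline."""
        order = np.argsort(x)
        kernel = np.ones(window) / window
        smoothed = np.convolve(r[order], kernel, mode="same")
        out = np.empty_like(r)
        out[order] = smoothed
        return out

    def backfit(y, xs, n_sweeps=20):
        """Gauss-Seidel backfitting for an additive model y ~ sum_j f_j(x_j)."""
        n, p = len(y), len(xs)
        alpha = y.mean()
        f = [np.zeros(n) for _ in range(p)]
        for _ in range(n_sweeps):
            for j in range(p):
                partial = y - alpha - sum(f[k] for k in range(p) if k != j)
                f[j] = running_mean_smoother(xs[j], partial)
                f[j] -= f[j].mean()          # keep each component centred
        return alpha, f

    rng = np.random.default_rng(1)
    x1, x2 = rng.uniform(-2, 2, 500), rng.uniform(-2, 2, 500)
    y = np.sin(x1) + 0.5 * x2**2 + rng.normal(scale=0.2, size=500)
    alpha, (f1, f2) = backfit(y, [x1, x2])
    ```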

  6. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    PubMed

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

    Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms based on healthy individuals to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (Healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from the gold standard measures. For verifying validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location, activity characteristics and the population-specific condition influence the validity of estimation of physical activity metrics using standard proprietary algorithms. Implementing population-specific customized algorithms accounting for the influences of sensor location, type and activity characteristics for estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.

  7. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
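
    A toy illustration of resolving an underdetermined flow system by pulling the solution toward target values is sketched below. The incidence matrix, target flows, and penalty weight are made up for illustration; the actual method weights deviations of pressures and wall shear stresses rather than the flows themselves.

    ```python
    import numpy as np

    # Toy network: 4 nodes, 5 segments; A encodes flow conservation at interior
    # nodes (rows) over segment flows (columns), with +1/-1 for segments entering
    # or leaving a node. All values below are purely illustrative.
    A = np.array([[ 1, -1, -1,  0,  0],
                  [ 0,  1,  0, -1,  0],
                  [ 0,  0,  1,  0, -1]], dtype=float)
    b = np.zeros(3)                                   # conservation: inflow equals outflow
    q_target = np.array([2.0, 1.2, 0.9, 1.0, 1.0])    # hypothetical target flows

    # Minimise ||q - q_target||^2 while (softly) enforcing A q = b by stacking the
    # conservation equations with a large weight and solving one least-squares problem.
    w = 1e6
    K = np.vstack([w * A, np.eye(5)])
    rhs = np.concatenate([w * b, q_target])
    q, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    print(q, "conservation residual:", A @ q)
    ```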

  8. Link prediction boosted psychiatry disorder classification for functional connectivity network

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    The functional connectivity network (FCN) is an effective tool for classifying psychiatric disorders, and represents the cross-correlation of the regional blood oxygenation level dependent signal. However, the FCN is often incomplete because it suffers from missing and spurious edges. To accurately classify psychiatric disorders versus healthy controls with an incomplete FCN, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's disease, schizophrenia, and attention deficit hyperactivity disorder. The experimental results show that our method not only significantly improves classification accuracy but also efficiently reconstructs the incomplete FCN.
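
    The sketch below mimics this pipeline on synthetic graphs: 'repair' each network with a simple common-neighbour link predictor, use node clustering coefficients as features, and train a boosted classifier. It is a simplification under stated assumptions (random graphs in place of FCNs, and a single AdaBoost model rather than one weak classifier per FCN).

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    def clustering_features(G, n_nodes):
        """Node-wise clustering coefficients used as the feature vector of one graph."""
        cc = nx.clustering(G)
        return np.array([cc.get(i, 0.0) for i in range(n_nodes)])

    def repair(G, n_add=5):
        """Crude 'link prediction' repair: add the highest-scoring non-edges by
        common-neighbour count (a stand-in for the paper's predictor)."""
        scores = sorted(((len(list(nx.common_neighbors(G, u, v))), u, v)
                         for u, v in nx.non_edges(G)), reverse=True)
        H = G.copy()
        H.add_edges_from((u, v) for _, u, v in scores[:n_add])
        return H

    # Synthetic "subjects": denser random graphs for one class, sparser for the
    # other, each with a few edges deleted to mimic missing connections.
    n_nodes, graphs, labels = 30, [], []
    for label, p in [(0, 0.15), (1, 0.25)]:
        for _ in range(40):
            G = nx.gnp_random_graph(n_nodes, p, seed=int(rng.integers(0, 1_000_000)))
            G.remove_edges_from(list(G.edges())[:3])        # simulate missing edges
            graphs.append(repair(G))
            labels.append(label)

    X = np.array([clustering_features(G, n_nodes) for G in graphs])
    y = np.array(labels)
    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```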

  9. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    PubMed

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  11. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
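
    Both records above describe a global intensity transform built from the incomplete Beta function. The sketch below applies such a transform with hand-picked shape parameters via scipy.special.betainc; the CS-PSO search that would tune the parameters is omitted, and the synthetic image is illustrative.

    ```python
    import numpy as np
    from scipy.special import betainc

    def incomplete_beta_enhance(image, a, b):
        """Global contrast transform using the regularised incomplete Beta function.
        `a` and `b` are the shape parameters that the optimizer would tune; here
        they are simply supplied by hand."""
        lo, hi = image.min(), image.max()
        u = (image.astype(float) - lo) / (hi - lo + 1e-12)   # normalise to [0, 1]
        v = betainc(a, b, u)                                  # S-shaped transfer curve
        return (v * 255).astype(np.uint8)

    # Example: stretch a synthetic low-contrast image.
    rng = np.random.default_rng(0)
    low_contrast = (rng.random((64, 64)) * 40 + 100).astype(np.uint8)
    enhanced = incomplete_beta_enhance(low_contrast, a=2.0, b=2.0)
    print(low_contrast.std(), enhanced.std())
    ```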

  12. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    NASA Astrophysics Data System (ADS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory. 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00, specificity 0.69 (p<0.001 different to gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00, specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators for which measurable parameters were evaluated and results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.

  13. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  14. Graph Embedding Techniques for Bounding Condition Numbers of Incomplete Factor Preconditioning

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen

    1997-01-01

    We extend graph embedding techniques for bounding the spectral condition number of preconditioned systems involving symmetric, irreducibly diagonally dominant M-matrices to systems where the preconditioner is not diagonally dominant. In particular, this allows us to bound the spectral condition number when the preconditioner is based on an incomplete factorization. We provide a review of previous techniques, describe our extension, and give examples both of a bound for a model problem and of ways in which our techniques give an intuitive way of looking at incomplete factor preconditioners.

  15. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    PubMed

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on his/her private information to maximize his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected ones, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.

  16. Case finding with incomplete administrative data: observations on playing with less than a full deck.

    PubMed

    Holmes, Ann M; Ackermann, Ronald T; Katz, Barry P; Downs, Stephen M; Inui, Thomas S

    2010-12-01

    Capacity constraints and efficiency considerations require that disease management programs identify patients most likely to benefit from intervention. Predictive modeling with available administrative data has been used as a strategy to match patients with appropriate interventions. Administrative data, however, can be plagued by problems of incompleteness and delays in processing. In this article, we examine the effects of these problems on the effectiveness of using administrative data to identify suitable candidates for disease management, and we evaluate various proposed solutions. We build prospective models using regression analysis and evaluate the resulting stratification algorithms using R² statistics, areas under receiver operator characteristic curves, and cost concentration ratios. We find delays in receipt of data reduce the effectiveness of the stratification algorithm, but the degree of compromise depends on what proportion of the population is targeted for intervention. Surprisingly, we find that supplementing partial data with a longer panel of more outdated data produces algorithms that are inferior to algorithms based on a shorter window of more recent data. Demographic data add little to algorithms that include prior claims data, and are an inadequate substitute when claims data are unavailable. Supplementing demographic data with additional information on self-reported health status improves the stratification performance only slightly and only when disease management is targeted to the highest risk patients. We conclude that the extra costs associated with surveying patients for health status information or retrieving older claims data cannot be justified given the lack of evidence that either improves the effectiveness of the stratification algorithm.
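
    A toy sketch of the kind of prospective stratification model and evaluation metrics discussed above (R², ROC area, cost concentration ratio) follows. The simulated claims data and the in-sample evaluation are assumptions; in practice the metrics would be computed on a held-out sample.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5000
    prior_cost = rng.gamma(shape=1.5, scale=2000, size=n)        # last period's claims
    age = rng.integers(18, 90, size=n)
    future_cost = 0.6 * prior_cost + 30 * age + rng.gamma(1.2, 1500, size=n)

    # Prospective model: predict next-period cost from available administrative data.
    X = np.column_stack([prior_cost, age])
    model = LinearRegression().fit(X, future_cost)
    score = model.predict(X)

    # Evaluate the stratification: R^2, ROC area for flagging top-decile spenders,
    # and the cost concentration ratio (share of total cost captured by the
    # top 10% of ranked patients).
    r2 = model.score(X, future_cost)
    high_cost = (future_cost >= np.quantile(future_cost, 0.9)).astype(int)
    auc = roc_auc_score(high_cost, score)
    top = np.argsort(score)[::-1][: n // 10]
    concentration = future_cost[top].sum() / future_cost.sum()
    print(f"R^2={r2:.2f}  AUC={auc:.2f}  cost concentration (top 10%)={concentration:.2f}")
    ```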

  17. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.

    PubMed

    Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong

    2017-10-01

    Many methods address the recognition of complete face images. In real applications, however, the images to be recognized are usually incomplete, which makes recognition more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to overcome this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with a truncated nuclear norm regularization solution, and then extracts low-rank parts of the recovered images as filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high recognition rate for heavily corrupted images and performs well and efficiently, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than other robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Involvement of Receptor Activator of Nuclear Factor-κB Ligand (RANKL)-induced Incomplete Cytokinesis in the Polyploidization of Osteoclasts.

    PubMed

    Takegahara, Noriko; Kim, Hyunsoo; Mizuno, Hiroki; Sakaue-Sawano, Asako; Miyawaki, Atsushi; Tomura, Michio; Kanagawa, Osami; Ishii, Masaru; Choi, Yongwon

    2016-02-12

    Osteoclasts are specialized polyploid cells that resorb bone. Upon stimulation with receptor activator of nuclear factor-κB ligand (RANKL), myeloid precursors commit to becoming polyploid, largely via cell fusion. Polyploidization of osteoclasts is necessary for their bone-resorbing activity, but the mechanisms by which polyploidization is controlled remain to be determined. Here, we demonstrated that in addition to cell fusion, incomplete cytokinesis also plays a role in osteoclast polyploidization. In in vitro cultured osteoclasts derived from mice expressing the fluorescent ubiquitin-based cell cycle indicator (Fucci), RANKL induced polyploidy by incomplete cytokinesis as well as cell fusion. Polyploid cells generated by incomplete cytokinesis had the potential to subsequently undergo cell fusion. Nuclear polyploidy was also observed in osteoclasts in vivo, suggesting the involvement of incomplete cytokinesis in physiological polyploidization. Furthermore, RANKL-induced incomplete cytokinesis was reduced by inhibition of Akt, resulting in impaired multinucleated osteoclast formation. Taken together, these results reveal that RANKL-induced incomplete cytokinesis contributes to polyploidization of osteoclasts via Akt activation. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  19. Involvement of Receptor Activator of Nuclear Factor-κB Ligand (RANKL)-induced Incomplete Cytokinesis in the Polyploidization of Osteoclasts*

    PubMed Central

    Takegahara, Noriko; Kim, Hyunsoo; Mizuno, Hiroki; Sakaue-Sawano, Asako; Miyawaki, Atsushi; Tomura, Michio; Kanagawa, Osami; Ishii, Masaru; Choi, Yongwon

    2016-01-01

    Osteoclasts are specialized polyploid cells that resorb bone. Upon stimulation with receptor activator of nuclear factor-κB ligand (RANKL), myeloid precursors commit to becoming polyploid, largely via cell fusion. Polyploidization of osteoclasts is necessary for their bone-resorbing activity, but the mechanisms by which polyploidization is controlled remain to be determined. Here, we demonstrated that in addition to cell fusion, incomplete cytokinesis also plays a role in osteoclast polyploidization. In in vitro cultured osteoclasts derived from mice expressing the fluorescent ubiquitin-based cell cycle indicator (Fucci), RANKL induced polyploidy by incomplete cytokinesis as well as cell fusion. Polyploid cells generated by incomplete cytokinesis had the potential to subsequently undergo cell fusion. Nuclear polyploidy was also observed in osteoclasts in vivo, suggesting the involvement of incomplete cytokinesis in physiological polyploidization. Furthermore, RANKL-induced incomplete cytokinesis was reduced by inhibition of Akt, resulting in impaired multinucleated osteoclast formation. Taken together, these results reveal that RANKL-induced incomplete cytokinesis contributes to polyploidization of osteoclasts via Akt activation. PMID:26670608

  20. Pulmonary Lobe Segmentation with Probabilistic Segmentation of the Fissures and a Groupwise Fissure Prior

    PubMed Central

    Bragman, Felix J.S.; McClelland, Jamie R.; Jacob, Joseph; Hurst, John R.; Hawkes, David J.

    2017-01-01

    A fully automated, unsupervised lobe segmentation algorithm is presented based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation segments the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior to drive non-parametric surface fitting to obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showed general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology. We achieved the highest score of 0.884 of the automated algorithms. Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior in segmenting the lobes in regions of grossly incomplete fissures. PMID:28436850

  1. Dental cone-beam CT reconstruction from limited-angle view data based on compressed-sensing (CS) theory for fast, low-dose X-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, Uikyu; Cho, Hyosung; Lee, Minsik; Oh, Jieun; Park, Yeonok; Hong, Daeki; Park, Cheulkyu; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2014-06-01

    Recently, reducing radiation doses has become an issue of critical importance in the broader radiological community. As a possible technical approach, especially, in dental cone-beam computed tomography (CBCT), reconstruction from limited-angle view data (< 360°) would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction algorithm based on compressed-sensing (CS) theory for the scan geometry and performed systematic simulation works to investigate the image characteristics. We also performed experimental works by applying the algorithm to a commercially-available dental CBCT system to demonstrate its effectiveness for image reconstruction in incomplete data problems. We successfully reconstructed CBCT images with incomplete projections acquired at selected scan angles of 120, 150, 180, and 200° with a fixed angle step of 1.2° and evaluated the reconstruction quality quantitatively. Both simulation and experimental demonstrations of the CS-based reconstruction from limited-angle view data show that the algorithm can be applied directly to current dental CBCT systems for reducing the imaging doses and further improving the image quality.

  2. Fast angular synchronization for phase retrieval via incomplete information

    NASA Astrophysics Data System (ADS)

    Viswanathan, Aditya; Iwen, Mark

    2015-08-01

    We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j, k ∈ {1, ..., d}, and where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem - where we recover an unknown complex vector from phaseless (or magnitude) measurements.
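
    The eigenvector-based synchronization step can be sketched in a few lines of NumPy: assemble the available normalized phase-difference measurements into a Hermitian matrix and read the phases off its leading eigenvector. The noiseless measurements and the sampling pattern below are illustrative, and a dense eigendecomposition is used instead of the paper's linear-time solver.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 200
    theta = rng.uniform(0, 2 * np.pi, d)
    z = np.exp(1j * theta)                      # unknown phases to recover

    # Highly incomplete, normalised phase-difference measurements x_j x_k^* / |x_j x_k^*|
    # on a sparse random set of pairs, assembled into a Hermitian matrix H.
    H = np.zeros((d, d), dtype=complex)
    for j in range(d):
        for k in rng.choice(d, size=8, replace=False):     # only a few pairs per row
            if j != k:
                H[j, k] = z[j] * np.conj(z[k])
                H[k, j] = np.conj(H[j, k])

    # Eigenvector-based synchronization: the leading eigenvector of H aligns,
    # entrywise, with the unknown phase vector (up to one global rotation).
    w, V = np.linalg.eigh(H)
    v = V[:, -1]
    est = v / np.abs(v)
    global_rot = np.vdot(est, z) / np.abs(np.vdot(est, z))   # remove the global phase
    err = np.linalg.norm(est * global_rot - z) / np.sqrt(d)
    print("mean phase error:", err)
    ```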

  3. Analytical procedures for estimating structural response to acoustic fields generated by advanced launch systems, phase 2

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Lin, Y. K.; Zhu, Li-Ping; Fang, Jian-Jie; Cai, G. Q.

    1994-01-01

    This report supplements a previous report of the same title submitted in June, 1992. It summarizes additional analytical techniques which have been developed for predicting the response of linear and nonlinear structures to noise excitations generated by large propulsion power plants. The report is divided into nine chapters. The first two deal with incomplete knowledge of boundary conditions of engineering structures. The incomplete knowledge is characterized by a convex set, and its diagnosis is formulated as a multi-hypothesis discrete decision-making algorithm with attendant criteria of adaptive termination.

  4. Dynamic pattern matcher using incomplete data

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G. (Inventor); Wang, Lui (Inventor)

    1993-01-01

    This invention relates generally to pattern matching systems, and more particularly to a method for dynamically adapting the system to enhance the effectiveness of a pattern match. Apparatus and methods for calculating the similarity between patterns are known. There is considerable interest, however, in the storage and retrieval of data, particularly, when the search is called or initiated by incomplete information. For many search algorithms, a query initiating a data search requires exact information, and the data file is searched for an exact match. Inability to find an exact match thus results in a failure of the system or method.

  5. Predicting missing links and identifying spurious links via likelihood analysis

    NASA Astrophysics Data System (ADS)

    Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun

    2016-03-01

    Real network data are often incomplete and noisy, which is where link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms.

  6. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
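
    As a reminder of what Fisher's method of scoring looks like outside the tomographic setting, the sketch below applies the scoring update beta <- beta + (X^T W X)^{-1} X^T (y - mu) to a simple Poisson log-linear model; it does not implement the Jacobi or Gauss-Seidel tomography variants discussed in the abstract, and the simulated data are an assumption.

    ```python
    import numpy as np

    def fisher_scoring_poisson(X, y, n_iter=25, tol=1e-8):
        """Fisher scoring (equivalently IRLS) for a Poisson log-linear model."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)
            score = X.T @ (y - mu)                    # gradient of the log-likelihood
            fisher_info = X.T @ (mu[:, None] * X)     # expected information X^T W X
            step = np.linalg.solve(fisher_info, score)
            beta += step
            if np.linalg.norm(step) < tol:
                break
        return beta

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
    beta_true = np.array([0.5, 1.0, -0.7])
    y = rng.poisson(np.exp(X @ beta_true))
    print(fisher_scoring_poisson(X, y))
    ```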

  7. Predicting missing links and identifying spurious links via likelihood analysis

    PubMed Central

    Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun

    2016-01-01

    Real network data are often incomplete and noisy, which is where link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms. PMID:26961965

  8. Primary hip and knee replacement surgery: Ontario criteria for case selection and surgical priority.

    PubMed Central

    Naylor, C D; Williams, J I

    1996-01-01

    OBJECTIVES--To develop, from simple clinical factors, criteria to identify appropriate patients for referral to a surgeon for consideration for arthroplasty, and to rank them in the queue once surgery is agreed. DESIGN--Delphi process, with a panel including orthopaedic surgeons, rheumatologists, general practitioners, epidemiologists, and physiotherapists, who rated 120 case scenarios for appropriateness and 42 for waiting list priority. Scenarios incorporated combinations of relevant clinical factors. It was assumed that queues should be organised not simply by chronology but by clinical and social impact of delayed surgery. The panel focused on information obtained from clinical histories, to ensure the utility of the guidelines in practice. Relevant high quality research evidence was limited. SETTING--Ontario, Canada. MAIN MEASURES--Appropriateness ratings on a 7-point scale, and urgency rankings on a 4-point scale keyed to specific waiting times. RESULTS--Despite incomplete evidence panellists agreed on ratings in 92.5% of appropriateness and 73.8% of urgency scenarios versus 15% and 18% agreement expected by chance, respectively. Statistically validated algorithms in decision tree form, which should permit rapid estimation of urgency or appropriateness in practice, were compiled by recursive partitioning. Rating patterns and algorithms were also used to make brief written guidelines on how clinical factors affect appropriateness and urgency of surgery. A summary score was provided for each case scenario; scenarios could then be matched to chart audit results, with scoring for quality management. CONCLUSIONS--These algorithms and criteria can be used by managers or practitioners to assess appropriateness of referral for hip or knee replacement and relative rankings of patients in the queue for surgery. PMID:10157268

  9. Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, J.; Levin, S.; Anderson, C. H.

    2004-01-01

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).

  10. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
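
    The proposed Swarm-based Chemical Reaction Optimization is not available off the shelf, so the sketch below uses SciPy's stock differential evolution purely to illustrate the underlying task: fitting model parameters to noisy, incomplete observations by minimizing a sum-of-squares objective. The decay model, dropout rate, and noise level are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy model: first-order decay y(t) = A * exp(-k * t), with parameters (A, k)
    # to be recovered from noisy, incomplete observations.
    rng = np.random.default_rng(0)
    t_full = np.linspace(0, 10, 50)
    keep = rng.random(t_full.size) > 0.4          # drop roughly 40% of the time points
    t, A_true, k_true = t_full[keep], 3.0, 0.5
    y_obs = A_true * np.exp(-k_true * t) + rng.normal(scale=0.1, size=t.size)

    def sse(params):
        A, k = params
        return np.sum((y_obs - A * np.exp(-k * t)) ** 2)

    result = differential_evolution(sse, bounds=[(0, 10), (0, 5)], seed=0)
    print("estimated (A, k):", result.x)
    ```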

  11. Group prioritisation with unknown expert weights in incomplete linguistic context

    NASA Astrophysics Data System (ADS)

    Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan

    2017-09-01

    In this paper, we study a group prioritisation problem in situations when the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregating process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature of group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can solve incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is conducted to validate the effectiveness and robustness of the RE approach.

  12. Reducing Unnecessary Accumulation of Incomplete Grades: A Quality Improvement Project

    ERIC Educational Resources Information Center

    Domocmat, Maria Carmela L.

    2015-01-01

    It has been noted that there is an increasing percentage of students accumulating incomplete (INC) grades. This paper aims to identify the factors that contribute to the accumulation of incomplete grades of students and, utilizing the best practices of various universities worldwide, it intends to recommend solutions in limiting the number of…

  13. Comprehensive Angular Response Study of LLNL Panasonic Dosimeter Configurations and Artificial Intelligence Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, D. K.

    In April of 2016, the Lawrence Livermore National Laboratory External Dosimetry Program underwent a Department of Energy Laboratory Accreditation Program (DOELAP) on-site assessment. The assessment reported a concern that the study performed in 2013 Angular Dependence Study Panasonic UD-802 and UD-810 Dosimeters LLNL Artificial Intelligence Algorithm was incomplete. Only the responses at ±60° and 0° were evaluated and independent data from dosimeters was not used to evaluate the algorithm. Additionally, other configurations of LLNL dosimeters were not considered in this study. This includes nuclear accident dosimeters (NAD) which are placed in the wells surrounding the TLD in the dosimeter holder.

  14. State estimation with incomplete nonlinear constraint

    NASA Astrophysics Data System (ADS)

    Huang, Yuan; Wang, Xueying; An, Wei

    2017-10-01

    A problem of state estimation with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move along curved roads; if the width of the road is neglected, the road itself can be treated as a constraint, and since the positions of the sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is addressed. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory, and the fitting problem is transformed into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, typical nonlinear constraint methods proposed in recent work can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
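
    A minimal version of the curve-fitting step can be sketched with an algebraic least-squares circle fit (a circle being the simplest second-order curve; the entry itself fits an ellipse). The simulated road geometry and noise level are assumptions.

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 + D x + E y + F = 0."""
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        r = np.sqrt(cx**2 + cy**2 - F)
        return cx, cy, r

    # Noisy positions of a target moving along a curved road segment.
    rng = np.random.default_rng(0)
    phi = np.linspace(0.2, 1.1, 40)
    x = 500 + 300 * np.cos(phi) + rng.normal(scale=3, size=phi.size)
    y = 200 + 300 * np.sin(phi) + rng.normal(scale=3, size=phi.size)
    print(fit_circle(x, y))    # estimated centre and radius used to build the constraint
    ```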

  15. TOPSIS-based consensus model for group decision-making with incomplete interval fuzzy preference relations.

    PubMed

    Liu, Fang; Zhang, Wei-Guo

    2014-08-01

    Due to the vagueness of real-world environments and the subjective nature of human judgments, it is natural for experts to estimate their judgements by using incomplete interval fuzzy preference relations. In this paper, based on the technique for order preference by similarity to ideal solution method, we present a consensus model for group decision-making (GDM) with incomplete interval fuzzy preference relations. To do this, we first define a new consistency measure for incomplete interval fuzzy preference relations. Second, a goal programming model is proposed to estimate the missing interval preference values and it is guided by the consistency property. Third, an ideal interval fuzzy preference relation is constructed by using the induced ordered weighted averaging operator, where the associated weights of characterizing the operator are based on the defined consistency measure. Fourth, a similarity degree between complete interval fuzzy preference relations and the ideal one is defined. The similarity degree is related to the associated weights, and used to aggregate the experts' preference relations in such a way that more importance is given to ones with the higher similarity degree. Finally, a new algorithm is given to solve the GDM problem with incomplete interval fuzzy preference relations, which is further applied to partnership selection in formation of virtual enterprises.

  16. Factors Affecting Formation of Incomplete Vi Antibody in Mice

    PubMed Central

    Gaines, Sidney; Currie, Julius A.; Tully, Joseph G.

    1965-01-01

    Gaines, Sidney (Walter Reed Army Institute of Research, Washington, D.C.), Julius A. Currie, and Joseph G. Tully. Factors affecting formation of incomplete Vi antibody in mice. J. Bacteriol. 90:635–642. 1965.—Single immunizing doses of purified Vi antigen elicited complete and incomplete Vi antibodies in BALB/c mice, but only incomplete antibody in Cinnamon mice. Three of six other mouse strains tested responded like BALB/c mice; the remaining three, like Cinnamon mice. Varying the quantity of antigen injected or the route of administration failed to stimulate the production of detectable complete Vi antibody in Cinnamon mice. Such antibody was evoked in these animals by multiple injections of Vi antigen or by inoculating them with Vi-containing bacilli or Vi-coated erythrocytes. The early protection afforded by serum from Vi-immunized BALB/c mice coincided with the appearance of incomplete Vi antibody, 1 day prior to the advent of complete antibody. Persistence of incomplete as well as complete antibody in the serum of immunized mice was demonstrated for at least 56 days after injection of 10 μg of Vi antigen. Incomplete Vi antibody was shown to have blocking ability, in vitro bactericidal activity, and the capability of protecting mice against intracerebral as well as intraperitoneal challenge with virulent typhoid bacilli. Production of incomplete and complete Vi antibodies was adversely affected by immunization with partially depolymerized Vi antigens. PMID:16562060

  17. Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas M.; Ricci, Anthony R.; Dahlbom, Magnus; Cherry, Simon R.; Hoffman, Edward T.

    1993-07-01

    The problem of excessive computational time in 3D Positron Emission Tomography (3D PET) reconstruction is defined, and we present an approach for solving this problem through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total body procedure would require 80 hours and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed from the integration of board level products from multiple vendors. The system achieves its computational performance through the use of 6U VME four i860 processor boards, the processor boards from five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR, FAst VOlume Reconstructor, that promises a substantial speed improvement is adopted. Preliminary results from parallelizing FAVOR are utilized in formulating architectural improvements for this problem. In summary, we are addressing the problem of excessive computational time in 3D PET image reconstruction, through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set that is produced by current PET systems.

  18. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
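
    The sketch below shows only the percentile-bootstrap confidence-interval mechanics on a toy 2 × 2 table, resampling from the fully classified counts; it does not implement the paper's valid joint sampling distribution, the supplemental-margin counts, or the Fisher scoring maximum likelihood estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Fully classified 2x2 counts (rows: exposure, columns: outcome); any supplemental
    # counts with a missing row or column classification are ignored in this sketch.
    counts = np.array([[30, 20],
                       [10, 40]])
    n = counts.sum()

    def odds_ratio(tbl):
        return (tbl[0, 0] * tbl[1, 1]) / (tbl[0, 1] * tbl[1, 0])

    # Percentile bootstrap: resample cell memberships from the observed proportions
    # and recompute the statistic.
    probs = counts.ravel() / n
    boot = []
    for _ in range(2000):
        resampled = rng.multinomial(n, probs).reshape(2, 2)
        if resampled[0, 1] == 0 or resampled[1, 0] == 0:
            continue                       # skip degenerate resamples
        boot.append(odds_ratio(resampled))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"odds ratio {odds_ratio(counts):.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
    ```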

  19. A Sequential Quadratic Programming Algorithm Using an Incomplete Solution of the Subproblem

    DTIC Science & Technology

    1990-09-01

    Murray* and Francisco J. Prieto†. *Systems Optimization Laboratory, Department of Operations Research, Stanford University. †Dept. de Automática, Ingeniería Electrónica e Informática Industrial, E.T.S. Ingenieros Industriales, Universidad Politécnica, Madrid. Technical Report SOL 90-12, September 1990.

  20. Barriers to Specialty Care and Specialty Referral Completion in the Community Health Center Setting

    PubMed Central

    Zuckerman, Katharine E.; Perrin, James M.; Hobrecker, Karin; Donelan, Karen

    2013-01-01

    Objective To assess the frequency of barriers to specialty care and to assess which barriers are associated with an incomplete specialty referral (not attending a specialty visit when referred by a primary care provider) among children seen in community health centers. Study design Two months after their child’s specialty referral, 341 parents completed telephone surveys assessing whether a specialty visit was completed and whether they experienced any of 10 barriers to care. Family/community barriers included difficulty leaving work, obtaining childcare, obtaining transportation, and inadequate insurance. Health care system barriers included getting appointments quickly, understanding doctors and nurses, communicating with doctors’ offices, locating offices, accessing interpreters, and inconvenient office hours. We calculated barrier frequency and total barriers experienced. Using logistic regression, we assessed which barriers were associated with incomplete referral, and whether experiencing ≥4 barriers was associated with incomplete referral. Results A total of 22.9% of families experienced incomplete referral. 42.0% of families encountered 1 or more barriers. The most frequent barriers were difficulty leaving work, obtaining childcare, and obtaining transportation. On multivariate analysis, difficulty getting appointments quickly, difficulty finding doctors’ offices, and inconvenient office hours were associated with incomplete referral. Families experiencing ≥4 barriers were more likely than those experiencing ≤3 barriers to have incomplete referral. Conclusion Barriers to specialty care were common and associated with incomplete referral. Families experiencing many barriers had greater risk of incomplete referral. Improving family/community factors may increase satisfaction with specialty care; however, improving health system factors may be the best way to reduce incomplete referrals. PMID:22929162

  1. A Review On Missing Value Estimation Using Imputation Algorithm

    NASA Astrophysics Data System (ADS)

    Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina

    2017-09-01

    The presence of missing values in a data set has always been a major problem for precise prediction. Methods for imputing missing values need to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed as countermeasures to the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on the use of global or local information from the data set for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are also described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives the reader a better understanding of trends in imputation methods.

  2. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
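
    The loop between the object and projection domains can be sketched roughly as follows. This is a hedged illustration rather than the authors' code: it assumes scikit-image's radon/iradon as the forward-projection and convolution-method steps, uses nonnegativity as a stand-in object-domain constraint, and simply caps the number of iterations since the abstract notes the scheme does not converge; sino_meas, mask, and theta are illustrative names for the measured sinogram, the observed-ray mask, and the projection angles.

```python
# Sketch of iteration between the object and projection (Radon) domains.
import numpy as np
from skimage.transform import radon, iradon

def iterative_convolution_reconstruction(sino_meas, mask, theta, n_iter=10):
    """sino_meas: measured sinogram (detectors x angles); mask: True where a ray was observed."""
    sino = np.where(mask, sino_meas, 0.0)        # start with the blocked rays set to zero
    recon = None
    for _ in range(n_iter):                      # fixed iteration cap; the scheme need not converge
        recon = iradon(sino, theta=theta)        # convolution-method (filtered backprojection) step
        recon = np.clip(recon, 0.0, None)        # simple object-domain constraint (nonnegativity)
        sino = radon(recon, theta=theta)         # forward projection back to the Radon domain
        sino = np.where(mask, sino_meas, sino)   # re-impose the rays that were actually measured
    return recon
```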

  3. ILUBCG2-11: Solution of 11-banded nonsymmetric linear equation systems by a preconditioned biconjugate gradient routine

    NASA Astrophysics Data System (ADS)

    Chen, Y.-M.; Koniges, A. E.; Anderson, D. V.

    1989-10-01

    The biconjugate gradient method (BCG) provides an attractive alternative to the usual conjugate gradient algorithms for the solution of sparse systems of linear equations with nonsymmetric and indefinite matrix operators. A preconditioned algorithm is given whose form resembles the incomplete L-U conjugate gradient scheme (ILUCG2) presented previously. Although the BCG scheme requires the storage of two additional vectors, it converges in significantly fewer iterations (often half as many), while the number of calculations per iteration remains essentially the same.
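
    A generic ILU-preconditioned biconjugate gradient solve can be reproduced with SciPy along the following lines; this is a minimal sketch of the same idea, not the ILUBCG2-11 routine itself, and the banded test matrix is purely illustrative.

```python
# Incomplete-LU preconditioned biconjugate gradients for a sparse nonsymmetric system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Illustrative nonsymmetric banded matrix (diagonally dominant, so the solve is well posed)
A = sp.diags([-1.0, 2.5, -1.3], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # use ILU solves as the preconditioner

x, info = spla.bicg(A, b, M=M)                       # preconditioned biconjugate gradients
print("converged" if info == 0 else f"bicg info = {info}")
```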

  4. Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.

    1980-01-01

    The problem of selecting the optimal filtering algorithm and the optimal composition of the measurements is examined under the assumption that the precise values of the mathematical expectation and the error covariance matrix are unknown. It is demonstrated that the optimal filtering algorithm may be used to refine some parameters (for example, the parameters of the gravitational field) after a preliminary determination of the orbital elements by a simpler processing method (for example, the method of least squares).

  5. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, W.; Prieto, F.J.

    1993-05-01

    We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.

  6. A Novel Discrete Differential Evolution Algorithm for the Vehicle Routing Problem in B2C E-Commerce

    NASA Astrophysics Data System (ADS)

    Xia, Chao; Sheng, Ying; Jiang, Zhong-Zhong; Tan, Chunqiao; Huang, Min; He, Yuanjian

    2015-12-01

    In this paper, a novel discrete differential evolution (DDE) algorithm is proposed to solve the vehicle routing problems (VRP) in B2C e-commerce, in which VRP is modeled by the incomplete graph based on the actual urban road system. First, a variant of classical VRP is described and a mathematical programming model for the variant is given. Second, the DDE is presented, where individuals are represented as the sequential encoding scheme, and a novel reparation operator is employed to repair the infeasible solutions. Furthermore, a FLOYD operator for dealing with the shortest route is embedded in the proposed DDE. Finally, an extensive computational study is carried out in comparison with the predatory search algorithm and genetic algorithm, and the results show that the proposed DDE is an effective algorithm for VRP in B2C e-commerce.
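
    The "FLOYD operator" mentioned above relies on all-pairs shortest routes, for which the Floyd-Warshall recurrence is the standard choice. A minimal sketch, with an illustrative three-node road matrix (math.inf marks missing roads in the incomplete graph):

```python
# Floyd-Warshall all-pairs shortest routes on an incomplete road graph.
import math

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]            # copy so the input matrix is not modified
    for k in range(n):                      # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

roads = [[0, 4, math.inf],
         [4, 0, 2],
         [math.inf, 2, 0]]
print(floyd_warshall(roads))   # [[0, 4, 6], [4, 0, 2], [6, 2, 0]]
```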

  7. Matching incomplete time series with dynamic time warping: an algorithm and an application to post-stroke rehabilitation.

    PubMed

    Tormene, Paolo; Giorgino, Toni; Quaglini, Silvana; Stefanelli, Mario

    2009-01-01

    The purpose of this study was to assess the performance of a real-time ("open-end") version of the dynamic time warping (DTW) algorithm for the recognition of motor exercises. Given a possibly incomplete input stream of data and a reference time series, the open-end DTW algorithm computes both the size of the prefix of the reference which is best matched by the input, and the dissimilarity between the matched portions. The algorithm was used to provide real-time feedback to neurological patients undergoing motor rehabilitation. We acquired a dataset of multivariate time series from a sensorized long-sleeve shirt which contains 29 strain sensors distributed on the upper limb. Seven typical rehabilitation exercises were recorded in several variations, both correctly and incorrectly executed, and at various speeds, totaling a data set of 840 time series. Nearest-neighbour classifiers were built according to the outputs of open-end DTW alignments and their global counterparts on exercise pairs. The classifiers were also tested on well-known public datasets from heterogeneous domains. Nonparametric tests show that (1) on full time series the two algorithms achieve the same classification accuracy (p-value = 0.32); (2) on partial time series, classifiers based on open-end DTW have a far higher accuracy (kappa = 0.898 versus kappa = 0.447; p < 10^-5); and (3) the prediction of the matched fraction follows closely the ground truth (root mean square < 10%). The results hold for the motor rehabilitation and the other datasets tested, as well. The open-end variant of the DTW algorithm is suitable for the classification of truncated quantitative time series, even in the presence of noise. Early recognition and accurate class prediction can be achieved, provided that enough variance is available over the time span of the reference. Therefore, the proposed technique expands the use of DTW to a wider range of applications, such as real-time biofeedback systems.
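
    The open-end idea can be illustrated with a minimal dynamic-programming sketch: the whole (possibly truncated) input is aligned against every prefix of the reference, and the best-matched prefix length and its dissimilarity are returned. This is a simplified one-dimensional illustration with illustrative names, not the authors' implementation.

```python
# Open-end DTW: free choice of the reference prefix at the end of the alignment.
import numpy as np

def open_end_dtw(x, ref):
    n, m = len(x), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - ref[j - 1])        # local distance (1-D signals here)
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    j_best = int(np.argmin(D[n, 1:])) + 1            # open end: best reference prefix
    return j_best, float(D[n, j_best])               # matched prefix length, dissimilarity

prefix_len, dissim = open_end_dtw([0.0, 0.5, 1.0], [0.0, 0.4, 0.9, 1.5, 2.0])
print(prefix_len, dissim)    # expects a prefix of 3 reference samples
```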

  8. Simultaneous Tensor Decomposition and Completion Using Factor Priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark

    2013-08-27

    Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  9. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data, with dark image subtraction and gain factors applied but not full radiometric calibration, could be used. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.

  10. On the inherent competition between valid and spurious inductive inferences in Boolean data

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and a sparse generalized algebraic normal form of the observed variables, respectively, and we evaluate their performance numerically.
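
    As a rough illustration of synthesizing a sparse disjunctive normal form from observations, the sketch below enumerates small candidate conjunctions, discards any that fire on a negative example, and greedily adds terms that cover the most still-uncovered positives. It is a generic greedy-cover sketch under these assumptions, not the authors' exact algorithm.

```python
# Greedy synthesis of a sparse DNF consistent with Boolean observations.
from itertools import combinations

def covers(term, row):
    # term: tuple of (index, value) literals; row: tuple of 0/1 covariates
    return all(row[i] == v for i, v in term)

def greedy_dnf(X, y, max_literals=2):
    pos = [x for x, t in zip(X, y) if t == 1]
    neg = [x for x, t in zip(X, y) if t == 0]
    n = len(X[0])
    literals = [(i, v) for i in range(n) for v in (0, 1)]
    candidates = [c for k in range(1, max_literals + 1)
                  for c in combinations(literals, k)
                  if not any(covers(c, x) for x in neg)]      # keep terms consistent with negatives
    uncovered, dnf = list(pos), []
    while uncovered and candidates:
        best = max(candidates, key=lambda c: sum(covers(c, x) for x in uncovered))
        if sum(covers(best, x) for x in uncovered) == 0:
            break                                             # nothing left to gain
        dnf.append(best)
        uncovered = [x for x in uncovered if not covers(best, x)]
    return dnf

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]                       # response is the OR of the two covariates
print(greedy_dnf(X, y))                # e.g. [((0, 1),), ((1, 1),)]
```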

  11. Methods to assess an exercise intervention trial based on 3-level functional data.

    PubMed

    Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J

    2015-10-01

    Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study.

  12. Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzmack, M.A.

    1993-07-01

    The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm produces reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range.
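
    Assuming the psychrometer output is available as a sampled voltage curve, the reduction can be sketched as: locate a plateau (the flattest window of the curve), fit a line to it, and extrapolate that line back to the start of relaxation to obtain the delta intercept. The window size, the synthetic curve, and the function names below are illustrative; the conditioning details of the USGS algorithm are not reproduced.

```python
# Plateau detection plus linear extrapolation back to the start of relaxation.
import numpy as np

def delta_intercept(t, v, t_relax_start, window=20):
    t, v = np.asarray(t, float), np.asarray(v, float)
    best, best_slope = None, np.inf
    for i in range(0, len(t) - window):
        sl, ic = np.polyfit(t[i:i + window], v[i:i + window], 1)   # local linear fit
        if abs(sl) < abs(best_slope):
            best, best_slope = (sl, ic), sl                        # flattest window = plateau
    slope, intercept = best
    return slope * t_relax_start + intercept      # extrapolate the plateau line back

# Illustrative use: a noisy exponential relaxation toward a plateau near 25 microvolts
t = np.linspace(0.0, 60.0, 300)
v = 25.0 + 10.0 * np.exp(-t / 5.0) + np.random.normal(0.0, 0.1, t.size)
print(delta_intercept(t, v, t_relax_start=0.0))
```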

  13. Dehydration upon admission is a risk factor for incomplete recovery of renal function in children with haemolytic uremic syndrome.

    PubMed

    Ojeda, José M; Kohout, Isolda; Cuestas, Eduardo

    2013-01-01

    Haemolytic uremic syndrome (HUS) is the most common cause of acute renal failure and the second leading cause of chronic renal failure in children. The pre-admission factors that affect incomplete recovery of renal function are poorly understood. The objective was to analyse the risk factors present prior to hospitalisation that determine incomplete recovery of renal function in children with HUS. A retrospective case-control study was conducted, considering age, sex, duration of diarrhoea, bloody stools, vomiting, fever, dehydration, previous use of antibiotics, and incomplete recovery of renal function (proteinuria, hypertension, reduced creatinine clearance, and chronic renal failure during follow-up). Patients of both sexes under 15 years of age were included. Of 36 patients, 23 were males (65.3%; 95% CI: 45.8 to 80.9), with an average age of 2.5 ± 1.4 years. Twenty-one patients required dialysis (58%; 95% CI: 40.8 to 75.8), and 13 (36.1%; 95% CI: 19.0 to 53.1) did not recover renal function. In the bivariate model, the only significant risk factor was dehydration (defined as weight loss >5%) (OR: 5.3; 95% CI: 1.4 to 12.3; P=.0220). In the multivariate analysis (Cox multiple regression), only dehydration was marginally significant (HR: 95.823; 95% CI: 93.175 to 109.948; P=.085). Our data suggest that dehydration prior to admission may be a factor that increases the risk of incomplete recovery of renal function during long-term follow-up in children who develop HUS D+. Consequently, in patients with diarrhoea who are at risk of HUS, dehydration should be strongly avoided during outpatient care to preserve long-term renal function. These results must be confirmed by larger prospective studies.

  14. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbourhood searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445

  15. Algorithm for solving of two-level hierarchical minimax program control problem of final state the regional socio-economic system in the presence of risks

    NASA Astrophysics Data System (ADS)

    Shorikov, A. F.

    2017-10-01

    In this paper we study the problem of optimizing the guaranteed result for program control of the final state of a regional socio-economic system in the presence of risks. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final state of this process with incomplete information. To solve this problem we construct a general algorithm in the form of a recurrent procedure that solves a sequence of linear programming and finite optimization problems.

  16. Safety assessment for In-service Pressure Bending Pipe Containing Incomplete Penetration Defects

    NASA Astrophysics Data System (ADS)

    Wang, M.; Tang, P.; Xia, J. F.; Ling, Z. W.; Cai, G. Y.

    2017-12-01

    Incomplete penetration defects are common defects in the welded joints of pressure pipes. However, the safety classification of pressure pipes containing incomplete penetration defects according to current periodical inspection regulations is rather conservative. To reduce unnecessary repairs of incomplete penetration defects, a scientific and applicable safety assessment method for pressure pipes is needed. In this paper, a stress analysis model of the pipe system was established for an in-service pressure bending pipe containing incomplete penetration defects. A local finite element model was set up to analyze the stress distribution at the defect location and to linearize the stresses. Then, the applicability of two assessment methods, the simplified assessment and the U-factor assessment method, to incomplete penetration defects located in pressure bending pipes was analyzed. The results can provide technical support for the safety assessment of complex pipelines in the future.

  17. Incompletely characterized incidental renal masses: emerging data support conservative management.

    PubMed

    Silverman, Stuart G; Israel, Gary M; Trinh, Quoc-Dien

    2015-04-01

    With imaging, most incidental renal masses can be diagnosed promptly and with confidence as being either benign or malignant. For those that cannot, management recommendations can be devised on the basis of a thorough evaluation of imaging features. However, most renal masses are either too small to characterize completely or are detected initially in imaging examinations that are not designed for full evaluation of them. These masses constitute a group of masses that are considered incompletely characterized. On the basis of current published guidelines, many masses warrant additional imaging. However, while the diagnosis of renal cancer at a curable stage remains the first priority, there is the additional need to reduce unnecessary healthcare costs and radiation exposure. As such, emerging data now support foregoing additional imaging for many incompletely characterized renal masses. These data include the low risk of progression to metastases or death for small renal masses that have undergone active surveillance (including biopsy-proven cancers) and a better understanding of how specific imaging features can be used to diagnose their origins. These developments support (a) avoidance of imaging entirely for those incompletely characterized renal masses that are highly likely to be benign cysts and (b) delay of further imaging of small solid masses in selected patients. Although more evidence-based data are needed and comprehensive management algorithms have yet to be defined, these recommendations are medically appropriate and practical, while limiting the imaging of many incompletely characterized incidental renal masses.

  18. Handling Different Spatial Resolutions in Image Fusion by Multivariate Curve Resolution-Alternating Least Squares for Incomplete Image Multisets.

    PubMed

    Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna

    2018-06-05

    Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.

  19. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables the processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. These heuristics include loop perforation, data caching, and incomplete graph coloring and synchronization, and we evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science, and their subsequent adoption to scale similar graph algorithms.
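
    Loop perforation, the first heuristic named above, simply skips a fraction of loop iterations. A minimal sketch applied to PageRank power iteration is given below; it is a generic illustration of the heuristic, not the authors' implementation, and the small adjacency matrix is illustrative.

```python
# PageRank power iteration with loop perforation: every k-th sweep is skipped.
import numpy as np

def pagerank_perforated(adj, d=0.85, n_iter=50, perforate_every=3):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    out_deg[out_deg == 0] = 1.0                       # avoid division by zero for sink nodes
    P = adj / out_deg[:, None]                        # row-normalized transition matrix
    r = np.full(n, 1.0 / n)
    for it in range(n_iter):
        if perforate_every and (it + 1) % perforate_every == 0:
            continue                                  # perforated iteration: skip the update
        r = (1.0 - d) / n + d * (P.T @ r)
    return r / r.sum()

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank_perforated(adj))
```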

  20. A sonification algorithm for developing the off-roads models for driving simulators

    NASA Astrophysics Data System (ADS)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, which is suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, the unstable contact between the tire and the road, and the unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  1. Identification of Hospitalizations for Intentional Self-Harm when E-Codes are Incompletely Recorded

    PubMed Central

    Patrick, Amanda R.; Miller, Matthew; Barber, Catherine W.; Wang, Philip S.; Canning, Claire F.; Schneeweiss, Sebastian

    2010-01-01

    Context Suicidal behavior has gained attention as an adverse outcome of prescription drug use. Hospitalizations for intentional self-harm, including suicide, can be identified in administrative claims databases using external cause of injury codes (E-codes). However, rates of E-code completeness in US government and commercial claims databases are low due to issues with hospital billing software. Objective To develop an algorithm to identify intentional self-harm hospitalizations using recorded injury and psychiatric diagnosis codes in the absence of E-code reporting. Methods We sampled hospitalizations with an injury diagnosis (ICD-9 800–995) from 2 databases with high rates of E-coding completeness: 1999–2001 British Columbia, Canada data and the 2004 U.S. Nationwide Inpatient Sample. Our gold standard for intentional self-harm was a diagnosis of E950-E958. We constructed algorithms to identify these hospitalizations using information on type of injury and presence of specific psychiatric diagnoses. Results The algorithm that identified intentional self-harm hospitalizations with high sensitivity and specificity was a diagnosis of poisoning; toxic effects; open wound to elbow, wrist, or forearm; or asphyxiation; plus a diagnosis of depression, mania, personality disorder, psychotic disorder, or adjustment reaction. This had a sensitivity of 63%, specificity of 99% and positive predictive value (PPV) of 86% in the Canadian database. Values in the US data were 74%, 98%, and 73%. PPV was highest (80%) in patients under 25 and lowest in those over 65 (44%). Conclusions The proposed algorithm may be useful for researchers attempting to study intentional self-harm in claims databases with incomplete E-code reporting, especially among younger populations. PMID:20922709

  2. Distributed topology control algorithm for multihop wireless networks

    NASA Technical Reports Server (NTRS)

    Borbash, S. A.; Jennings, E. H.

    2002-01-01

    We present a network initialization algorithm for wireless networks with distributed intelligence. Each node (agent) has only local, incomplete knowledge, and it must make local decisions to meet a predefined global objective. Our objective is to use power control to establish a topology based on the relative neighborhood graph, which has good overall performance in terms of power usage, low interference, and reliability.
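
    For reference, the relative neighborhood graph links two nodes unless some third node is closer to both of them than they are to each other. A minimal sketch with Euclidean distances and illustrative coordinates:

```python
# Relative neighborhood graph (RNG) of a set of node positions.
import numpy as np

def relative_neighborhood_graph(points):
    pts = np.asarray(points, float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            # the edge (u, v) is blocked if some w is closer to both endpoints
            blocked = any(max(dist[u, w], dist[v, w]) < dist[u, v]
                          for w in range(n) if w not in (u, v))
            if not blocked:
                edges.append((u, v))
    return edges

nodes = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9), (2.0, 0.1)]
print(relative_neighborhood_graph(nodes))
```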

  3. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Alternating least squares with penalty functions.

    PubMed

    Gemperline, Paul J; Cash, Eric

    2003-08-15

    A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.

  4. Multiple imputation by chained equations for systematically and sporadically missing multilevel data.

    PubMed

    Resche-Rigon, Matthieu; White, Ian R

    2018-06-01

    In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
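
    For intuition, a bare-bones single-level MICE cycle is sketched below: each incomplete column is regressed on the others using its observed rows, and its missing rows are refilled, repeatedly. The multilevel machinery discussed above (cluster means, heteroscedastic two-stage imputation, proper posterior draws) is deliberately omitted, and all names are illustrative.

```python
# Simplified chained-equations imputation for a numeric data matrix.
import numpy as np

def mice_numeric(X, n_cycles=10, rng=None):
    rng = np.random.default_rng(rng)
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])           # crude initial fill with column means
    for _ in range(n_cycles):
        for j in np.where(miss.any(axis=0))[0]:               # cycle over incomplete columns
            obs, mis = ~miss[:, j], miss[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(obs.sum()), others[obs]])
            beta, *_ = np.linalg.lstsq(A, X[obs, j], rcond=None)
            pred = np.column_stack([np.ones(mis.sum()), others[mis]]) @ beta
            resid_sd = np.std(X[obs, j] - A @ beta)
            X[mis, j] = pred + rng.normal(0.0, resid_sd, mis.sum())  # add noise (not a full posterior draw)
    return X

data = [[1.0, 2.0, np.nan], [2.0, np.nan, 6.1], [3.0, 6.2, 9.0], [4.0, 8.1, np.nan]]
print(np.round(mice_numeric(data), 2))
```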

  5. A resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Guan, Huifeng; Anastasio, Mark A.

    2017-03-01

    It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.

  6. Maximum likelihood positioning algorithm for high-resolution PET scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick

    2016-06-15

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
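
    The core of the ML positioning step can be sketched as an argmax of a per-crystal log-likelihood over the observed channels, with missing channels simply skipped. A Gaussian channel model and all names below are assumptions made purely for illustration; the authors estimate the PDFs from measured data.

```python
# Maximum-likelihood crystal assignment from a (possibly incomplete) light distribution.
import numpy as np

def ml_position(measured, pdf_mean, pdf_sigma):
    """measured: (n_channels,) with np.nan for dead/missing channels;
    pdf_mean, pdf_sigma: (n_crystals, n_channels) expected light distributions."""
    valid = ~np.isnan(measured)                              # skip missing channel information
    resid = (measured[valid] - pdf_mean[:, valid]) / pdf_sigma[:, valid]
    loglik = -0.5 * np.sum(resid ** 2, axis=1) - np.sum(np.log(pdf_sigma[:, valid]), axis=1)
    return int(np.argmax(loglik)), loglik

# Two hypothetical crystals, four readout channels, one channel missing
mean = np.array([[10.0, 5.0, 1.0, 0.5],
                 [1.0, 5.0, 10.0, 0.5]])
sigma = np.full_like(mean, 1.5)
hit, ll = ml_position(np.array([9.0, 6.0, np.nan, 0.4]), mean, sigma)
print(hit)   # expected: crystal 0
```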

  7. A novel multisensor traffic state assessment system based on incomplete data.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang

    2014-01-01

    A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. Then a new method based on historical data is proposed to fuse and recover the incomplete data. Exploiting the spatially complementary characteristics of the data from the probe vehicle detectors and the fixed detectors, a space-matching fusion model is presented to estimate the mean travel speed of the road. Finally, the traffic flow data, including flow, speed, and occupancy rate, detected between the Beijing Deshengmen bridge and the Drum Tower bridge, are fused to assess the traffic state of the road using a fusion decision model based on rough sets and the cloud model. The accuracy of the experimental results can reach more than 98%, and the results are in accordance with the actual road traffic state. The system is effective in assessing the traffic state and is suitable for urban intelligent transportation systems.

  8. A Novel Multisensor Traffic State Assessment System Based on Incomplete Data

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang

    2014-01-01

    A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. Then a new method based on historical data is proposed to fuse and recover the incomplete data. Exploiting the spatially complementary characteristics of the data from the probe vehicle detectors and the fixed detectors, a space-matching fusion model is presented to estimate the mean travel speed of the road. Finally, the traffic flow data, including flow, speed, and occupancy rate, detected between the Beijing Deshengmen bridge and the Drum Tower bridge, are fused to assess the traffic state of the road using a fusion decision model based on rough sets and the cloud model. The accuracy of the experimental results can reach more than 98%, and the results are in accordance with the actual road traffic state. The system is effective in assessing the traffic state and is suitable for urban intelligent transportation systems. PMID:25162055

  9. Novel Approach to Classify Plants Based on Metabolite-Content Similarity.

    PubMed

    Liu, Kang; Abdullah, Azian Azamimi; Huang, Ming; Nishioka, Takaaki; Altaf-Ul-Amin, Md; Kanaya, Shigehiko

    2017-01-01

    Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward's method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.

  10. Novel Approach to Classify Plants Based on Metabolite-Content Similarity

    PubMed Central

    Abdullah, Azian Azamimi; Huang, Ming; Nishioka, Takaaki

    2017-01-01

    Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward's method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations. PMID:28164123

  11. An evaluation of computer assisted clinical classification algorithms.

    PubMed

    Chute, C G; Yang, Y; Buntrock, J

    1994-01-01

    The Mayo Clinic has a long tradition of indexing patient records in high resolution and volume. Several algorithms have been developed which promise to help human coders in the classification process. We evaluate variations on code browsers and free text indexing systems with respect to their speed and error rates in our production environment. The more sophisticated indexing systems save measurable time in the coding process, but suffer from incompleteness which requires a back-up system or human verification. Expert Network does the best job of rank ordering clinical text, potentially enabling the creation of thresholds for the pass through of computer coded data without human review.

  12. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
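
    The second stage is, generically, low-rank matrix completion. The sketch below uses iterative singular-value soft-thresholding on a partially observed squared-distance matrix (rank at most 5 for 3-D points); the paper's accelerated proximal gradient solver and the triangle-inequality guessing stage are not reproduced, and the threshold and test data are illustrative.

```python
# Low-rank completion of a partially observed matrix by singular-value soft-thresholding.
import numpy as np

def complete_matrix(M, observed, tau=5.0, n_iter=500):
    """M: matrix with arbitrary values where observed is False; observed: boolean mask."""
    X = np.where(observed, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt          # soft-threshold the singular values
        X[observed] = M[observed]                        # re-impose the known entries
    return X

# Illustrative use: hide about 40% of the entries of a squared-distance matrix of 3-D points
rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))
D2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
mask = rng.random(D2.shape) < 0.6
mask = mask & mask.T                  # keep the observation pattern symmetric
np.fill_diagonal(mask, True)
rec = complete_matrix(D2, mask)
print(np.max(np.abs(rec - D2)[~mask]))    # reconstruction error on the hidden entries
```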

  13. New Directions in the Digital Signal Processing of Image Data.

    DTIC Science & Technology

    1987-05-01

    [Truncated report-documentation-page extract; subject terms include object detection and identification, restoration of photon-noise-limited imagery, image reconstruction from incomplete information, restoration of blurred images in additive and multiplicative noise, and motion analysis with fast hierarchical algorithms.] As is well known, the solution to the matched filter problem under additive white noise conditions is the correlation receiver.

  14. Purpose-Driven Communities in Multiplex Networks: Thresholding User-Engaged Layer Aggregation

    DTIC Science & Technology

    2016-06-01

    [Truncated abstract extract.] Detecting communities in dark networks is a non-trivial yet useful task. Because terrorists work hard to hide their relationships and network, analysts have an incomplete picture of the network, and community detection can help them identify meaningful terrorist communities. This thesis introduces a general-purpose algorithm for community detection in multiplex dark networks. [Subject terms include layer aggregation, dark networks, conductance, cluster adequacy, modularity, the Louvain method, and shortest-path interdiction.]

  15. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.

  16. Real-time stylistic prediction for whole-body human motions.

    PubMed

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors.

  17. An online input force time history reconstruction algorithm using dynamic principal component analysis

    NASA Astrophysics Data System (ADS)

    Prawin, J.; Rama Mohan Rao, A.

    2018-01-01

    Knowledge of the dynamic loads acting on a structure is required for many practical engineering problems, such as structural strength analysis, health monitoring and fault diagnosis, and vibration isolation. In this paper, we present an online input force time history reconstruction algorithm using Dynamic Principal Component Analysis (DPCA) from acceleration time history response measurements over moving windows. We also present an optimal sensor placement algorithm to place a limited number of sensors at dynamically sensitive spatial locations. The major advantage of the proposed input force identification algorithm is that it does not require a finite element idealization of the structure, unlike earlier formulations, and it is therefore free from physical modelling errors. We consider three numerical examples to validate the accuracy of the proposed DPCA-based method. The effects of measurement noise, multiple force identification, different kinds of loading, incomplete measurements, and high noise levels are investigated in detail. Parametric studies have been carried out to arrive at an optimal window size and percentage of window overlap. The studies presented in this paper clearly establish the merits of the proposed algorithm for online load identification.

  18. Linear and nonlinear trending and prediction for AVHRR time series data

    NASA Technical Reports Server (NTRS)

    Smid, J.; Volf, P.; Slama, M.; Palus, M.

    1995-01-01

    The variability of the AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm, and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficient and temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high-frequency residuals of the analyzed data sets are best modeled as an autoregressive process of order 10. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. These algorithms can be particularly useful when calibration data are incomplete or sparse.
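
    Fitting an autoregressive model of order 10 to such residuals amounts to ordinary least squares on lagged values. A minimal NumPy sketch with an illustrative synthetic series:

```python
# Least-squares fit of an AR(10) model and a one-step-ahead prediction.
import numpy as np

def fit_ar(x, order=10):
    x = np.asarray(x, float)
    # design matrix of lagged values: column k holds x_{t-(k+1)} for t = order..len(x)-1
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
    return coeffs                       # [intercept, phi_1, ..., phi_order]

def predict_next(x, coeffs):
    order = len(coeffs) - 1
    return coeffs[0] + coeffs[1:] @ np.asarray(x[-order:], float)[::-1]

rng = np.random.default_rng(1)
series = np.sin(np.arange(400) * 0.2) + 0.1 * rng.normal(size=400)
phi = fit_ar(series, order=10)
print(predict_next(series[:-1], phi), series[-1])   # predicted vs. actual last value
```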

  19. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach can lead to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were employed through computer simulations as well as real data experiments on fan-beam and cone-beam geometries. The results show that the proposed algorithm can yield better images than the existing algorithms.

  20. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.

    2016-04-01

    Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.

  1. Simultaneous tensor decomposition and completion using factor priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  2. Power calculations for likelihood ratio tests for offspring genotype risks, maternal effects, and parent-of-origin (POO) effects in the presence of missing parental genotypes when unaffected siblings are available.

    PubMed

    Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R

    2007-01-01

    Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.

  3. OrthoMCL: Identification of Ortholog Groups for Eukaryotic Genomes

    PubMed Central

    Li, Li; Stoeckert, Christian J.; Roos, David S.

    2003-01-01

    The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of “recent” paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. PMID:12952885
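
    OrthoMCL's grouping step relies on the Markov Cluster (MCL) algorithm. The sketch below shows the core expansion/inflation loop of MCL on a small dense similarity matrix; the real pipeline runs the mcl program on normalized BLAST similarity scores, and the attractor-based cluster read-out used here is a simplification.

```python
import numpy as np

def mcl(similarity, inflation=2.0, n_iter=100, tol=1e-6):
    """Minimal dense Markov Cluster (MCL) iteration on a symmetric similarity matrix."""
    M = similarity.astype(float) + np.eye(similarity.shape[0])   # add self-loops
    M /= M.sum(axis=0, keepdims=True)                            # make columns stochastic
    for _ in range(n_iter):
        M_new = M @ M                                            # expansion
        M_new = M_new ** inflation                               # inflation
        M_new /= M_new.sum(axis=0, keepdims=True)
        if np.abs(M_new - M).max() < tol:
            M = M_new
            break
        M = M_new
    # simplified read-out: nodes attracted to the same row form one cluster
    clusters = {}
    for node in range(M.shape[0]):
        attractor = int(np.argmax(M[:, node]))
        clusters.setdefault(attractor, []).append(node)
    return list(clusters.values())

if __name__ == "__main__":
    # two blocks of mutually similar "sequences" with weak cross-block similarity
    S = np.array([
        [0.00, 0.90, 0.80, 0.05, 0.05, 0.02],
        [0.90, 0.00, 0.70, 0.03, 0.04, 0.05],
        [0.80, 0.70, 0.00, 0.02, 0.05, 0.03],
        [0.05, 0.03, 0.02, 0.00, 0.90, 0.85],
        [0.05, 0.04, 0.05, 0.90, 0.00, 0.80],
        [0.02, 0.05, 0.03, 0.85, 0.80, 0.00],
    ])
    print(mcl(S))   # the two blocks {0,1,2} and {3,4,5} should come out as two clusters
```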

  4. 77 FR 24436 - Approval and Promulgation of Air Quality Implementation Plans; Wisconsin; Milwaukee-Racine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-24

    .... How did EPA address missing data? V. Proposed Action VI. What is the effect of this action? VII.... ** Indicates incomplete data due to monitor shut down. IV. How did EPA address missing data? Appendix N of 40... in Milwaukee, where there are missing or incomplete data due to monitor shutdown or other factors...

  5. Effects of incomplete mixing on reactive transport in flows through heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wright, Elise E.; Richter, David H.; Bolster, Diogo

    2017-11-01

    The phenomenon of incomplete mixing reduces bulk effective reaction rates in reactive transport. Many existing models do not account for these effects, resulting in the overestimation of reaction rates in laboratory and field settings. To date, most studies on incomplete mixing have focused on diffusive systems; here, we extend these to explore the role that flow heterogeneity has on incomplete mixing. To do this, we examine reactive transport using a Lagrangian reactive particle tracking algorithm in two-dimensional idealized heterogeneous porous media. Contingent on the nondimensional Péclet and Damköhler numbers in the system, it was found that near well-mixed behavior could be observed at late times in the heterogeneous flow field simulations. We look at three common flow deformation metrics that describe the enhancement of mixing in the flow due to velocity gradients: the Okubo-Weiss parameter (θ), the largest eigenvalue of the Cauchy-Green strain tensor (λ_C), and the finite-time Lyapunov exponent (Λ). Strong mixing regions in the heterogeneous flow field identified by these metrics were found to correspond to regions with higher numbers of reactions, but the infrequency of these regions compared to the large numbers of reactions occurring elsewhere in the domain implies that these strong mixing regions are insufficient to explain the observed near well-mixed behavior. Since it was found that reactive transport in these heterogeneous flows could overcome the effects of incomplete mixing, we also search for a closure for the mean concentration. The conservative quantity ⟨u²⟩, where u = C_A − C_B, was found to predict the late-time scaling of the mean concentrations, i.e., ⟨C_i⟩ ~ ⟨u²⟩.

  6. Factors influencing U.S. canine heartworm (Dirofilaria immitis) prevalence.

    PubMed

    Wang, Dongmei; Bowman, Dwight D; Brown, Heidi E; Harrington, Laura C; Kaufman, Phillip E; McKay, Tanja; Nelson, Charles Thomas; Sharp, Julia L; Lund, Robert

    2014-06-06

    This paper examines the individual factors that influence prevalence rates of canine heartworm in the contiguous United States. A data set provided by the Companion Animal Parasite Council, which contains county-by-county results of over nine million heartworm tests conducted during 2011 and 2012, is analyzed for predictive structure. The goal is to identify the factors that are important in predicting high canine heartworm prevalence rates. The factors considered in this study are those envisioned to impact whether a dog is likely to have heartworm. The factors include climate conditions (annual temperature, precipitation, and relative humidity), socio-economic conditions (population density, household income), local topography (surface water and forestation coverage, elevation), and vector presence (several mosquito species). A baseline heartworm prevalence map is constructed using estimated proportions of positive tests in each county of the United States. A smoothing algorithm is employed to remove localized small-scale variation and highlight large-scale structures of the prevalence rates. Logistic regression is used to identify significant factors for predicting heartworm prevalence. All of the examined factors have power in predicting heartworm prevalence, including median household income, annual temperature, county elevation, and presence of the mosquitoes Aedes trivittatus, Aedes sierrensis and Culex quinquefasciatus. Interactions among factors also exist. The factors identified are significant in predicting heartworm prevalence. The factor list is likely incomplete due to data deficiencies. For example, coyotes and feral dogs are known reservoirs of heartworm infection. Unfortunately, no complete data of their populations were available. The regression model considered is currently being explored to forecast future values of heartworm prevalence.

  7. Factors influencing U.S. canine heartworm (Dirofilaria immitis) prevalence

    PubMed Central

    2014-01-01

    Background This paper examines the individual factors that influence prevalence rates of canine heartworm in the contiguous United States. A data set provided by the Companion Animal Parasite Council, which contains county-by-county results of over nine million heartworm tests conducted during 2011 and 2012, is analyzed for predictive structure. The goal is to identify the factors that are important in predicting high canine heartworm prevalence rates. Methods The factors considered in this study are those envisioned to impact whether a dog is likely to have heartworm. The factors include climate conditions (annual temperature, precipitation, and relative humidity), socio-economic conditions (population density, household income), local topography (surface water and forestation coverage, elevation), and vector presence (several mosquito species). A baseline heartworm prevalence map is constructed using estimated proportions of positive tests in each county of the United States. A smoothing algorithm is employed to remove localized small-scale variation and highlight large-scale structures of the prevalence rates. Logistic regression is used to identify significant factors for predicting heartworm prevalence. Results All of the examined factors have power in predicting heartworm prevalence, including median household income, annual temperature, county elevation, and presence of the mosquitoes Aedes trivittatus, Aedes sierrensis and Culex quinquefasciatus. Interactions among factors also exist. Conclusions The factors identified are significant in predicting heartworm prevalence. The factor list is likely incomplete due to data deficiencies. For example, coyotes and feral dogs are known reservoirs of heartworm infection. Unfortunately, no complete data of their populations were available. The regression model considered is currently being explored to forecast future values of heartworm prevalence. PMID:24906567

  8. Robust pulmonary lobe segmentation against incomplete fissures

    NASA Astrophysics Data System (ADS)

    Gu, Suicheng; Zheng, Qingfeng; Siegfried, Jill; Pu, Jiantao

    2012-03-01

    As the lobes are important anatomical landmarks of the human lung, accurate lobe segmentation may be useful for characterizing specific lung diseases (e.g., inflammatory, granulomatous, and neoplastic diseases). A number of investigations have shown that pulmonary fissures are often incomplete in image depiction, making the computerized identification of individual lobes a challenging task. Our purpose is to develop a fully automated algorithm for accurate identification of individual lobes regardless of the integrity of the pulmonary fissures. The underlying idea of the developed lobe segmentation scheme is to use piecewise planes to approximate the detected fissures. After a rotation and a global smoothing, a number of small planes are fitted using local fissure points. The local surfaces are finally combined for lobe segmentation using a quadratic B-spline weighting strategy to ensure that the segmentation is smooth. The performance of the developed scheme was assessed by comparison with a manually created reference standard on a dataset of 30 lung CT examinations. These examinations covered a number of lung diseases and were selected from a large chronic obstructive pulmonary disease (COPD) dataset. The results indicate that our lobe segmentation scheme is efficient and accurate even in the presence of incomplete fissures.

  9. Variation in spectral response of soybeans with respect to illumination, view, and canopy geometry

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Biehl, L. L.; Bauer, M. E.

    1984-01-01

    Comparisons of the spectral response for incomplete (well-defined row structure) and complete (overlapping row structure) canopies of soybeans indicated a greater dependence on Sun and view geometry for the incomplete canopies. Red and near-IR reflectance for the incomplete canopy decreased as solar zenith angle increased for a nadir view angle until the soil between the plant rows was completely shaded. Thereafter for increasing solar zenith angle, the red reflectance leveled off and the near-IR reflectance increased. A 'hot spot' effect was evident for the red and near-IR reflectance factors. The 'hot spot' effect was more pronounced for the red band based on relative reflectance value changes. The ratios of off-nadir to nadir acquired data reveal that off-nadir red band reflectance factors more closely approximated straightdown measurements for time periods away from solar noon. Normalized difference generally approximated straightdown measurements during the middle portion of the day.

  10. Multifractal Detrended Fluctuation Analysis of Regional Precipitation Sequences Based on the CEEMDAN-WPT

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Cheng, Chen; Fu, Qiang; Liu, Chunlei; Li, Mo; Faiz, Muhammad Abrar; Li, Tianxiao; Khan, Muhammad Imran; Cui, Song

    2018-03-01

    In this paper, the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm is introduced into the complexity analysis of precipitation systems to improve on traditional complexity measures, which suffer from the mode mixing of empirical mode decomposition (EMD) and the incomplete decomposition of ensemble empirical mode decomposition (EEMD). We combined CEEMDAN with the wavelet packet transform (WPT) and multifractal detrended fluctuation analysis (MF-DFA) to create the CEEMDAN-WPT-MFDFA, and used it to measure the complexity of the monthly precipitation sequences of 12 sub-regions in Harbin, Heilongjiang Province, China. The results show that there are significant differences in monthly precipitation complexity among the sub-regions of Harbin. The complexity of the northwest area of Harbin is the lowest and its predictability is the best. The complexity and predictability of the middle and midwest areas of Harbin are about average. The complexity of the southeast area of Harbin is higher than that of the northwest, middle, and midwest areas, and its predictability is worse. The complexity of Shuangcheng is the highest and its predictability is the worst of all the studied sub-regions. We used terrain and human activity as factors to analyze the causes of the local precipitation complexity. The results showed that the correlations between precipitation complexity and terrain are obvious, whereas the correlations between precipitation complexity and human-influence factors vary. The distribution of precipitation complexity in this area may be generated by the superposition of human activities and natural factors such as terrain, general atmospheric circulation, land and sea location, and ocean currents. To evaluate the stability of the algorithm, the CEEMDAN-WPT-MFDFA was compared with the equal-probability coarse-graining LZC algorithm, fuzzy entropy, and wavelet entropy. The results show that the CEEMDAN-WPT-MFDFA is more stable than the three comparison methods under the influence of white noise and colored noise, which demonstrates that the CEEMDAN-WPT-MFDFA is robust to noise.
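
    The multifractal detrended fluctuation analysis (MF-DFA) stage of this pipeline can be sketched in a few lines. The code below implements a basic MF-DFA (forward segments only, polynomial detrending) on a raw series; it is not the authors' CEEMDAN-WPT-MFDFA code, and the CEEMDAN decomposition and wavelet packet transform preprocessing are omitted.

```python
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Minimal multifractal detrended fluctuation analysis (MF-DFA).

    Returns h(q), the generalized Hurst exponents, estimated from the slope of
    log F_q(s) versus log s.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                     # integrated profile
    F = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        segments = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = np.empty(n_seg)
        for k, seg in enumerate(segments):                # local polynomial detrending
            coeffs = np.polyfit(t, seg, order)
            var[k] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        for i, q in enumerate(q_values):
            if abs(q) < 1e-12:                            # q -> 0 limit
                F[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    log_s = np.log(scales)
    return np.array([np.polyfit(log_s, np.log(F[i]), 1)[0] for i in range(len(q_values))])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(size=4096)                         # white noise: h(q) near 0.5
    scales = np.array([16, 32, 64, 128, 256])
    q_values = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
    print(dict(zip(q_values.tolist(), np.round(mfdfa(noise, scales, q_values), 2))))
```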

  11. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial-volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm that detects these mislabeled voxels using a knowledge-based approach and corrects the errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison with manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
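
    The classic EM scheme that this method builds on can be illustrated with a small intensity-only sketch in which per-voxel class priors are explicit, so that "adjusting local priors" simply means editing rows of the prior array. This is a generic Gaussian-mixture EM, not the authors' knowledge-based correction algorithm; the function name and the initialization are illustrative.

```python
import numpy as np

def em_segment(intensities, priors, n_iter=50):
    """EM for a K-class Gaussian intensity model with per-voxel class priors.

    intensities : (N,) voxel intensities
    priors      : (N, K) per-voxel prior class probabilities; adjusting rows of
                  this array locally shifts the posterior labelling.
    """
    N, K = priors.shape
    mu = np.quantile(intensities, np.linspace(0.2, 0.8, K))   # crude initialization
    sigma = np.full(K, intensities.std())
    post = np.empty((N, K))
    for _ in range(n_iter):
        # E-step: responsibilities proportional to prior * Gaussian likelihood
        for k in range(K):
            lik = np.exp(-0.5 * ((intensities - mu[k]) / sigma[k]) ** 2) / sigma[k]
            post[:, k] = priors[:, k] * lik
        post /= post.sum(axis=1, keepdims=True) + 1e-300
        # M-step: weighted means and standard deviations per class
        w = post.sum(axis=0) + 1e-12
        mu = (post * intensities[:, None]).sum(axis=0) / w
        sigma = np.sqrt((post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w) + 1e-6
    return post.argmax(axis=1), mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(60, 8, 5000), rng.normal(110, 10, 5000)])
    labels, mu, sigma = em_segment(x, np.full((x.size, 2), 0.5))
    print("class means:", np.round(mu, 1), " class std devs:", np.round(sigma, 1))
```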

  12. Empirical estimation of a distribution function with truncated and doubly interval-censored data and its application to AIDS studies.

    PubMed

    Sun, J

    1995-09-01

    In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time, or the date of enrollment in a study, is known only to belong to an interval. The survival time of interest is itself observed from a truncated distribution and is also known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm is proposed, generalizing Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
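
    The self-consistency idea can be sketched for the simpler case of interval-censored (non-truncated) data: each observation's probability mass is redistributed over the support points it is compatible with, and the weights are iterated to a fixed point. This is a schematic Turnbull-type iteration, not the generalized algorithm of the paper (truncation is not handled), and using the interval endpoints as candidate support points is a simplification.

```python
import numpy as np

def turnbull_self_consistency(intervals, support, n_iter=500, tol=1e-8):
    """Self-consistent NPMLE weights for interval-censored observations.

    intervals : list of (L, R) pairs; the event time is only known to lie in [L, R]
    support   : candidate support points for the probability mass
    """
    support = np.asarray(support, dtype=float)
    # alpha[i, j] = 1 if support point j is compatible with observation i
    alpha = np.array([[(L <= s <= R) for s in support] for (L, R) in intervals], dtype=float)
    p = np.full(len(support), 1.0 / len(support))     # start from uniform weights
    for _ in range(n_iter):
        denom = alpha @ p                             # total mass compatible with each observation
        p_new = (alpha / denom[:, None] * p).mean(axis=0)
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    return support, p

if __name__ == "__main__":
    obs = [(1, 3), (2, 5), (4, 6), (0, 2), (5, 7), (3, 3)]
    support = sorted({endpoint for pair in obs for endpoint in pair})
    s, p = turnbull_self_consistency(obs, support)
    print(dict(zip(s, np.round(p, 3))))
```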

  13. Monitoring Oilfield Operations and GHG Emissions Sources Using Object-based Image Analysis of High Resolution Spatial Imagery

    NASA Astrophysics Data System (ADS)

    Englander, J. G.; Brodrick, P. G.; Brandt, A. R.

    2015-12-01

    Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top-down') studies, which measure methane fluxes remotely, and inventory-based ('bottom-up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities using high-resolution spatial imagery. This method can help estimate venting and fugitive emissions sources in regions where reporting of oilfield equipment is incomplete or non-existent. This work utilizes high-resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km2 with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment, including tanks, pumpjacks, and holding ponds.

  14. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
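
    One standard way to refine a guess of Z = S^(-1/2) iteratively is the coupled Newton-Schulz iteration, sketched densely below. This is only meant to convey the idea of iteratively refining the inverse overlap factor; the paper's implementation works with thresholded sparse matrices, a divide-and-conquer or dynamically propagated initial guess, and shared-memory parallelism, none of which appear here.

```python
import numpy as np

def inverse_sqrt_newton_schulz(S, n_iter=30, tol=1e-10):
    """Coupled Newton-Schulz iteration: Y_k -> S^(1/2), Z_k -> S^(-1/2).

    Converges for symmetric positive definite S once S is scaled so that ||I - S/c|| < 1.
    """
    n = S.shape[0]
    c = np.linalg.norm(S, 2)          # spectral norm; the scaling guarantees convergence
    Y = S / c
    Z = np.eye(n)
    I = np.eye(n)
    for _ in range(n_iter):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T
        Z = T @ Z
        if np.linalg.norm(I - Z @ (S / c) @ Z) < tol:
            break
    return Z / np.sqrt(c)             # undo the scaling: (S/c)^(-1/2) / sqrt(c) = S^(-1/2)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 50))
    S = A @ A.T + 50 * np.eye(50)     # a well-conditioned SPD "overlap" matrix
    Z = inverse_sqrt_newton_schulz(S)
    print("||Z S Z - I|| =", np.linalg.norm(Z @ S @ Z - np.eye(50)))
```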

  15. Recursive Factorization of the Inverse Overlap Matrix in Linear Scaling Quantum Molecular Dynamics Simulations

    DOE PAGES

    Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...

    2016-06-06

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive iterative refinement of an initial guess Z of the inverse square root of the overlap matrix S. The initial guess of Z is obtained beforehand either by using an approximate divide-and-conquer technique or dynamically, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under incomplete approximate iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.

  16. Segmentation of neuronal structures using SARSA (λ)-based boundary amendment with reinforced gradient-descent curve shape fitting.

    PubMed

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation, so conventional approaches to identifying neuron structures from EM images are not successful. We therefore present a multi-scale fused structure-boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries; and we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron-structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement-learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. The test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and with recently developed methods also indicates that our method performs better in the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases.
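
    The multi-scale boundary-detection stage can be sketched with standard tools: build a Gaussian pyramid, take the Laplacian-of-Gaussian response at each level, and fuse the upsampled responses. The sketch below (using scipy.ndimage) covers only that stage; the SARSA (λ)-based gap amendment is not reproduced, and the fusion here is a plain average rather than the authors' fusion algorithm.

```python
import numpy as np
from scipy import ndimage

def multiscale_log_boundaries(image, n_levels=3, sigma=1.5):
    """Fuse Laplacian-of-Gaussian responses computed on a Gaussian pyramid.

    Returns a boundary-strength map with the same size as the input image.
    """
    image = image.astype(float)
    fused = np.zeros_like(image)
    level = image
    for _ in range(n_levels):
        response = np.abs(ndimage.gaussian_laplace(level, sigma=sigma))
        # upsample the response back to the original resolution and accumulate
        zoom = (image.shape[0] / level.shape[0], image.shape[1] / level.shape[1])
        fused += ndimage.zoom(response, zoom, order=1)
        # next pyramid level: smooth and downsample by a factor of two
        level = ndimage.gaussian_filter(level, sigma=1.0)[::2, ::2]
    return fused / n_levels

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[32:96, 32:96] = 1.0                    # a toy "structure" with sharp boundaries
    img += np.random.default_rng(4).normal(scale=0.05, size=img.shape)
    boundaries = multiscale_log_boundaries(img)
    print("edge response exceeds background response:",
          boundaries[32, 32:96].max() > boundaries[5, 5])
```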

  17. Segmentation of Neuronal Structures Using SARSA (λ)-Based Boundary Amendment with Reinforced Gradient-Descent Curve Shape Fitting

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation, so conventional approaches to identifying neuron structures from EM images are not successful. We therefore present a multi-scale fused structure-boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries; and we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron-structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement-learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. The test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and with recently developed methods also indicates that our method performs better in the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases. PMID:24625699

  18. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, James C., E-mail: jross@bwh.harvard.edu; Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215; Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed algorithm is effective for lung lobe segmentation in the absence of auxiliary structures such as vessels and airways. The most challenging cases are those with mostly incomplete, absent, or near-absent fissures and cases with poorly revealed fissures due to high image noise. However, the authors observe good performance even in the majority of these cases.

  19. Nature Disaster Risk Evaluation with a Group Decision Making Method Based on Incomplete Hesitant Fuzzy Linguistic Preference Relations.

    PubMed

    Tang, Ming; Liao, Huchang; Li, Zongmin; Xu, Zeshui

    2018-04-13

    Because the natural disaster system is a very comprehensive and large system, disaster reduction schemes must rely on risk analysis. Experts' knowledge and experience play a critical role in disaster risk assessment. The hesitant fuzzy linguistic preference relation is an effective tool to express experts' preference information when comparing pairwise alternatives. Owing to a lack of knowledge or a heavy workload, information may be missing from the hesitant fuzzy linguistic preference relation, resulting in an incomplete hesitant fuzzy linguistic preference relation. In this paper, we first discuss some properties of the additive consistent hesitant fuzzy linguistic preference relation. Next, the incomplete hesitant fuzzy linguistic preference relation, the normalized hesitant fuzzy linguistic preference relation, and the acceptable hesitant fuzzy linguistic preference relation are defined. Afterwards, three procedures to estimate the missing information are proposed. The first deals with the situation in which there are only n-1 known judgments involving all the alternatives; the second is used to estimate the missing information of a hesitant fuzzy linguistic preference relation with more known judgments; and the third deals with ignorance situations in which there is at least one alternative with totally missing information. Furthermore, an algorithm for group decision making with incomplete hesitant fuzzy linguistic preference relations is given. Finally, we illustrate our model with a case study on flood disaster risk evaluation. A comparative analysis is presented to demonstrate the advantage of our method.
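
    The estimation idea can be illustrated on an ordinary (non-hesitant, non-linguistic) fuzzy preference relation, where additive consistency means r_ij ≈ r_ik + r_kj - 0.5. The sketch below fills missing judgments by averaging that estimate over all usable intermediate alternatives k; it is a simplification in the spirit of the first two procedures described above and does not handle hesitant fuzzy linguistic terms themselves.

```python
import numpy as np

def estimate_missing_preferences(R):
    """Fill missing entries (np.nan) of a fuzzy preference relation using
    additive consistency: r_ij ~= r_ik + r_kj - 0.5, averaged over usable k.
    """
    R = np.array(R, dtype=float)
    n = R.shape[0]
    np.fill_diagonal(R, 0.5)                       # self-comparison is indifference
    filled = R.copy()
    for i in range(n):
        for j in range(n):
            if i != j and np.isnan(R[i, j]):
                estimates = [R[i, k] + R[k, j] - 0.5
                             for k in range(n)
                             if k not in (i, j)
                             and not np.isnan(R[i, k]) and not np.isnan(R[k, j])]
                if estimates:
                    filled[i, j] = np.clip(np.mean(estimates), 0.0, 1.0)
    return filled

if __name__ == "__main__":
    nan = np.nan
    # an expert compared alternatives 0-3 but skipped the (0,3) and (3,0) judgments
    R = [[0.5, 0.7, 0.6, nan],
         [0.3, 0.5, 0.4, 0.6],
         [0.4, 0.6, 0.5, 0.7],
         [nan, 0.4, 0.3, 0.5]]
    print(np.round(estimate_missing_preferences(R), 2))
```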

  20. Incomplete colonoscopy: Maximizing completion rates of gastroenterologists

    PubMed Central

    Brahmania, Mayur; Park, Jei; Svarta, Sigrid; Tong, Jessica; Kwok, Ricky; Enns, Robert

    2012-01-01

    BACKGROUND Cecal intubation is one of the goals of a quality colonoscopy; however, many factors increasing the risk of incomplete colonoscopy have been implicated. The implications of missed pathology and the demand on health care resources for return colonoscopies pose a conundrum to many physicians. The optimal course of action after incomplete colonoscopy is unclear. OBJECTIVES: To assess endoscopic completion rates of previously incomplete colonoscopies, the methods used to complete them and the factors that led to the previous incomplete procedure. METHODS: All patients who previously underwent incomplete colonoscopy (2005 to 2010) and were referred to St Paul’s Hospital (Vancouver, British Columbia) were evaluated. Colonoscopies were re-attempted by a single endoscopist. Patient charts were reviewed retrospectively. RESULTS: A total of 90 patients (29 males) with a mean (± SD) age of 58±13.2 years were included in the analysis. Thirty patients (33%) had their initial colonoscopy performed by a gastroenterologist. Indications for initial colonoscopy included surveillance or screening (23%), abdominal pain (15%), gastrointestinal bleeding (29%), change in bowel habits or constitutional symptoms (18%), anemia (7%) and chronic diarrhea (8%). Reasons for incomplete colonoscopy included poor preparation (11%), pain or inadequate sedation (16%), tortuous colon (30%), diverticular disease (6%), obstructing mass (6%) and stricturing disease (10%). Reasons for incomplete procedures in the remaining 21% of patients were not reported by the referring physician. Eighty-seven (97%) colonoscopies were subsequently completed in a single attempt at the institution. Seventy-six (84%) colonoscopies were performed using routine manoeuvres, patient positioning and a variable-stiffness colonoscope (either standard or pediatric). A standard 160 or 180 series Olympus gastroscope (Olympus, Japan) was used in five patients (6%) to navigate through sigmoid diverticular disease; a pediatric colonoscope was used in six patients (7%) for similar reasons. Repeat colonoscopy on the remaining three patients (3%) failed: all three required surgery for strictures (two had obstructing malignant masses and one had a severe benign obstructing sigmoid diverticular stricture). CONCLUSION: Most patients with previous incomplete colonoscopy can undergo a successful repeat colonoscopy at a tertiary care centre with instruments that are readily available to most gastroenterologists. Other modalities for evaluation of the colon should be deferred until a second attempt is made at an expert centre. PMID:22993727

  1. Factors contributing to nursing task incompletion as perceived by nurses working in Kuwait general hospitals.

    PubMed

    Al-Kandari, Fatimah; Thomas, Deepa

    2009-12-01

    Unfinished care has a strong relationship with the quality of nursing care. Most issues related to task incompletion arise from staffing and workload. This study was conducted to assess the workload of nurses, the nursing activities (tasks) nurses commonly performed on medical and surgical wards, the elements of nursing care activities left incomplete by nurses during a shift, the factors contributing to task incompletion, and the relationship between staffing, demographic variables, and task incompletion. Exploratory survey using a self-administered questionnaire developed from the IHOC survey, USA. All full-time registered nurses working on the general medical and surgical wards of five government general hospitals in Kuwait. Research assistants distributed and collected the questionnaires. Four working days were given to participants to complete and return the questionnaires. A total of 820 questionnaires were distributed and 95% were returned. Descriptive and inferential analysis using SPSS-11. The five most frequently performed nursing activities were: administration of medications, assessing patient condition, preparing/updating nursing care plans, close patient monitoring, and client health teaching. The most common nursing activities nurses were unable to complete were: comfort talk with patient and family, adequate documentation of nursing care, oral hygiene, routine catheter care, and starting or changing IV fluid on time. Tasks were more complete when the nurse-patient load was less than 5. Nurses' age and educational background influenced task completion, while nurses' gender had no influence on it. Increased patient loads, resulting in an increased frequency of nursing tasks and non-nursing tasks, were positively correlated with incompletion of nursing activities during the shift. Emphasis should be given to maintaining the optimum nurse-patient load and decreasing the non-nursing workload of nurses to enhance the quality of nursing care.

  2. Holographic interferometry of transparent media with reflection from imbedded test objects

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1981-01-01

    In applying holographic interferometry, opaque objects blocking a portion of the optical beam used to form the interferogram give rise to incomplete data for standard computer tomography algorithms. An experimental technique for circumventing the problem of data blocked by opaque objects is presented. The missing data are completed by forming an interferogram using light backscattered from the opaque object, which is assumed to be diffuse. The problem of fringe localization is considered.

  3. FLIPPER: Validation for Remote Ocean Imaging

    NASA Technical Reports Server (NTRS)

    2006-01-01

    One of the determining factors in the planet's ability to support life is the same factor that makes the Blue Planet blue: water. Therefore, NASA researchers have a focused interest in understanding Earth's oceans and their ability to continue sustaining life. A critical objective in this study is to understand the global processes that control the changes of carbon and associated living elements in the oceans. Since oceans are so large, one of the most widely used methods of this research is remote sensing, using satellites to observe changes in the ocean color that may be indicative of changes occurring at the surface. Major changes in carbon are due to photosynthesis conducted by phytoplankton, showing, among other things, which areas are sustaining life. Although valuable for large-scale pictures of an ocean, remote sensing really only provides a surface, and therefore incomplete, depiction of that ocean's sustainability. True and complete testing of the water requires local testing in conjunction with the satellite images in order to generate the necessary algorithm parameters to calculate ocean health. For this reason, NASA has spearheaded research to provide onsite validation for its satellite imagery surveys.

  4. A Policy Representation Using Weighted Multiple Normal Distribution

    NASA Astrophysics Data System (ADS)

    Kimura, Hajime; Aramaki, Takeshi; Kobayashi, Shigenobu

    In this paper, we tackle a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can learn to stand up through trial and error. On this robot, incomplete perception problems are caused by noisy sensors and inexpensive position-control motor systems. This incomplete perception also causes the optimum actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method and propose a new hierarchical policy representation scheme that consists of discrete action selection at the top level and continuous action selection at the lower level of the hierarchy. The proposed hierarchical scheme accelerates learning in continuous action spaces, and it can track the optimum actions as they vary over the course of learning on our robotics problem. This paper compares and discusses several learning algorithms through simulations, and demonstrates the proposed method in an application to the real robot.

  5. Risk factors for incomplete immunization in children with HIV infection.

    PubMed

    Bhattacharya, Sangeeta Das; Bhattacharyya, Subhasish; Chatterjee, Devlina; Niyogi, Swapan Kumar; Chauhan, Nageshwar; Sudar, A

    2014-09-01

    To document the immunization rates, the factors associated with incomplete immunization, and missed opportunities for immunization in children affected by HIV presenting for routine outpatient follow-up. A cross-sectional study of the immunization status of children affected by HIV presenting for routine outpatient care was conducted. Two hundred and six HIV-affected children were enrolled. The median age of children in this cohort was 6 y. One hundred ninety-seven of the 206 children were HIV-infected; nine were HIV-exposed but indeterminate. Fifty (25%) children had incomplete immunizations per the Universal Immunization Program (UIP) of India. One hundred percent of children had received OPV. Ninety-three percent of children got their UIP vaccines from a government clinic. Children with incomplete immunization were older, with a median age of 8 y compared to 5 y (p = 0.003). Each year of maternal education increased the odds of having a child with complete UIP immunizations by a factor of 1.18 (p = 0.008); children of mothers with 6 y of education were seven times more likely than children of mothers with no education to have complete UIP vaccine status. The average number of visits to the clinic by an individual child in a year was 4. This represents 200 missed opportunities for immunization. HIV-infected children are at risk for incomplete immunization coverage even though they regularly access medical care. Including routine immunizations, particularly catch-up immunizations, in programs for HIV-infected children may be an effective way of protecting these children from vaccine-preventable disease.

  6. Predictors of incomplete immunization coverage among one to five years old children in Togo.

    PubMed

    Landoh, Dadja Essoya; Ouro-Kavalah, Farihétou; Yaya, Issifou; Kahn, Anna-Lea; Wasswa, Peter; Lacle, Anani; Nassoury, Danladi Ibrahim; Gitta, Sheba Nakacubo; Soura, Abdramane Bassiahi

    2016-09-13

    Incomplete vaccination coverage among children is a major public health concern because it continues to sustain a high prevalence of vaccine-preventable diseases in some countries. In Togo, very few data on the factors associated with incomplete vaccination coverage among children have been published. We determined the prevalence of incomplete immunization coverage in children aged one to five years in Togo and its associated factors. This was a cross-sectional study using secondary data from the Multiple Indicator Cluster Survey (MICS4) conducted among children aged 1 to 5 years in Togo over a period of two months, from September to November 2010. In Togo's MICS4 survey, 2067 children met the inclusion criteria for our study. Female children accounted for 50.9% (1051/2067) of the sample and 1372 (66.4%) lived in rural areas. The majority of children (92.2%; 1905/2067) lived with both parents, and 30% of the household heads interviewed had no schooling (620/2067). At the time of the survey, 36.2% (750/2067) of the children had not received all vaccines recommended by the Expanded Program on Immunization (EPI). In multivariate analysis, the factors associated with incomplete immunization at 1 year were: health region of residence (Maritime: aOR = 0.650, p = 0.043; Savanes: aOR = 0.324, p < 0.001), a non-schooled mother (aOR = 1.725, p = 0.002), standard of living (poor: aOR = 1.668, p = 0.013; medium: aOR = 1.393, p = 0.090), and the following characteristics of the household head: sex (aOR = 1.465, p = 0.034), marital status (aOR = 1.591, p = 0.032), and education level (non-educated: aOR = 1.435, p = 0.027). Incomplete immunization coverage among children in Togo remains high. It is necessary to strengthen health promotion among the population in order to improve the use of immunization services, which are essential to reduce morbidity and mortality among children under five years old.

  7. Predictive searching algorithm for Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei

    2017-12-01

    By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing a high-resolution image with a large field of view. Despite its validity, the long acquisition time limits its real-time application. In this paper, we propose an incomplete sampling scheme, termed the predictive searching algorithm, to shorten the acquisition and recovery time. Informative sub-regions of the sample's spectrum are searched, and the corresponding images for the most informative directions are captured for spectrum expansion. The effectiveness of the approach is validated by both simulated and experimental results: the data requirement is reduced by ~64% to ~90% without sacrificing image reconstruction quality compared with the conventional FP method.

  8. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. Convergence with AMG required CPU times up to 140 times shorter than with PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded speedups of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
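
    A small experiment in the same spirit can be set up with off-the-shelf tools: an incomplete-factorization preconditioned conjugate gradient solve (standing in for PCG2's modified incomplete Cholesky) against an algebraic multigrid solve. The sketch below assumes the third-party pyamg package is installed and uses a generic 2-D Poisson-type matrix rather than a MODFLOW ground water model, so iteration counts are only indicative.

```python
import numpy as np
import scipy.sparse.linalg as spla
import pyamg                                   # third-party AMG package (assumed installed)

# A generic 2-D Poisson-type matrix stands in for a ground water flow system.
A = pyamg.gallery.poisson((200, 200), format="csr")
b = np.ones(A.shape[0])

# --- incomplete-factorization preconditioned CG (in the spirit of PCG2) ---
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
iters = [0]
def count_iteration(_):
    iters[0] += 1
x_cg, info = spla.cg(A, b, M=M, callback=count_iteration)

# --- algebraic multigrid (classical Ruge-Stuben hierarchy) ---
ml = pyamg.ruge_stuben_solver(A)
residuals = []
x_amg = ml.solve(b, tol=1e-8, residuals=residuals)

print("ILU-CG iterations:", iters[0], " residual:", np.linalg.norm(b - A @ x_cg))
print("AMG cycles:", len(residuals) - 1, " residual:", np.linalg.norm(b - A @ x_amg))
```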

  9. Comparing Chalk With Cheese-The EGG Contact Quotient Is Only a Limited Surrogate of the Closed Quotient.

    PubMed

    Herbst, Christian T; Schutte, Harm K; Bowling, Daniel L; Svec, Jan G

    2017-07-01

    The electroglottographic (EGG) contact quotient (CQegg), an estimate of the relative duration of vocal fold contact per vibratory cycle, is the most commonly used quantitative analysis parameter in EGG. The purpose of this study is to quantify the CQegg's relation to the closed quotient, a measure more directly related to glottal width changes during vocal fold vibration and the respective sound generation events. Thirteen singers (six females) phonated in four extreme phonation types while independently varying the degree of breathiness and vocal register. EGG recordings were complemented by simultaneous videokymographic (VKG) endoscopy, which allows for calculation of the VKG closed quotient (CQvkg). The CQegg was computed with five different algorithms, all used in previous research. All CQegg algorithms produced CQegg values that clearly differed from the respective CQvkg, with standard deviations around 20% of cycle duration. The difference between CQvkg and CQegg was generally greater for phonations with lower CQvkg. The largest differences were found for low-quality EGG signals with a signal-to-noise ratio below 10 dB, typically stemming from phonations with incomplete glottal closure. Disregarding those low-quality signals, we found the best match between CQegg and CQvkg for a CQegg algorithm operating on the first derivative of the EGG signal. These results show that the terms "closed quotient" and "contact quotient" should not be used interchangeably. They relate to different physiological phenomena. Phonations with incomplete glottal closure having an EGG signal-to-noise ratio below 10 dB are not suited for CQegg analysis. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Sensitivity of low-energy incomplete fusion to various entrance-channel parameters

    NASA Astrophysics Data System (ADS)

    Kumar, Harish; Tali, Suhail A.; Afzal Ansari, M.; Singh, D.; Ali, Rahbar; Kumar, Kamal; Sathik, N. P. M.; Ali, Asif; Parashari, Siddharth; Dubey, R.; Bala, Indu; Kumar, R.; Singh, R. P.; Muralithar, S.

    2018-03-01

    The dependence of incomplete fusion on various entrance-channel parameters has been disentangled from forward recoil range distribution measurements for the ¹²C + ¹⁷⁵Lu system at ≈ 88 MeV. The measurement gives a direct measure of full and/or partial linear momentum transfer from the projectile to the target nucleus. Comparison of the observed recoil ranges with theoretical ranges calculated using the code SRIM indicates the production of evaporation residues via complete and/or incomplete fusion processes. The present results show that the incomplete fusion process contributes significantly to the production of the αxn and 2αxn emission channels. The deduced incomplete fusion probability (F_ICF) is compared with values obtained for systems available in the literature. An interesting behavior of F_ICF with Z_P Z_T is observed in the reinvestigation of the incomplete fusion dependence on the Coulomb factor (Z_P Z_T), contrary to recent observations. The present results based on Z_P Z_T are found to be in good agreement with recent observations of our group. A larger F_ICF value is found for ¹²C-induced reactions than for ¹³C-induced reactions, although both have the same Z_P Z_T. A nonsystematic behavior of the incomplete fusion process with the target deformation parameter (β₂) is observed, which is further correlated with a new parameter (Z_P Z_T · β₂). The projectile α-Q-value is found to explain more clearly the discrepancy observed in the incomplete fusion dependence on the parameters Z_P Z_T and Z_P Z_T · β₂. It may be pointed out that no single entrance-channel parameter (mass asymmetry, Z_P Z_T, β₂, or projectile α-Q-value) may be able to completely explain the incomplete fusion process.

  11. Hepatitis B vaccination coverage and risk factors associated with incomplete vaccination of children born to hepatitis B surface antigen-positive mothers, Denmark, 2006 to 2010.

    PubMed

    Kunoee, Asja; Nielsen, Jens; Cowan, Susan

    2016-01-01

    In Denmark, universal screening of pregnant women for hepatitis B has been in place since November 2005, with the first two years as a trial period with enhanced surveillance. It is unknown what the change to universal screening without enhanced surveillance has meant for vaccination coverage among children born to hepatitis B surface antigen (HBsAg)-positive mothers and what risk factors exist for incomplete vaccination. This retrospective cohort study included 699 children of mothers positive for HBsAg. Information on vaccination and risk factors was collected from central registers. In total, 93% (651/699) of the children were vaccinated within 48 hours of birth, with considerable variation between birthplaces. Only 64% (306/475) of the children had received all four vaccinations through their general practitioner (GP) at the age of two years, and 10% (47/475) of the children had received no hepatitis B vaccinations at all. Enhanced surveillance was correlated positively with coverage of birth vaccination but not with coverage at the GP. No or few prenatal examinations were a risk factor for incomplete vaccination at the GP. Maternity wards and GPs are encouraged to revise their vaccination procedures and routines for pregnant women, mothers with chronic HBV infection and their children.

  12. DREAMING OF ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk

    Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.

  13. Reconstruction of financial networks for robust estimation of systemic risk

    NASA Astrophysics Data System (ADS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

  14. Exploratory Item Classification Via Spectral Graph Clustering

    PubMed Central

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2017-01-01

    Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
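
    The generic recipe outlined above can be sketched directly: compute pairwise item similarities from whatever responses are available, form the symmetric normalized graph Laplacian, embed each item with the leading eigenvectors, and run k-means on the embedding. The code below follows that textbook recipe (using NumPy and scikit-learn's KMeans); it is not the authors' algorithm, and the absolute-correlation similarity is just one plausible choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_item_clusters(responses, n_clusters, rng_seed=0):
    """Cluster items from a (respondents x items) response matrix with missing data (np.nan).

    Pairwise similarities are absolute correlations computed on respondents who
    answered both items, so incomplete responses are handled naturally.
    """
    n_items = responses.shape[1]
    W = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(i + 1, n_items):
            both = ~np.isnan(responses[:, i]) & ~np.isnan(responses[:, j])
            if both.sum() > 2:
                r = np.corrcoef(responses[both, i], responses[both, j])[0, 1]
                W[i, j] = W[j, i] = abs(r)
    # symmetric normalized graph Laplacian: L = I - D^(-1/2) W D^(-1/2)
    d = W.sum(axis=1) + 1e-12
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n_items) - D_inv_sqrt @ W @ D_inv_sqrt
    # embed each item with the eigenvectors of the smallest eigenvalues of L
    vals, vecs = np.linalg.eigh(L)
    embedding = vecs[:, :n_clusters]
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=rng_seed).fit_predict(embedding)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    trait = rng.normal(size=(500, 2))                         # two latent traits
    loadings = np.zeros((2, 8)); loadings[0, :4] = loadings[1, 4:] = 1.0
    X = trait @ loadings + rng.normal(scale=0.8, size=(500, 8))
    X[rng.random(X.shape) < 0.2] = np.nan                     # 20% missing responses
    print(spectral_item_clusters(X, n_clusters=2))            # items 0-3 vs items 4-7
```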

  15. Consequences of incomplete surface energy balance closure for CO2 fluxes from open-path CO2/H2O infrared gas analyzers

    Treesearch

    Heping Liu; James T. Randerson; Jamie Lindfors; William J. Massman; Thomas Foken

    2006-01-01

    We present an approach for assessing the impact of systematic biases in measured energy fluxes on CO2 flux estimates obtained from open-path eddy-covariance systems. In our analysis, we present equations to analyse the propagation of errors through the Webb, Pearman, and Leuning (WPL) algorithm [Quart. J. Roy. Meteorol. Soc. 106, 85–100, 1980] that is widely used to...

  16. Risk factor assessment of endoscopically removed malignant colorectal polyps.

    PubMed

    Netzer, P; Forster, C; Biral, R; Ruchti, C; Neuweiler, J; Stauffer, E; Schönegg, R; Maurer, C; Hüsler, J; Halter, F; Schmassmann, A

    1998-11-01

    Malignant colorectal polyps are defined as endoscopically removed polyps with cancerous tissue which has invaded the submucosa. Various histological criteria exist for managing these patients. To determine the significance of histological findings of patients with malignant polyps. Five pathologists reviewed the specimens of 85 patients initially diagnosed with malignant polyps. High risk malignant polyps were defined as having one of the following: incomplete polypectomy, a margin not clearly cancer-free, lymphatic or venous invasion, or grade III carcinoma. Adverse outcome was defined as residual cancer in a resection specimen and local or metastatic recurrence in the follow up period (mean 67 months). Malignant polyps were confirmed in 70 cases. In the 32 low risk malignant polyps, no adverse outcomes occurred; 16 (42%) of the 38 patients with high risk polyps had adverse outcomes (p<0.001). Independent adverse risk factors were incomplete polypectomy and a resected margin not clearly cancer-free; all other risk factors were only associated with adverse outcome when in combination. As no patients with low risk malignant polyps had adverse outcomes, polypectomy alone seems sufficient for these cases. In the high risk group, surgery is recommended when either of the two independent risk factors, incomplete polypectomy or a resection margin not clearly cancer-free, is present or if there is a combination of other risk factors. As lymphatic or venous invasion or grade III cancer did not have an adverse outcome when the sole risk factor, operations in such cases should be individually assessed on the basis of surgical risk.

  17. Bifactor Models Show a Superior Model Fit: Examination of the Factorial Validity of Parent-Reported and Self-Reported Symptoms of Attention-Deficit/Hyperactivity Disorders in Children and Adolescents.

    PubMed

    Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred

    2016-01-01

    Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated-factor, higher-order, and bifactor models and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated-factor models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance held in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children beyond and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.

  18. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard’s rho algorithm.
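
    Pollard's rho, the deterministic baseline in this comparison, fits in a few lines; the sketch below uses the standard Floyd cycle-finding variant on a small toy semiprime (real RSA moduli are, of course, far larger).

```python
from math import gcd
import random

def pollards_rho(n):
    """Return a non-trivial factor of composite n (Floyd cycle detection)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)          # tortoise: one step
            y = f(f(y))       # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:            # retry with a new polynomial if the walk failed
            return d

n = 10403            # toy "RSA modulus" 101 * 103
p = pollards_rho(n)
print(p, n // p)     # the two prime factors
```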

  19. Pushing Economies (and Students) outside the Factor Price Equalization Zone

    ERIC Educational Resources Information Center

    Oslington, Paul; Towers, Isaac

    2009-01-01

    Despite overwhelming empirical evidence of the failure of factor price equalization, most teaching of international trade theory (even at the graduate level) assumes that economies are incompletely specialized and that factor price equalization holds. The behavior of trading economies in the absence of factor price equalization is not well…

  20. Factors Related to Incomplete Treatment of Breast Cancer in Kumasi, Ghana

    PubMed Central

    Obrist, Mark; Osei-Bonsu, Ernest; Ahwah, Baffour; Watanabe-Galloway, Shinobu; Merajver, Sofia D.; Schmid, Kendra; Soliman, Amr S.

    2014-01-01

    Purpose The burden of cancer in Africa is an enlarging public health challenge. Breast cancer in Ghana is the second most common cancer among Ghanaian women and the proportion of diagnosed patients who complete prescribed treatment is estimated to be very limited, thereby potentially adding to lower survival and poor quality of life after diagnosis. The objective of this study was to identify the patient and system factors related to incomplete treatment of breast cancer among patients. Methods This study was conducted at the Komfo Anokye Teaching Hospital in Kumasi, Ghana. We interviewed 117 breast cancer patients and next of kin of breast cancer patients diagnosed from 2008 to 2010. Results Islamic religion, seeking treatment with traditional healers, and lack of awareness about national health insurance coverage of breast cancer treatment were predictors of incomplete treatment. Conclusions The results of this study support that Ghanaian women with diagnosed breast cancer have multiple addressable and modifiable patient factors that may deter them from completing the prescribed treatment. The results highlight the need for developing and testing specific interventions about the importance of completing treatment with a special focus on addressing religious, cultural, and system navigation barriers in developing countries. PMID:25282667

  1. Risk factors for massive postpartum bleeding in pregnancies in which incomplete placenta previa are located on the posterior uterine wall

    PubMed Central

    Lee, Hyun Jung; Lee, Young Jai; Ahn, Eun Hee; Kim, Hyeon Chul; Jung, Sang Hee; Chang, Sung Woon

    2017-01-01

    Objective To identify factors associated with massive postpartum bleeding in pregnancies complicated by incomplete placenta previa located on the posterior uterine wall. Methods A retrospective case-control study was performed. We identified 210 healthy singleton pregnancies with incomplete placenta previa located on the posterior uterine wall, who underwent elective or emergency cesarean section after 24 weeks of gestation between January 2006 and April 2016. The cases with intraoperative blood loss (≥2,000 mL) or transfusion of packed red blood cells (≥4) or uterine artery embolization or hysterectomy were defined as massive bleeding. Results Twenty-three women experienced postpartum profuse bleeding (11.0%). After multivariable analysis, 4 variables were associated with massive postpartum hemorrhage (PPH): experience of 2 or more prior uterine curettage (adjusted odds ratio [aOR], 4.47; 95% confidence interval [CI], 1.29 to 15.48; P=0.018), short cervical length before delivery (<2.0 cm) (aOR, 7.13; 95% CI, 1.01 to 50.25; P=0.049), fetal non-cephalic presentation (aOR, 12.48; 95% CI, 1.29 to 121.24; P=0.030), and uteroplacental hypervascularity (aOR, 6.23; 95% CI, 2.30 to 8.83; P=0.001). Conclusion This is the first study of cases with incomplete placenta previa located on the posterior uterine wall, which were complicated by massive PPH. Our findings might be helpful to guide obstetric management and provide useful information for prediction of massive PPH in pregnancies with incomplete placenta previa located on the posterior uterine wall. PMID:29184859

  2. Risk factors for massive postpartum bleeding in pregnancies in which incomplete placenta previa are located on the posterior uterine wall.

    PubMed

    Lee, Hyun Jung; Lee, Young Jai; Ahn, Eun Hee; Kim, Hyeon Chul; Jung, Sang Hee; Chang, Sung Woon; Lee, Ji Yeon

    2017-11-01

    To identify factors associated with massive postpartum bleeding in pregnancies complicated by incomplete placenta previa located on the posterior uterine wall. A retrospective case-control study was performed. We identified 210 healthy singleton pregnancies with incomplete placenta previa located on the posterior uterine wall, who underwent elective or emergency cesarean section after 24 weeks of gestation between January 2006 and April 2016. The cases with intraoperative blood loss (≥2,000 mL) or transfusion of packed red blood cells (≥4) or uterine artery embolization or hysterectomy were defined as massive bleeding. Twenty-three women experienced postpartum profuse bleeding (11.0%). After multivariable analysis, 4 variables were associated with massive postpartum hemorrhage (PPH): experience of 2 or more prior uterine curettage (adjusted odds ratio [aOR], 4.47; 95% confidence interval [CI], 1.29 to 15.48; P =0.018), short cervical length before delivery (<2.0 cm) (aOR, 7.13; 95% CI, 1.01 to 50.25; P =0.049), fetal non-cephalic presentation (aOR, 12.48; 95% CI, 1.29 to 121.24; P =0.030), and uteroplacental hypervascularity (aOR, 6.23; 95% CI, 2.30 to 8.83; P =0.001). This is the first study of cases with incomplete placenta previa located on the posterior uterine wall, which were complicated by massive PPH. Our findings might be helpful to guide obstetric management and provide useful information for prediction of massive PPH in pregnancies with incomplete placenta previa located on the posterior uterine wall.

  3. Two variants of minimum discarded fill ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai

    1991-01-01

    It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the “preferred” direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited for large problems arising from Navier-Stokes problems. Some interesting pictures of the orderings are presented using a visualization tool. 22 refs., 4 figs., 7 tabs.
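
    A simplified, dense-matrix illustration of the minimum discarded fill idea: at each elimination step, choose the pivot whose elimination creates the least fill relative to the current nonzero pattern. Production MDF/ILU codes work on sparse structures with drop tolerances and level control; the small test matrix here is arbitrary.

```python
import numpy as np

def mdf_order(A, tol=1e-12):
    """Greedy minimum-discarded-fill-style ordering on a small dense matrix.

    At each step, choose the remaining pivot whose elimination creates the
    least fill (new nonzeros, measured by their norm) outside the current
    nonzero pattern.  Pedagogical dense sketch, not a sparse implementation.
    """
    A = A.astype(float).copy()
    remaining = list(range(A.shape[0]))
    order = []
    while remaining:
        best, best_score = None, np.inf
        for k in remaining:
            others = [i for i in remaining if i != k]
            if not others or abs(A[k, k]) < tol:
                score = 0.0 if not others else np.inf
            else:
                # Fill produced in the Schur complement by eliminating k.
                update = np.outer(A[others, k], A[k, others]) / A[k, k]
                was_zero = np.abs(A[np.ix_(others, others)]) < tol
                score = np.linalg.norm(update[was_zero])
            if score < best_score:
                best, best_score = k, score
        order.append(best)
        others = [i for i in remaining if i != best]
        if others and abs(A[best, best]) > tol:
            A[np.ix_(others, others)] -= np.outer(A[others, best],
                                                  A[best, others]) / A[best, best]
        remaining.remove(best)
    return order

# Arbitrary small sparse-looking test matrix.
A = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
print(mdf_order(A))
```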

  4. An Estimate and Score Algorithm for Simultaneous Parameter Estimation and Reconstruction of Incomplete Data on Social Networks

    DTIC Science & Technology

    2013-01-12

    www.security-informatics.com/content/2/1/1. References (excerpt): 1. SM Radil, C Flint, GE Tita, Spatializing Social Networks: Using Social Network Analysis to... (http://www.tandfonline.com/doi/abs/10.1080/00045600903550428). 2. G Tita, S Radil, Spatializing the social networks of gangs to explore patterns of...violence. Journal of Quantitative Criminology 27, 1–25 (2011). 3. G Tita, JK Riley, G Ridgeway, AF Abrahamse, P Greenwood, Reducing Gun Violence...

  5. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    NASA Astrophysics Data System (ADS)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
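
    A schematic of the superiorization pattern on a generic underdetermined linear problem: the basic algorithm is a nonnegativity-projected Landweber feasibility step, and before each step the iterate is nudged by small, summable perturbations along a negative TV subgradient. The random system matrix, phantom, and step sizes stand in for the polyenergetic CT model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def tv(x):
    """Anisotropic total variation of a 2D image."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def tv_subgrad(x):
    """A subgradient of anisotropic TV."""
    g = np.zeros_like(x)
    dx = np.sign(np.diff(x, axis=0))
    dy = np.sign(np.diff(x, axis=1))
    g[:-1, :] -= dx; g[1:, :] += dx
    g[:, :-1] -= dy; g[:, 1:] += dy
    return g

# Toy piecewise-constant phantom and an underdetermined linear "scanner".
n = 16
phantom = np.zeros((n, n)); phantom[4:12, 4:12] = 1.0
A = rng.standard_normal((n * n // 2, n * n))      # stand-in for the CT model
b = A @ phantom.ravel()

x = np.zeros((n, n))
lam = 1.0 / np.linalg.norm(A, 2) ** 2             # Landweber step size
beta, kernel = 1.0, 0.995                          # summable perturbation sizes

for k in range(300):
    # Superiorization: bounded TV-reducing perturbation ...
    g = tv_subgrad(x)
    gnorm = np.linalg.norm(g)
    if gnorm > 0:
        x = x - beta * (kernel ** k) * g / gnorm
    # ... followed by the basic feasibility-seeking (projected Landweber) step.
    r = A @ x.ravel() - b
    x = np.clip((x.ravel() - lam * (A.T @ r)).reshape(n, n), 0.0, None)

print("residual:", np.linalg.norm(A @ x.ravel() - b), " TV:", tv(x))
```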

  6. Modeling biological problems in computer science: a case study in genome assembly.

    PubMed

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
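
    As one concrete waypoint on the path the tutorial describes, the sketch below builds a node-centric de Bruijn graph from error-free toy reads and walks it greedily; the reads and k are made up, and real assemblers must additionally handle sequencing errors, uneven coverage, and repeats.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Map each (k-1)-mer to the (k-1)-mers that follow it in some read."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def greedy_walk(graph, start):
    """Follow edges while they are unambiguous; stop at branches or dead ends."""
    contig, node = start, start
    while len(graph.get(node, ())) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
        if len(contig) > 1000:   # safety stop for cyclic toy inputs
            break
    return contig

reads = ["ACGTGGT", "GTGGTCA", "GGTCAAC"]   # error-free toy reads
g = de_bruijn(reads, k=4)
print(greedy_walk(g, "ACG"))                 # reconstructs ACGTGGTCAAC
```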

  7. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver to whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third steps projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus the solution of the problem, on one where she knows half solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with the advanced knowledge of half solution.

  8. A Firefly Algorithm-based Approach for Pseudo-Relevance Feedback: Application to Medical Database.

    PubMed

    Khennak, Ilyes; Drias, Habiba

    2016-11-01

    The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries has caused the failure of search systems to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is Query Expansion, whereby the user's original query is augmented by new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded query candidates. Moreover, this new approach allows the determination of the length of the expanded query empirically. Experimental results on MEDLINE, the on-line medical information database, show that our proposed approach is more effective and efficient compared to the state-of-the-art.
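
    The core firefly update is shown here on a generic continuous minimization problem (the sphere function) rather than on scored query-expansion candidates: brighter (better-scoring) fireflies attract the others with an attractiveness that decays with distance, plus a damped random walk. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Stand-in fitness: in the paper this would score an expanded query.
    return np.sum(x ** 2)

n_fireflies, dim, n_iter = 20, 5, 200
alpha, beta0, gamma = 0.2, 1.0, 1.0          # randomness, attractiveness, absorption

X = rng.uniform(-5.0, 5.0, size=(n_fireflies, dim))
fitness = np.array([objective(x) for x in X])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fitness[j] < fitness[i]:      # j is brighter: i moves toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                fitness[i] = objective(X[i])
    alpha *= 0.98                            # gradually damp the random walk

best = X[np.argmin(fitness)]
print(np.min(fitness), best)
```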

  9. Automated peroperative assessment of stents apposition from OCT pullbacks.

    PubMed

    Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent

    2015-04-01

    This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological and gradient operators to run a Dijkstra algorithm. Wrong detections tagged by the user and caused by bifurcations, struts' presence, thrombotic lesions or dissections can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes a complete stent distance map, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation and use of the one-side cyclic property of the map also optimize accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. A hierarchical graph neuron scheme for real-time pattern recognition.

    PubMed

    Nasution, B B; Khan, A I

    2008-02-01

    The hierarchical graph neuron (HGN) implements a single cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the already published original graph neuron (GN) algorithm. In this improved approach, it recognizes incomplete/noisy patterns. It also resolves the crosstalk problem, which is identified in the previous publications, within closely matched patterns. To accomplish this, the HGN links multiple GN networks for filtering noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm which does not require expensive floating point computations; hence, it is very suitable for real-time applications and tiny devices such as the wireless sensor networks. This paper describes that the HGN's pattern matching capability and the small response time remain insensitive to the increases in the number of stored patterns. Moreover, the HGN does not require definition of rules or setting of thresholds by the operator to achieve the desired results nor does it require heuristics entailing iterative operations for memorization and recall of patterns.

  11. Preconditioned conjugate gradient methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1994-01-01

    A preconditioned Krylov subspace method (GMRES) is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. Several preconditioning techniques are investigated to enhance the efficiency and convergence rate of the implicit solver based on the GMRES algorithm. The superiority of the new solver is established by comparisons with a conventional implicit solver, namely line Gauss-Seidel relaxation (LGSR). Computational test results for low-speed (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing edge flow in a transonic turbine cascade), and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, overall speedup factors of up to 17 (in terms of time-steps) and 15 (in terms of CPU time on a CRAY-YMP/8) are found in favor of the preconditioned GMRES solver, when compared with the LGSR solver. The corresponding speedup factors for the transonic flow case are 17 and 23, respectively. The hypersonic flow case shows slightly lower speedup factors of 9 and 13, respectively. The study of preconditioners conducted in this research reveals that a new LUSGS-type preconditioner is much more efficient than a conventional incomplete LU-type preconditioner.
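
    The same ingredients are available off the shelf for sparse systems: the sketch below solves a small convection-diffusion-type system with SciPy's GMRES, once unpreconditioned and once preconditioned by an incomplete LU factorization (spilu). The finite-difference test matrix is a generic stand-in, not the flux-split Navier-Stokes Jacobian of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic 2D convection-diffusion operator on a 64x64 grid (a stand-in for the
# linearized system solved at each implicit time step).
n = 64
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
C = sp.diags([-0.5, 0.5], [-1, 1], shape=(n, n))   # convection term
A = (sp.kron(I, T) + sp.kron(T, I) + 0.3 * sp.kron(I, C)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization wrapped as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

counts = {"plain": 0, "ilu": 0}
x0, _ = spla.gmres(A, b, callback=lambda *_: counts.__setitem__("plain", counts["plain"] + 1))
x1, _ = spla.gmres(A, b, M=M, callback=lambda *_: counts.__setitem__("ilu", counts["ilu"] + 1))

print("GMRES iterations - no preconditioner:", counts["plain"], " with ILU:", counts["ilu"])
print("residual norms   -", np.linalg.norm(A @ x0 - b), np.linalg.norm(A @ x1 - b))
```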

  12. The physiological kinetics of nitrogen and the prevention of decompression sickness.

    PubMed

    Doolette, D J; Mitchell, S J

    2001-01-01

    Decompression sickness (DCS) is a potentially crippling disease caused by intracorporeal bubble formation during or after decompression from a compressed gas underwater dive. Bubbles most commonly evolve from dissolved inert gas accumulated during the exposure to increased ambient pressure. Most diving is performed breathing air, and the inert gas of interest is nitrogen. Divers use algorithms based on nitrogen kinetic models to plan the duration and degree of exposure to increased ambient pressure and to control their ascent rate. However, even correct execution of dives planned using such algorithms often results in bubble formation and may result in DCS. This reflects the importance of idiosyncratic host factors that are difficult to model, and deficiencies in current nitrogen kinetic models. Models describing the exchange of nitrogen between tissues and blood may be based on distributed capillary units or lumped compartments, either of which may be perfusion- or diffusion-limited. However, such simplistic models are usually poor predictors of experimental nitrogen kinetics at the organ or tissue level, probably because they fail to account for factors such as heterogeneity in both tissue composition and blood perfusion and non-capillary exchange mechanisms. The modelling of safe decompression procedures is further complicated by incomplete understanding of the processes that determine bubble formation. Moreover, any formation of bubbles during decompression alters subsequent nitrogen kinetics. Although these factors mandate complex resolutions to account for the interaction between dissolved nitrogen kinetics and bubble formation and growth, most decompression schedules are based on relatively simple perfusion-limited lumped compartment models of blood: tissue nitrogen exchange. Not surprisingly, all models inevitably require empirical adjustment based on outcomes in the field. Improvements in the predictive power of decompression calculations are being achieved using probabilistic bubble models, but divers will always be subject to the possibility of developing DCS despite adherence to prescribed limits.
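
    The perfusion-limited lumped-compartment description mentioned above reduces to an exponential approach of tissue nitrogen tension toward the arterial value, dPt/dt = k(Pa - Pt). The sketch below integrates a few parallel compartments through a square toy dive profile; the half-times, depth, and times are illustrative and emphatically not a decompression schedule.

```python
import numpy as np

# Illustrative compartment half-times in minutes (not a validated decompression model).
half_times = np.array([5.0, 20.0, 60.0, 120.0])
k = np.log(2.0) / half_times            # perfusion-limited rate constants

f_n2 = 0.79                             # inert gas (nitrogen) fraction in air
surface = 1.0                           # ambient pressure at the surface, bar

def ambient_pressure(t_min):
    """Square toy dive: 25 min at 30 m (about 4 bar absolute), then surfacing."""
    return 4.0 if t_min <= 25.0 else surface

# Perfusion-limited compartments: dPt/dt = k * (Pa - Pt), explicit Euler in time.
dt, t_end = 0.01, 60.0
P = np.full_like(k, f_n2 * surface)     # tissues start equilibrated at the surface
for t in np.arange(0.0, t_end, dt):
    Pa = f_n2 * ambient_pressure(t)     # arterial/alveolar inert gas tension
    P = P + dt * k * (Pa - P)

print("tissue N2 tensions after 60 min (bar):", np.round(P, 3))
```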

  13. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    PubMed

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    To investigate the impact of Toshiba phase- and amplitude-sorting algorithms on the margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations. 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and a mix of amplitude and frequency variations. Artefact analysis was performed for Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts for amplitude-only variations, while the phase-sorting algorithm does so for frequency-only variations. For combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference of the surface distortion value, which ranges between a maximum of 14.6% and a minimum of 4.1%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in reducing motion artefacts for amplitude variations, while the phase-sorting algorithm is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
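
    The difference between the two strategies comes down to how each projection's time stamp is mapped to a respiratory bin: phase sorting divides each (nominal) breathing cycle evenly in time, while amplitude sorting bins by the value of the surrogate signal itself. The sketch below applies both rules to a synthetic breathing trace containing an amplitude irregularity; the trace and bin count are illustrative.

```python
import numpy as np

# Synthetic respiratory surrogate: 4 s period, with a stretch of shallow breaths.
t = np.arange(0.0, 20.0, 0.05)
amp = np.where((t > 8.0) & (t < 12.0), 0.5, 1.0)      # amplitude irregularity
signal = amp * np.sin(2.0 * np.pi * t / 4.0)

n_bins = 10

# Phase sorting: fraction of the (assumed 4 s) cycle elapsed at each sample.
phase = (t % 4.0) / 4.0
phase_bin = np.minimum((phase * n_bins).astype(int), n_bins - 1)

# Amplitude sorting: equal-width bins between the signal's minimum and maximum.
edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
amp_bin = np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)

# During the shallow breaths, every phase bin is still filled (but at a different
# amplitude), whereas the extreme amplitude bins simply receive no data.
shallow = (t > 8.0) & (t < 12.0)
print("phase bins used during irregularity:    ", np.unique(phase_bin[shallow]))
print("amplitude bins used during irregularity:", np.unique(amp_bin[shallow]))
```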

  14. A Factor Analysis of Learning Data and Selected Ability Test Scores

    ERIC Educational Resources Information Center

    Jones, Dorothy L.

    1976-01-01

    A verbal concept-learning task permitting the externalizing and quantifying of learning behavior and 16 ability tests were administered to female graduate students. Data were analyzed by alpha factor analysis and incomplete image analysis. Six alpha factors and 12 image factors were extracted and orthogonally rotated. Four areas of cognitive…

  15. When Machines Think: Radiology's Next Frontier.

    PubMed

    Dreyer, Keith J; Geis, J Raymond

    2017-12-01

    Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models combine to enable machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. To show how these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.

  16. The Devil Is in the Details: Incomplete Reporting in Preclinical Animal Research.

    PubMed

    Avey, Marc T; Moher, David; Sullivan, Katrina J; Fergusson, Dean; Griffin, Gilly; Grimshaw, Jeremy M; Hutton, Brian; Lalu, Manoj M; Macleod, Malcolm; Marshall, John; Mei, Shirley H J; Rudnicki, Michael; Stewart, Duncan J; Turgeon, Alexis F; McIntyre, Lauralyn

    2016-01-01

    Incomplete reporting of study methods and results has become a focal point for failures in the reproducibility and translation of findings from preclinical research. Here we demonstrate that incomplete reporting of preclinical research is not limited to a few elements of research design, but rather is a broader problem that extends to the reporting of the methods and results. We evaluated 47 preclinical research studies from a systematic review of acute lung injury that use mesenchymal stem cells (MSCs) as a treatment. We operationalized the ARRIVE (Animal Research: Reporting of In Vivo Experiments) reporting guidelines for pre-clinical studies into 109 discrete reporting sub-items and extracted 5,123 data elements. Overall, studies reported less than half (47%) of all sub-items (median 51 items; range 37-64). Across all studies, the Methods Section reported less than half (45%) and the Results Section reported less than a third (29%). There was no association between journal impact factor and completeness of reporting, which suggests that incomplete reporting of preclinical research occurs across all journals regardless of their perceived prestige. Incomplete reporting of methods and results will impede attempts to replicate research findings and maximize the value of preclinical studies.

  17. Relationship between isoseismal area and magnitude of historical earthquakes in Greece by a hybrid fuzzy neural network method

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Sokos, E.

    2012-01-01

    In this paper we suggest the use of diffusion neural networks (neural networks with intrinsic fuzzy logic abilities) to assess the relationship between isoseismal area and earthquake magnitude for the region of Greece. It is of particular importance to study historical earthquakes, for which we often have macroseismic information in the form of isoseisms, but such data are statistically incomplete for assessing magnitudes from an isoseismal area or for training conventional artificial neural networks for magnitude estimation. Fuzzy relationships are developed and used to train a feed-forward neural network with a back-propagation algorithm to obtain the final relationships. Seismic intensity data from 24 earthquakes in Greece have been used. Special attention is paid to the incompleteness and contradictory patterns in scanty historical earthquake records. The results show that the proposed processing model is very effective, better than applying classical artificial neural networks, since the magnitude-macroseismic intensity target function is strongly nonlinear and in most cases the macroseismic datasets are very small.

  18. Optimal (R, Q) policy and pricing for two-echelon supply chain with lead time and retailer's service-level incomplete information

    NASA Astrophysics Data System (ADS)

    Esmaeili, M.; Naghavi, M. S.; Ghahghaei, A.

    2018-03-01

    Many studies focus on inventory systems to analyze different real-world situations. This paper considers a two-echelon supply chain that includes one warehouse and one retailer with stochastic demand and an up-to-level policy. The retailer's lead time includes the transportation time from the warehouse to the retailer that is unknown to the retailer. On the other hand, the warehouse is unaware of retailer's service level. The relationship between the retailer and the warehouse is modeled based on the Stackelberg game with incomplete information. Moreover, their relationship is presented when the warehouse and the retailer reveal their private information using the incentive strategies. The optimal inventory and pricing policies are obtained using an algorithm based on bi-level programming. Numerical examples, including sensitivity analysis of some key parameters, will compare the results between the Stackelberg models. The results show that information sharing is more beneficial to the warehouse rather than the retailer.

  19. Understanding the Milky Way Halo through Large Surveys

    NASA Astrophysics Data System (ADS)

    Koposov, Sergey

    This thesis presents an extensive study of stellar substructure in the outskirts of the Milky Way (MW), combining data mining of SDSS with theoretical modeling. Such substructure, either bound star clusters and satellite galaxies, or tidally disrupted objects forming stellar streams, are powerful diagnostics of the Milky Way's dynamics and formation history. I have developed an algorithmic technique of searching for stellar overdensities in the MW halo, based on SDSS catalogs. This led to the discovery of unusual ultra-faint (~1000 Lsun) globular clusters with very compact sizes and relaxation times << t_Hubble. The detailed analysis of a known stellar stream (GD-1) allowed me to make the first 6-D phase space map for such an object along 60 degrees on the sky. By modeling the stream's orbit I could place strong constraints on the Galactic potential, e.g. Vcirc(R0) = 224 +/- 13 km/s. The application of the algorithmic search for stellar overdensities to the SDSS dataset and to mock datasets allowed me to quantify SDSS's severe radial incompleteness in its search for ultra-faint dwarf galaxies and to determine the luminosity function of MW satellites down to luminosities of M_V ~ -3. I used the semi-analytical model in order to compare the CDM model predictions for the MW satellite population with the observations; this comparison has shown that the recently increased census of MW satellites, better understanding of the radial incompleteness and the suppression of star formation after the reionization can fully solve the "Missing satellite problem".

  20. Expression of insulin-like growth factor-1 and proliferating cell nuclear antigen in human pulp cells of teeth with complete and incomplete root development.

    PubMed

    Caviedes-Bucheli, J; Canales-Sánchez, P; Castrillón-Sarria, N; Jovel-Garcia, J; Alvarez-Vásquez, J; Rivero, C; Azuero-Holguín, M M; Diaz, E; Munoz, H R

    2009-08-01

    To quantify the expression of insulin-like growth factor-1 (IGF-1) and proliferating cell nuclear antigen (PCNA) in human pulp cells of teeth with complete or incomplete root development, to support the specific role of IGF-1 in cell proliferation during tooth development and pulp reparative processes. Twenty six pulp samples were obtained from freshly extracted human third molars, equally divided in two groups according to root development stage (complete or incomplete root development). All samples were processed and immunostained to determine the expression of IGF-1 and PCNA in pulp cells. Sections were observed with a light microscope at 80x and morphometric analyses were performed to calculate the area of PCNA and IGF-1 immunostaining using digital image software. Mann-Whitney's test was used to determine statistically significant differences between groups (P < 0.05) for each peptide and the co-expression of both. Expression of IGF-1 and PCNA was observed in all human pulp samples with a statistically significant higher expression in cells of pulps having complete root development (P = 0.0009). Insulin-like growth factor-1 and PCNA are expressed in human pulp cells, with a significant greater expression in pulp cells of teeth having complete root development.

  1. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data.

    PubMed

    Pesesky, Mitchell W; Hussain, Tahir; Wallace, Meghan; Patel, Sanket; Andleeb, Saadia; Burnham, Carey-Ann D; Dantas, Gautam

    2016-01-01

    The time-to-result for culture-based microorganism recovery and phenotypic antimicrobial susceptibility testing necessitates initial use of empiric (frequently broad-spectrum) antimicrobial therapy. If the empiric therapy is not optimal, this can lead to adverse patient outcomes and contribute to increasing antibiotic resistance in pathogens. New, more rapid technologies are emerging to meet this need. Many of these are based on identifying resistance genes, rather than directly assaying resistance phenotypes, and thus require interpretation to translate the genotype into treatment recommendations. These interpretations, like other parts of clinical diagnostic workflows, are likely to be increasingly automated in the future. We set out to evaluate the two major approaches that could be amenable to automation pipelines: rules-based methods and machine learning methods. The rules-based algorithm makes predictions based upon current, curated knowledge of Enterobacteriaceae resistance genes. The machine-learning algorithm predicts resistance and susceptibility based on a model built from a training set of variably resistant isolates. As our test set, we used whole genome sequence data from 78 clinical Enterobacteriaceae isolates, previously identified to represent a variety of phenotypes, from fully-susceptible to pan-resistant strains for the antibiotics tested. We tested three antibiotic resistance determinant databases for their utility in identifying the complete resistome for each isolate. The predictions of the rules-based and machine learning algorithms for these isolates were compared to results of phenotype-based diagnostics. The rules based and machine-learning predictions achieved agreement with standard-of-care phenotypic diagnostics of 89.0 and 90.3%, respectively, across twelve antibiotic agents from six major antibiotic classes. Several sources of disagreement between the algorithms were identified. Novel variants of known resistance factors and incomplete genome assembly confounded the rules-based algorithm, resulting in predictions based on gene family, rather than on knowledge of the specific variant found. Low-frequency resistance caused errors in the machine-learning algorithm because those genes were not seen or seen infrequently in the test set. We also identified an example of variability in the phenotype-based results that led to disagreement with both genotype-based methods. Genotype-based antimicrobial susceptibility testing shows great promise as a diagnostic tool, and we outline specific research goals to further refine this methodology.
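
    A toy illustration of the rules-based arm of such a comparison: detected resistance genes are mapped through a small curated table to predicted non-susceptibility calls, which are then compared against a phenotypic result. The gene-to-drug table and the example isolate are invented and far smaller than any real curated database.

```python
# Toy curated rules: resistance gene family -> antibiotics predicted non-susceptible.
# (Invented, minimal table for illustration only.)
RULES = {
    "blaCTX-M": {"ampicillin", "ceftriaxone"},
    "blaTEM":   {"ampicillin"},
    "aac(6')":  {"gentamicin"},
    "qnrS":     {"ciprofloxacin"},
}

PANEL = ["ampicillin", "ceftriaxone", "gentamicin", "ciprofloxacin", "meropenem"]

def predict(genes):
    """Union of the per-gene rules over the detected resistome."""
    hits = set()
    for g in genes:
        hits |= RULES.get(g, set())
    return {drug: ("R" if drug in hits else "S") for drug in PANEL}

# Hypothetical isolate: genes detected in its assembly, and its lab phenotype.
detected = ["blaCTX-M", "qnrS"]
phenotype = {"ampicillin": "R", "ceftriaxone": "R", "gentamicin": "S",
             "ciprofloxacin": "R", "meropenem": "S"}

prediction = predict(detected)
agree = sum(prediction[d] == phenotype[d] for d in PANEL)
print(prediction)
print(f"agreement with phenotype: {agree}/{len(PANEL)}")
```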

  2. High-Intensity Locomotor Exercise Increases Brain-Derived Neurotrophic Factor in Individuals with Incomplete Spinal Cord Injury.

    PubMed

    Leech, Kristan A; Hornby, T George

    2017-03-15

    High-intensity locomotor exercise is suggested to contribute to improved recovery of locomotor function after neurological injury. This may be secondary to exercise-intensity-dependent increases in neurotrophin expression demonstrated previously in control subjects. However, rigorous examination of intensity-dependent changes in neurotrophin levels is lacking in individuals with motor incomplete spinal cord injury (SCI). Therefore, the primary aim of this study was to evaluate the effect of locomotor exercise intensity on peripheral levels of brain-derived neurotrophic factor (BDNF) in individuals with incomplete SCI. We also explored the impact of the Val66Met single-nucleotide polymorphism (SNP) on the BDNF gene on intensity-dependent changes. Serum concentrations of BDNF and insulin-like growth factor-1 (IGF-1), as well as measures of cardiorespiratory dynamics, were evaluated across different levels of exercise intensity achieved during a graded-intensity, locomotor exercise paradigm in 11 individuals with incomplete SCI. Our results demonstrate a significant increase in serum BDNF at high, as compared to moderate, exercise intensities (p = 0.01) and 15 and 30 min post-exercise (p < 0.01 for both), with comparison to changes at low intensity approaching significance (p = 0.05). Serum IGF-1 demonstrated no intensity-dependent changes. Significant correlations were observed between changes in BDNF and specific indicators of exercise intensity (e.g., rating of perceived exertion; R = 0.43; p = 0.02). Additionally, the data suggest that Val66Met SNP carriers may not exhibit intensity-dependent changes in serum BDNF concentration. Given the known role of BDNF in experience-dependent neuroplasticity, these preliminary results suggest that exercise intensity modulates serum BDNF concentrations and may be an important parameter of physical rehabilitation interventions after neurological injury.

  3. High-Intensity Locomotor Exercise Increases Brain-Derived Neurotrophic Factor in Individuals with Incomplete Spinal Cord Injury

    PubMed Central

    Leech, Kristan A.

    2017-01-01

    Abstract High-intensity locomotor exercise is suggested to contribute to improved recovery of locomotor function after neurological injury. This may be secondary to exercise-intensity–dependent increases in neurotrophin expression demonstrated previously in control subjects. However, rigorous examination of intensity-dependent changes in neurotrophin levels is lacking in individuals with motor incomplete spinal cord injury (SCI). Therefore, the primary aim of this study was to evaluate the effect of locomotor exercise intensity on peripheral levels of brain-derived neurotrophic factor (BDNF) in individuals with incomplete SCI. We also explored the impact of the Val66Met single-nucleotide polymorphism (SNP) on the BDNF gene on intensity-dependent changes. Serum concentrations of BDNF and insulin-like growth factor-1 (IGF-1), as well as measures of cardiorespiratory dynamics, were evaluated across different levels of exercise intensity achieved during a graded-intensity, locomotor exercise paradigm in 11 individuals with incomplete SCI. Our results demonstrate a significant increase in serum BDNF at high, as compared to moderate, exercise intensities (p = 0.01) and 15 and 30 min post-exercise (p < 0.01 for both), with comparison to changes at low intensity approaching significance (p = 0.05). Serum IGF-1 demonstrated no intensity-dependent changes. Significant correlations were observed between changes in BDNF and specific indicators of exercise intensity (e.g., rating of perceived exertion; R = 0.43; p = 0.02). Additionally, the data suggest that Val66Met SNP carriers may not exhibit intensity-dependent changes in serum BDNF concentration. Given the known role of BDNF in experience-dependent neuroplasticity, these preliminary results suggest that exercise intensity modulates serum BDNF concentrations and may be an important parameter of physical rehabilitation interventions after neurological injury. PMID:27526567

  4. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.
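
    The EEZ construction itself is involved, but multivariate factorization over the integers is exposed directly by modern computer algebra systems; as a quick illustration using SymPy's general factor routine (not necessarily the EEZ algorithm described here):

```python
from sympy import symbols, expand, factor

x, y, z = symbols("x y z")

# Build a polynomial with known factors, expand it, then recover the factors.
p = expand((x * y - z + 1) * (2 * x + y * z - 3) * (x + y + z))
print(factor(p))
# -> (x + y + z)*(x*y - z + 1)*(2*x + y*z - 3)  (up to ordering and sign)
```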

  5. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    PubMed

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  6. Influential Factors in Incomplete Acquisition and Attrition of Young Heritage Speakers' Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Gharibi, Khadijeh; Boers, Frank

    2017-01-01

    This study investigates whether young heritage speakers, either simultaneous or sequential bilinguals, have limited vocabulary knowledge in their family language compared to matched monolingual counterparts and, if so, what factors help to account for this difference. These factors include age, age at emigration, length of emigration, frequency of…

  7. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
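
    A minimal sketch of the temporal side of such gap filling: linear interpolation in time across missing composites of a single pixel's LAI series. The synthetic seasonal curve and missing-data pattern below are illustrative; the MODIS product and the hybrid spatial-temporal scheme evaluated in the paper are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic one-year LAI series for a single pixel, 8-day composites.
doy = np.arange(1, 366, 8, dtype=float)
lai_true = 1.0 + 2.5 * np.exp(-0.5 * ((doy - 200.0) / 60.0) ** 2)  # seasonal curve

# Knock out about 30% of composites to mimic cloud/QA losses.
lai = lai_true.copy()
missing = rng.random(lai.shape) < 0.3
lai[missing] = np.nan

# Temporal gap fill: linear interpolation against day of year.
valid = ~np.isnan(lai)
lai_filled = lai.copy()
lai_filled[~valid] = np.interp(doy[~valid], doy[valid], lai[valid])

rmse = np.sqrt(np.mean((lai_filled[missing] - lai_true[missing]) ** 2))
print(f"filled {missing.sum()} of {lai.size} composites, RMSE = {rmse:.3f}")
```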

  8. Risk factor assessment of endoscopically removed malignant colorectal polyps

    PubMed Central

    Netzer, P; Forster, C; Biral, R; Ruchti, C; Neuweiler, J; Stauffer, E; Schonegg, R; Maurer, C; Husler, J; Halter, F; Schmassmann, A

    1998-01-01

    Background—Malignant colorectal polyps are defined as endoscopically removed polyps with cancerous tissue which has invaded the submucosa. Various histological criteria exist for managing these patients. 
Aims—To determine the significance of histological findings of patients with malignant polyps. 
Methods—Five pathologists reviewed the specimens of 85 patients initially diagnosed with malignant polyps. High risk malignant polyps were defined as having one of the following: incomplete polypectomy, a margin not clearly cancer-free, lymphatic or venous invasion, or grade III carcinoma. Adverse outcome was defined as residual cancer in a resection specimen and local or metastatic recurrence in the follow up period (mean 67 months).
Results—Malignant polyps were confirmed in 70 cases. In the 32 low risk malignant polyps, no adverse outcomes occurred; 16 (42%) of the 38 patients with high risk polyps had adverse outcomes (p<0.001). Independent adverse risk factors were incomplete polypectomy and a resected margin not clearly cancer-free; all other risk factors were only associated with adverse outcome when in combination.
Conclusion—As no patients with low risk malignant polyps had adverse outcomes, polypectomy alone seems sufficient for these cases. In the high risk group, surgery is recommended when either of the two independent risk factors, incomplete polypectomy or a resection margin not clearly cancer-free, is present or if there is a combination of other risk factors. As lymphatic or venous invasion or grade III cancer did not have an adverse outcome when the sole risk factor, operations in such cases should be individually assessed on the basis of surgical risk. 

 Keywords: malignant polyps; colon cancer; colonoscopy; polypectomy; histology PMID:9824349

  9. Electronic versus traditional chest tube drainage following lobectomy: a randomized trial.

    PubMed

    Lijkendijk, Marike; Licht, Peter B; Neckelmann, Kirsten

    2015-12-01

    Electronic drainage systems have shown superiority compared with traditional (water seal) drainage systems following lung resections, but the number of studies is limited. As part of a medico-technical evaluation, before change of practice to electronic drainage systems for routine thoracic surgery, we conducted a randomized controlled trial (RCT) investigating chest tube duration and length of hospitalization. Patients undergoing lobectomy were included in a prospective open label RCT. A strict algorithm was designed for early chest tube removal, and this decision was delegated to staff nurses. Data were analysed by Cox proportional hazard regression model adjusting for lung function, gender, age, BMI, video-assisted thoracic surgery (VATS) or open surgery and presence of incomplete fissure or pleural adhesions. Time was distinguished as possible (optimal) and actual time for chest tube removal, as well as length of hospitalization. A total of 105 patients were randomized. We found no significant difference between the electronic group and traditional group in optimal chest tube duration (HR = 0.83; 95% CI: 0.55-1.25; P = 0.367), actual chest tube duration (HR = 0.84; 95% CI: 0.55-1.26; P = 0.397) or length of hospital stay (HR = 0.91; 95% CI: 0.59-1.39; P = 0.651). No chest tubes had to be reinserted. Presence of pleural adhesions or an incomplete fissure was a significant predictor of chest tube duration (HR = 1.72; 95% CI: 1.15-2.77; P = 0.014). Electronic drainage systems did not reduce chest tube duration or length of hospitalization significantly compared with traditional water seal drainage when a strict algorithm for chest tube removal was used. This algorithm allowed delegation of chest tube removal to staff nurses, and in some patients chest tubes could be removed safely on the day of surgery. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  10. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    NASA Astrophysics Data System (ADS)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

    This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, in which Iran has been chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data to improve incomplete earthquake catalogues. Then, the synthetic data were added to the traditional information to study the seismicity homogeneity and classify the areas according to tectonic and seismic properties to update seismotectonic provinces. In this paper, (i) different magnitude types in the studied catalogues have been homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees; the maximum range was identified in the azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied to generate simulated events, which produced 68,800 synthetic events according to the spatial regression found in several directions; (v) simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low seismic potential provinces. Seismotectonic properties of the very high seismic potential provinces have also been presented.

  11. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
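
    The Gerchberg-Papoulis iteration alternates between enforcing the known samples in the signal domain and the band-limit constraint in the Fourier domain. The sketch below recovers missing samples of a band-limited 1D signal this way; the signal, band limit, and sampling pattern are synthetic stand-ins for the multichannel radiometric images.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 256
freqs = np.fft.fftfreq(n)
band = np.abs(freqs) <= 0.05                     # known band limit

# Band-limited test signal (real part of a randomly drawn in-band spectrum).
spec = np.zeros(n, dtype=complex)
spec[band] = rng.standard_normal(band.sum()) + 1j * rng.standard_normal(band.sum())
signal = np.real(np.fft.ifft(spec))

# Only 60% of the samples are observed.
observed = rng.random(n) < 0.6

x = np.where(observed, signal, 0.0)
for _ in range(500):
    # Constraint 1: keep only in-band frequency content.
    X = np.fft.fft(x)
    X[~band] = 0.0
    x = np.real(np.fft.ifft(X))
    # Constraint 2: reimpose the measured samples where they exist.
    x[observed] = signal[observed]

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {err:.2e}")
```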

  12. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization with which the processor can achieve its best performance. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  13. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
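
    A minimal rate-based NumPy sketch of the wake-sleep algorithm for a two-layer binary Helmholtz machine, using only local delta rules; it is a stand-in for the spiking-neuron implementation described in the paper, and the layer sizes, learning rate, and toy training patterns are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    n_vis, n_hid, lr = 8, 4, 0.05
    R = np.zeros((n_hid, n_vis))   # recognition weights (visible -> hidden)
    G = np.zeros((n_vis, n_hid))   # generative weights  (hidden -> visible)
    b = np.zeros(n_hid)            # generative prior over hidden units

    def wake_sleep_step(x):
        # Wake phase: recognise a hidden code for real data x, then train the
        # generative pathway with a local delta rule.
        h = sample(sigmoid(R @ x))
        dG = np.outer(x - sigmoid(G @ h), h)
        db = h - sigmoid(b)
        # Sleep phase: dream a fantasy (h, x) pair from the generative model,
        # then train the recognition pathway with a local delta rule.
        h_dream = sample(sigmoid(b))
        x_dream = sample(sigmoid(G @ h_dream))
        dR = np.outer(h_dream - sigmoid(R @ x_dream), x_dream)
        return lr * dG, lr * db, lr * dR

    # Toy training loop on two noisy binary prototypes
    protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1]], float)
    for _ in range(5000):
        x = protos[rng.integers(2)].copy()
        x[rng.integers(n_vis)] = rng.integers(2)   # corrupt one visible unit
        dG, db, dR = wake_sleep_step(x)
        G += dG; b += db; R += dR
    ```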

  14. Spiking neuron network Helmholtz machine

    PubMed Central

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191

  15. Incorrect support and missing center tolerances of phasing algorithms

    DOE PAGES

    Huang, Xiaojing; Nelson, Johanna; Steinbrener, Jan; ...

    2010-01-01

    In x-ray diffraction microscopy, iterative algorithms retrieve reciprocal space phase information, and a real space image, from an object's coherent diffraction intensities through the use of a priori information such as a finite support constraint. In many experiments, the object's shape or support is not well known, and the diffraction pattern is incompletely measured. We describe here computer simulations to look at the effects of both of these possible errors when using several common reconstruction algorithms. Overly tight object supports prevent successful convergence; however, we show that this can often be recognized through pathological behavior of the phase retrieval transfer function. Dynamic range limitations often make it difficult to record the central speckles of the diffraction pattern. We show that this leads to increasing artifacts in the image when the number of missing central speckles exceeds about 10, and that the removal of unconstrained modes from the reconstructed image is helpful only when the number of missing central speckles is less than about 50. In conclusion, this simulation study helps in judging the reconstructability of experimentally recorded coherent diffraction patterns.
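
    A toy NumPy sketch of one of the common reconstruction algorithms studied in such simulations (plain error reduction with a support constraint), in which the Fourier modes over the missing central speckles are simply left unconstrained. It is an illustration of the setting, not the authors' code; HIO and the removal of unconstrained modes are not implemented.

    ```python
    import numpy as np

    def error_reduction(intensity, support, measured_mask, n_iter=500, rng=None):
        """Toy error-reduction phase retrieval with a finite support constraint.
        intensity     : measured diffraction intensities |F|^2
        support       : boolean real-space support estimate
        measured_mask : boolean, False over the missing central speckles; those
                        Fourier amplitudes are left unconstrained, as discussed above.
        """
        rng = np.random.default_rng(rng)
        amp = np.sqrt(np.maximum(intensity, 0.0))
        g = support * rng.random(intensity.shape)             # random start in the support
        for _ in range(n_iter):
            G = np.fft.fft2(g)
            phase = np.exp(1j * np.angle(G))
            G_new = np.where(measured_mask, amp * phase, G)   # keep unmeasured modes free
            g = np.real(np.fft.ifft2(G_new))
            g = np.where(support, np.clip(g, 0.0, None), 0.0) # support + nonnegativity
        return g
    ```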

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two part paper, inspired by proliferation of the metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient – polynomial in time – which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  17. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.

  18. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
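
    A hedged sketch of the core idea of a binomial peak-match score: the probability of matching at least k of n theoretical fragment peaks by chance is converted to a -log10 score. ProVerB additionally exploits peak intensities and a refined scoring function, which are not reproduced here; the parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import binom

    def binomial_match_score(n_theoretical, n_matched, p_random):
        """Score a peptide-spectrum match as the improbability of observing at
        least `n_matched` of `n_theoretical` fragment peaks by chance, each with
        random match probability `p_random` (set, e.g., from the fragment mass
        tolerance and the spectrum's peak density).  Higher is more convincing."""
        p_tail = binom.sf(n_matched - 1, n_theoretical, p_random)  # P(X >= n_matched)
        return -np.log10(max(p_tail, 1e-300))                      # guard against log(0)

    print(binomial_match_score(n_theoretical=20, n_matched=9, p_random=0.05))
    ```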

  19. A robust multilevel simultaneous eigenvalue solver

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat these difficulties appropriately usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a second technique, backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
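
    A small NumPy sketch of the basic Rayleigh-Ritz projection that the paper's ML projection technique generalizes; cluster completion, separation, and backrotations are not shown, and the toy matrix and subspace are arbitrary.

    ```python
    import numpy as np

    def rayleigh_ritz(A, V):
        """Rayleigh-Ritz projection: approximate a few eigenpairs of the symmetric
        matrix A from the subspace spanned by the columns of V."""
        Q, _ = np.linalg.qr(V)            # orthonormal basis for the subspace
        H = Q.T @ A @ Q                   # small projected matrix
        theta, S = np.linalg.eigh(H)      # Ritz values and coefficients
        return theta, Q @ S               # Ritz values and Ritz vectors

    # Toy usage: refine q random vectors for a small symmetric matrix
    rng = np.random.default_rng(0)
    n, q = 50, 3
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    theta, X = rayleigh_ritz(A, rng.standard_normal((n, q)))
    ```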

  20. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. This allows us to demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondences between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.

  1. Data mining in soft computing framework: a survey.

    PubMed

    Mitra, S; Pal, S K; Mitra, P

    2002-01-01

    The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally fuzzy sets are suitable for handling the issues related to understandability of patterns, incomplete/noisy data, mixed media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric, robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and the application of soft computing methodologies are indicated. An extensive bibliography is also included.

  2. SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations

    PubMed Central

    2013-01-01

    Background Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408
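
    A rough sketch, using networkx, of the expanded-tree idea: a missing species is attached at a random position within its clade by splitting a randomly chosen clade edge, and pairwise patristic distances are then collected into a matrix. The tree representation, the pendant branch length rule, and the example topology are hypothetical simplifications of the paper's C++ implementation.

    ```python
    import random
    import networkx as nx

    def expand_tree(tree, clade_edges, new_species, rng=random):
        """Insert `new_species` at a random location within its clade by splitting
        a randomly chosen clade edge (expanded-tree approach).
        tree        : networkx.Graph with 'length' edge attributes (an unrooted tree)
        clade_edges : list of (u, v) edges belonging to the species' clade
        """
        u, v = rng.choice(clade_edges)
        L = tree[u][v]["length"]
        split = rng.uniform(0.0, L)
        knot = f"node_{new_species}"
        tree.remove_edge(u, v)
        tree.add_edge(u, knot, length=split)
        tree.add_edge(knot, v, length=L - split)
        tree.add_edge(knot, new_species, length=rng.uniform(0.0, L))  # hypothetical pendant length
        return tree

    def distance_matrix(tree, taxa):
        """Pairwise patristic distances between the listed taxa."""
        d = dict(nx.all_pairs_dijkstra_path_length(tree, weight="length"))
        return [[d[a][b] for b in taxa] for a in taxa]

    # Toy tree ((A,B),C); insert missing species D somewhere inside the (A,B) clade
    T = nx.Graph()
    T.add_edge("r", "ab", length=1.0)
    T.add_edge("ab", "A", length=0.5)
    T.add_edge("ab", "B", length=0.5)
    T.add_edge("r", "C", length=1.5)
    expand_tree(T, clade_edges=[("ab", "A"), ("ab", "B")], new_species="D")
    print(distance_matrix(T, ["A", "B", "C", "D"]))
    ```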

  3. SUNPLIN: simulation with uncertainty for phylogenetic investigations.

    PubMed

    Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F

    2013-11-15

    Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.

  4. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
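
    A hedged NumPy sketch of the idea of a Gillespie-type kinetic Monte Carlo walk in particle-number space with an arbitrary inter-reaction time distribution; the classical algorithm is recovered by drawing exponential delays. The gamma delay and rate constant in the example are hypothetical illustrations of a non-exponential (incomplete-mixing-like) waiting time, not the distributions derived in the work above.

    ```python
    import numpy as np

    def generalized_gillespie(x0, stoich, rates, waiting_time, t_max, rng=None):
        """Kinetic Monte Carlo in particle-number space with an arbitrary
        inter-reaction time distribution (a CTRW flavour of the Gillespie
        algorithm).  `waiting_time(total_rate, rng)` draws the delay; the
        classical algorithm corresponds to rng.exponential(1 / total_rate)."""
        rng = np.random.default_rng(rng)
        x, t, history = np.array(x0, float), 0.0, []
        while t < t_max:
            a = rates(x)                       # reaction propensities
            a_tot = a.sum()
            if a_tot <= 0:
                break
            t += waiting_time(a_tot, rng)      # possibly non-exponential delay
            j = rng.choice(len(a), p=a / a_tot)
            x = x + stoich[j]
            history.append((t, x.copy()))
        return history

    # Bimolecular reaction A + B -> C with a gamma-distributed delay as a crude
    # stand-in for incomplete mixing; k is a hypothetical rate constant.
    k = 1e-3
    stoich = np.array([[-1, -1, +1]])
    rates = lambda x: np.array([k * x[0] * x[1]])
    gamma_delay = lambda a_tot, rng: rng.gamma(shape=0.5, scale=2.0 / a_tot)  # mean 1/a_tot
    hist = generalized_gillespie([500, 300, 0], stoich, rates, gamma_delay, t_max=50.0)
    ```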

  5. Analysis of nasopharyngeal carcinoma risk factors with Bayesian networks.

    PubMed

    Aussem, Alex; de Morais, Sérgio Rodrigues; Corbex, Marilys

    2012-01-01

    We propose a new graphical framework for extracting the relevant dietary, social and environmental risk factors that are associated with an increased risk of nasopharyngeal carcinoma (NPC) on a case-control epidemiologic study that consists of 1289 subjects and 150 risk factors. This framework builds on the use of Bayesian networks (BNs) for representing statistical dependencies between the random variables. We discuss a novel constraint-based procedure, called Hybrid Parents and Children (HPC), that builds recursively a local graph that includes all the relevant features statistically associated to the NPC, without having to find the whole BN first. The local graph is afterwards directed by the domain expert according to his knowledge. It provides a statistical profile of the recruited population, and meanwhile helps identify the risk factors associated to NPC. Extensive experiments on synthetic data sampled from known BNs show that the HPC outperforms state-of-the-art algorithms that appeared in the recent literature. From a biological perspective, the present study confirms that chemical products, pesticides and domestic fume intake from incomplete combustion of coal and wood are significantly associated with NPC risk. These results suggest that industrial workers are often exposed to noxious chemicals and poisonous substances that are used in the course of manufacturing. This study also supports previous findings that the consumption of a number of preserved food items, like house made proteins and sheep fat, are a major risk factor for NPC. BNs are valuable data mining tools for the analysis of epidemiologic data. They can explicitly combine both expert knowledge from the field and information inferred from the data. These techniques therefore merit consideration as valuable alternatives to traditional multivariate regression techniques in epidemiologic studies. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. For the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on GPU is proposed. The improved method is then applied in the inversion of noise-contaminated synthetic data to prove its adaptability in the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times that of a serial program using a 2.0 GHz Central Processing Unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
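
    A CPU-only SciPy sketch of a symmetric successive over-relaxation (SSOR) preconditioner applied inside conjugate gradients; the GPU parallelization and the incomplete Cholesky (ICCG) component of the paper's SSOR-ICCG scheme are not reproduced, and the toy Laplacian system merely stands in for the gravity-gradiometry normal equations.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, spsolve_triangular

    def ssor_preconditioner(A, omega=1.0):
        """Return the SSOR preconditioner M^{-1} as a LinearOperator for a sparse
        symmetric positive-definite matrix A:
        M = (1/(2-omega)) (D/omega + L) (D/omega)^{-1} (D/omega + U)."""
        A = A.tocsr()
        d = A.diagonal()
        D = sp.diags(d)
        L = sp.tril(A, k=-1, format="csr")
        U = sp.triu(A, k=1, format="csr")
        lower = (D / omega + L).tocsr()
        upper = (D / omega + U).tocsr()

        def apply(r):
            y = spsolve_triangular(lower, r, lower=True)
            y = (d / omega) * y
            z = spsolve_triangular(upper, y, lower=False)
            return (2.0 - omega) * z

        return LinearOperator(A.shape, matvec=apply)

    # Toy SPD system: a 2-D Laplacian
    n = 40
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T, format="csr")
    b = np.ones(A.shape[0])
    x, info = cg(A, b, M=ssor_preconditioner(A, omega=1.2))
    ```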

  7. Re-sprains during the first 3 months after initial ankle sprain are related to incomplete recovery: an observational study.

    PubMed

    van Middelkoop, Marienke; van Rijn, Rogier M; Verhaar, Jan A N; Koes, Bart W; Bierma-Zeinstra, Sita M A

    2012-01-01

    What are prognostic factors for incomplete recovery, instability, re-sprains and pain intensity 12 months after patients consult primary care practitioners for acute ankle sprains? Observational study. One hundred and two patients who consulted their general practitioner or an emergency department for an acute ankle sprain were included in the study. Possible prognostic factors were assessed at baseline and at 3 months follow-up. Outcome measures assessed at 12 months follow-up were self-reported recovery, instability, re-sprains and pain intensity. At 3 months follow-up, 65% of the participants reported instability and 24% reported one or more re-sprains. At 12 months follow-up, 55% still reported instability and more than 50% regarded themselves not completely recovered. None of the factors measured at baseline could predict the outcome at 12 months follow-up. Additionally, prognostic factors from the physical examination of the non-recovered participants at 3 months could not be identified. However, among the non-recovered participants at 3 months follow-up, re-sprains and self-reported pain at rest at 3 months were related to incomplete recovery at 12 months. A physical examination at 3 months follow-up for the non-recovered ankle sprain patient seems to have no additional value for predicting outcome at 12 months. However, for the non-recovered patients at 3 months follow-up, self-reported pain at rest and re-sprains during the first 3 months of follow-up seem to have a prognostic value for recovery at 12 months. Copyright © 2012 Australian Physiotherapy Association. All rights reserved.

  8. Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data

    PubMed Central

    Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping

    2013-01-01

    Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272

  9. Bi-level multi-source learning for heterogeneous block-wise missing data.

    PubMed

    Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping

    2014-11-15

    Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.

  10. SU-F-BRCD-09: Total Variation (TV) Based Fast Convergent Iterative CBCT Reconstruction with GPU Acceleration.

    PubMed

    Xu, Q; Yang, D; Tan, J; Anastasio, M

    2012-06-01

    To improve image quality and reduce imaging dose in CBCT for radiation therapy applications and to realize near real-time image reconstruction based on use of a fast-converging iterative algorithm and acceleration by multiple GPUs. An iterative image reconstruction algorithm that sought to minimize a weighted least squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-converging iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
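
    A simplified NumPy sketch of the kind of objective described above: a weighted least-squares data term plus a smoothed total-variation penalty, minimized here by plain gradient descent. The CBCT forward projector, the GPU acceleration, and the paper's actual optimizer are not reproduced; the identity operator and all parameter values in the example are hypothetical.

    ```python
    import numpy as np

    def tv_wls_reconstruct(A, At, b, w, shape, lam=0.1, step=0.2, n_iter=300, eps=1e-3):
        """Minimise 0.5 * || sqrt(w) (A x - b) ||^2 + lam * TV_eps(x) by plain
        gradient descent, with a smoothed total-variation term.  A and At are
        callables for the forward projector and its adjoint (left abstract here)."""
        x = np.zeros(shape)
        for _ in range(n_iter):
            g_data = At(w * (A(x) - b))             # weighted least-squares gradient
            dx = np.gradient(x, axis=0)             # smoothed-TV gradient via
            dy = np.gradient(x, axis=1)             # central differences
            mag = np.sqrt(dx**2 + dy**2 + eps)
            px, py = dx / mag, dy / mag
            div = np.gradient(px, axis=0) + np.gradient(py, axis=1)  # approx. divergence
            x -= step * (g_data - lam * div)
        return x

    # Toy usage: treat A as the identity (pure TV denoising of a noisy 2-D phantom)
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
    noisy = truth + 0.2 * rng.standard_normal(truth.shape)
    recon = tv_wls_reconstruct(lambda x: x, lambda r: r, noisy, w=1.0, shape=truth.shape)
    ```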

  11. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

    Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the systems such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.


  12. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
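
    A generic SciPy sketch of the Newton-Iterative idea: each Newton step solves J dx = -F with GMRES preconditioned by an incomplete LU factorization of the sparse Jacobian. The transonic small-disturbance discretization and the OSOmin solver are not reproduced; the toy nonlinear system and the ILU parameters are hypothetical.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    def newton_gmres_ilu(F, J, x0, n_newton=20, tol=1e-8):
        """Inexact Newton iteration: each linear Newton system J(x) dx = -F(x) is
        solved with GMRES, preconditioned by an ILU factorisation of the sparse
        Jacobian."""
        x = x0.copy()
        for _ in range(n_newton):
            f = F(x)
            if np.linalg.norm(f) < tol:
                break
            Jx = sp.csc_matrix(J(x))
            ilu = spilu(Jx, drop_tol=1e-4, fill_factor=10)   # incomplete LU preconditioner
            M = LinearOperator(Jx.shape, matvec=ilu.solve)
            dx, info = gmres(Jx, -f, M=M)
            x += dx
        return x

    # Toy usage: a small nonlinear boundary-value problem u'' = u**3 - 1 on a grid
    n = 100
    h = 1.0 / (n + 1)
    Lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    F = lambda u: Lap @ u - u**3 + 1.0
    J = lambda u: Lap - sp.diags(3.0 * u**2)
    u = newton_gmres_ilu(F, J, np.zeros(n))
    ```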

  13. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
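
    A short NumPy sketch of the factored representation underlying such spectral compression: the image cube is unfolded to (pixels × channels), a truncated PCA (via SVD) keeps only the most significant factors, and analysis or reconstruction proceeds on the compact factors. The block and spatial-compression aspects of the patent are not shown, and the toy data are synthetic.

    ```python
    import numpy as np

    def spectral_compress(cube, n_factors):
        """Factor a (rows, cols, channels) multivariate image into spatial scores
        and spectral loadings via truncated PCA, keeping only the most
        significant factors."""
        r, c, p = cube.shape
        X = cube.reshape(r * c, p)
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        scores = U[:, :n_factors] * s[:n_factors]      # (pixels, k) spatial factors
        loadings = Vt[:n_factors]                      # (k, channels) spectral factors
        return scores.reshape(r, c, n_factors), loadings, mean

    def reconstruct(scores, loadings, mean):
        r, c, k = scores.shape
        return (scores.reshape(r * c, k) @ loadings + mean).reshape(r, c, loadings.shape[1])

    # Toy usage: a synthetic low-rank "spectrum image" compressed to 3 factors
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 3)) @ rng.random((3, 128))       # rank-3 spectra
    cube += 0.01 * rng.standard_normal(cube.shape)              # plus a little noise
    scores, loadings, mean = spectral_compress(cube, n_factors=3)
    ```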

  14. Quantum factorization of 143 on a dipolar-coupling nuclear magnetic resonance system.

    PubMed

    Xu, Nanyang; Zhu, Jing; Lu, Dawei; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng

    2012-03-30

    Quantum algorithms could be much faster than classical ones in solving the factoring problem. Adiabatic quantum computation offers an alternative approach to Shor's algorithm for this problem. Here we report an improved adiabatic factoring algorithm and its experimental realization to factor the number 143 on a liquid-crystal NMR quantum processor with dipole-dipole couplings. We believe this to be the largest number factored in quantum-computation realizations, which shows the practical importance of adiabatic quantum algorithms.

  15. A first attempt at few coils and low-coverage resistive wall mode stabilization of EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Olofsson, K. Erik J.; Brunsell, Per R.; Drake, James R.; Frassinetti, Lorenzo

    2012-09-01

    The reversed-field pinch features resistive-shell-type instabilities at any (vanishing and finite) plasma pressure. An attempt to stabilize the full spectrum of these modes using both (i) incomplete coverage and (ii) few coils is presented. Two empirically derived model-based control algorithms are compared with a baseline guaranteed suboptimal intelligent-shell-type (IS) feedback. Experimental stabilization could not be achieved for the coil array subset sizes considered by this first study. But the model-based controllers appear to significantly outperform the decentralized IS method.

  16. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  17. Internal Structure of Mini-CEX Scores for Internal Medicine Residents: Factor Analysis and Generalizability

    ERIC Educational Resources Information Center

    Cook, David A.; Beckman, Thomas J.; Mandrekar, Jayawant N.; Pankratz, V. Shane

    2010-01-01

    The mini-CEX is widely used to rate directly observed resident-patient encounters. Although several studies have explored the reliability of mini-CEX scores, the dimensionality of mini-CEX scores is incompletely understood. Objective: Explore the dimensionality of mini-CEX scores through factor analysis and generalizability analysis. Design:…

  18. An Examination of Factors and Attitudes that Influence Reporting Fraudulent Claims in an Academic Environment

    ERIC Educational Resources Information Center

    Carmichael, Anna M.; Krueger, Lacy E.

    2014-01-01

    The study examined potential factors and attitudes associated with providing fraudulent academic claims. A total of 319 students completed an online survey which involved reading a vignette about an incomplete assignment. Participants reported whether they would contact their instructor to gain an extension, expressed their confidence in the…

  19. Predictors of incompletion of immunization among children residing in the slums of Kathmandu valley, Nepal: a case-control study.

    PubMed

    Shrestha, Sumina; Shrestha, Monika; Wagle, Rajendra Raj; Bhandari, Gita

    2016-09-13

    Immunization is one of the most effective health interventions averting an estimated 2-3 million deaths every year. In Nepal, as in most low-income countries, infants are immunized with standard WHO recommended vaccines. However, 16.4 % of children did not receive complete immunization by 12 months of age in Nepal in 2011. Studies from different parts of the world showed that incomplete immunization is even higher in slums. The objective of this study was to identify the predictors of incompletion of immunization among children aged 12-23 months living in the slums of Kathmandu Valley, Nepal. The unmatched case-control study was conducted in 22 randomly selected slums of Kathmandu Valley. The sampling frame was first identified by complete enumeration of entire households of the study area from which 59 incompletely immunized children as cases and 177 completely immunized children as controls were chosen randomly in 1:3 ratio. Data were collected from the primary caretakers of the children. Backward logistic regression with 95 % confidence interval and adjusted odds ratio (AOR) were applied to assess the factors independently associated with incomplete immunization. Twenty-six percent of the children were incompletely vaccinated. The coverage of BCG vaccine was 95.0 % while it was 80.5 % for measles vaccine. The significant predictors of incomplete immunization were the home delivery of a child, the family residing on rent, a primary caretaker with poor knowledge about the schedule of vaccination and negative perception towards vaccinating a sick child, conflicting priorities, and development of abscess following immunization. Reduction of abscess formation rate can be a potential way to improve immunization rates. Community health volunteers should increase their follow-up on children born at home and those living in rent. Health institutions and volunteers should be influential in creating awareness about immunization, its schedule, and post-vaccination side effects.

  20. Adolescent girls define menstruation: a multiethnic exploratory study.

    PubMed

    Orringer, Kelly; Gahagan, Sheila

    2010-09-01

    Incomplete understanding of menstruation may place girls at risk for sexually transmitted diseases (STDs) and unintended pregnancy. Prior research suggests that European American and African American girls incompletely understand menstruation, yet little is known about menstrual knowledge in other ethnic groups. Using audiotaped focus group and individual interviews with 73 African American, Mexican American, Arab American, and European American girls, we assessed girls' menstrual understanding. Responses included reproduction, growing up, cleansing, messages about femininity, and not knowing. We found ethnic differences in the prominence of these themes. We learned that social and cultural factors play an important role in transmission of menstrual knowledge.

  1. Segmentation of financial seals and its implementation on a DSP-based system

    NASA Astrophysics Data System (ADS)

    He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao

    2009-11-01

    Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consisted of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image was extracted from a financial instrument image by a color transform. Adaptive morphological operations were used to highlight details of the extracted grayscale seal image and to smooth the background. After median filtering for noise elimination, the filtered seal image was binarized by Otsu's method. The algorithm was developed in the CCS DSP development environment with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, white balance calibration and coarse positioning of the seal imprint were performed by the TMS320DM642 while controlling image acquisition. The TMS320DM642 IMGLIB was used to improve efficiency. Experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint had poor quality.
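
    A hedged OpenCV sketch of the three-stage pipeline summarized above (grayscale seal extraction by a colour transform, denoising, Otsu binarization) for a red-ink seal; the specific colour transform, the morphological cleanup, and the DSP/IMGLIB implementation details are assumptions rather than the paper's exact procedure, and the input file name is hypothetical.

    ```python
    import cv2
    import numpy as np

    def segment_seal(bgr):
        """Extract a binary seal imprint from a financial-instrument image:
        colour transform -> median denoising -> Otsu binarisation -> morphology.
        The red-channel emphasis below is an illustrative choice for red-ink
        seals, not the paper's exact colour transform."""
        b, g, r = cv2.split(bgr.astype(np.float32))
        seal_gray = np.clip(r - 0.5 * (g + b), 0, 255).astype(np.uint8)  # emphasise red strokes

        seal_gray = cv2.medianBlur(seal_gray, 3)                         # noise elimination
        _, binary = cv2.threshold(seal_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu binarisation

        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)       # bridge broken strokes
        return binary

    imprint = segment_seal(cv2.imread("instrument.png"))                 # hypothetical input file
    ```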

  2. Platelet function analysis with two different doses of aspirin.

    PubMed

    Aydinalp, Alp; Atar, Ilyas; Altin, Cihan; Gülmez, Oykü; Atar, Asli; Açikel, Sadik; Bozbaş, Hüseyin; Yildirir, Aylin; Müderrisoğlu, Haldun

    2010-06-01

    We aimed to compare the level of platelet inhibition using the platelet function analyzer (PFA)-100 in patients receiving low and medium doses of aspirin. On a prospective basis, 159 cardiology outpatients (83 men, 76 women; mean age 60.9 ± 9.9 years) taking 100 mg/day or 300 mg/day aspirin at least for the previous 15 days were included. Of these, 79 patients (50%) were on 100 mg and 80 patients (50.3%) were on 300 mg aspirin treatment. Blood samples were collected between 09:30 and 11:00 hours in the morning. Platelet reactivity was measured with the PFA-100 system. Incomplete platelet inhibition was defined as a normal collagen/epinephrine closure time (< 165 sec) despite aspirin treatment. Baseline clinical and laboratory characteristics of the patient groups taking 100 mg or 300 mg aspirin were similar. The overall prevalence of incomplete platelet inhibition was 22% (35 patients). The prevalence of incomplete platelet inhibition was significantly higher in patients treated with 100 mg of aspirin (n = 24/79, 30.4%) compared with those treated with 300 mg of aspirin (n = 11/80, 13.8%) (p = 0.013). In univariate analysis, female sex (p = 0.002) and aspirin dose (p = 0.013) were significantly correlated with incomplete platelet inhibition. In multivariate analysis, female sex (OR: 0.99; 95% CI 0.9913-0.9994; p = 0.025) and aspirin dose (OR: 3.38; 95% CI 1.4774-7.7469; p = 0.003) were found as independent factors predictive of incomplete platelet inhibition. Our findings suggest that treatment with higher doses of aspirin can reduce incomplete platelet inhibition especially in female patients.

  3. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur Complement is derived as M(exp -1) = C - B(exp *)A(exp -1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M(exp -1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of Operational Space Mass Matrix lambda and its inverse lambda(exp -1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  4. Factoring symmetric indefinite matrices on high-performance architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    The Bunch-Kaufman algorithm is the method of choice for factoring symmetric indefinite matrices in many applications. However, the Bunch-Kaufman algorithm does not take advantage of high-performance architectures such as the Cray Y-MP. Three new algorithms, based on Bunch-Kaufman factorization, that take advantage of such architectures are described. Results from an implementation of the third algorithm are presented.
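
    A brief SciPy illustration of the kind of symmetric indefinite LDL^T factorization discussed here: scipy.linalg.ldl wraps LAPACK's symmetric indefinite routine, which uses Bunch-Kaufman-style diagonal pivoting with 1x1 and 2x2 blocks in D. The example matrix is arbitrary, and the high-performance variants proposed in the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import ldl

    # Symmetric indefinite example (eigenvalues of mixed sign)
    A = np.array([[ 1.0,  2.0,  0.0],
                  [ 2.0,  1.0, -1.0],
                  [ 0.0, -1.0,  3.0]])

    L, D, perm = ldl(A, lower=True)     # LDL^T with symmetric (Bunch-Kaufman style) pivoting
    print(np.allclose(L @ D @ L.T, A))  # True: A = L D L^T, with 1x1/2x2 blocks in D
    ```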

  5. Search for Expectancy-Inconsistent Information Reduces Uncertainty Better: The Role of Cognitive Capacity

    PubMed Central

    Strojny, Paweł; Kossowska, Małgorzata; Strojny, Agnieszka

    2016-01-01

    Motivation and cognitive capacity are key factors in people’s everyday struggle with uncertainty. However, the exact nature of their interplay in various contexts still needs to be revealed. The presented paper reports on two experimental studies which aimed to examine the joint consequences of motivational and cognitive factors for preferences regarding incomplete information expansion. In Study 1 we demonstrate the interactional effect of motivation and cognitive capacity on information preference. High need for closure resulted in a stronger relative preference for expectancy-inconsistent information among non-depleted individuals, but the opposite among cognitively depleted ones. This effect was explained by the different informative value of questions in comparison to affirmative sentences and the potential possibility of assimilation of new information if it contradicts prior knowledge. In Study 2 we further investigated the obtained effect, showing that not only questions but also other kinds of incomplete information are subject to the same dependency. Our results support the expectation that, in face of incomplete information, motivation toward closure may be fulfilled efficiently by focusing on expectancy-inconsistent pieces of data. We discuss the obtained effect in the context of previous assumptions that high need for closure results in a simple processing style, advocating a more complex approach based on the character of the provided information. PMID:27047422

  6. Cavitation of deep lacunar infarcts in patients with first-ever lacunar stroke: a 2-year follow-up study with MR.

    PubMed

    Loos, Caroline M J; Staals, Julie; Wardlaw, Joanna M; van Oostenbrugge, Robert J

    2012-08-01

    Studies in patients with lacunar stroke often assess the number of lacunes. However, data on how many symptomatic lacunar infarcts cavitate into a lacune are limited. We assessed the evolution of symptomatic lacunar infarcts over 2-year follow-up. In 82 patients with first-ever lacunar stroke with a lacunar infarct in the deep brain regions (excluding the centrum semiovale), we performed a brain MR at presentation and 2 years later. We classified cavitation of lacunar infarcts at baseline and on follow-up MR as absent, incomplete, or complete. We recorded time to imaging, infarct size, and vascular risk factors. On baseline MR, 38 (46%) index infarcts showed complete or incomplete cavitation. Median time to imaging was 8 (0-73) days in noncavitated and 63 (1-184) days in cavitated lesions (P<0.05). On follow-up imaging, 94% of the lacunar infarcts were completely or incompletely cavitated, most had reduced in diameter, and 5 (6%) had disappeared. Vascular risk factors were not associated with cavitation. Cavitation and lesion shrinkage were seen in almost all symptomatic lacunar infarcts in the deep brain regions over 2-year follow-up. Counting lacunes in these specific regions at a random moment might slightly, however not substantially, underestimate the burden of deep lacunar infarction.

  7. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. In addition, the simulation results demonstrate a good match with our analytical expressions.
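
    A single-node NumPy sketch of a recursive least squares update with a variable forgetting factor driven by the a posteriori error; the exponential adaptation rule here is a simple hypothetical heuristic, and the diffusion (network combination) step and the paper's mean/mean-square analysis are not reproduced.

    ```python
    import numpy as np

    def vff_rls(X, d, lam_min=0.95, lam_max=0.9999, gamma=10.0, delta=1e2):
        """Recursive least squares with a variable forgetting factor: large
        a posteriori errors shrink the forgetting factor (faster tracking),
        small errors push it towards lam_max (lower misadjustment)."""
        n, m = X.shape
        w = np.zeros(m)
        P = delta * np.eye(m)
        lam = lam_max
        for i in range(n):
            x = X[i]
            k = P @ x / (lam + x @ P @ x)          # gain vector
            e_prio = d[i] - w @ x                  # a priori error
            w = w + k * e_prio
            e_post = d[i] - w @ x                  # a posteriori error
            lam = lam_min + (lam_max - lam_min) * np.exp(-gamma * e_post**2)
            P = (P - np.outer(k, x @ P)) / lam
        return w

    # Toy usage: identify a 4-tap system from noisy observations
    rng = np.random.default_rng(0)
    w_true = np.array([0.5, -0.3, 0.8, 0.1])
    X = rng.standard_normal((2000, 4))
    d = X @ w_true + 0.05 * rng.standard_normal(2000)
    print(vff_rls(X, d))
    ```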

  8. Sampling-based real-time motion planning under state uncertainty for autonomous micro-aerial vehicles in GPS-denied environments.

    PubMed

    Li, Dachuan; Li, Qing; Cheng, Nong; Song, Jingyan

    2014-11-18

    This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address the above motion planning problem, we propose an extension of the closed-loop rapid belief trees, the closed-loop random belief trees (CL-RBT), which incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using the closed-loop prediction, while selecting candidate paths with low uncertainty using efficient covariance update and propagation. The algorithm can operate in real-time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints.

  9. Sampling-Based Real-Time Motion Planning under State Uncertainty for Autonomous Micro-Aerial Vehicles in GPS-Denied Environments

    PubMed Central

    Li, Dachuan; Li, Qing; Cheng, Nong; Song, Jingyan

    2014-01-01

    This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address the above motion planning problem, we propose an extension of the closed-loop rapid belief trees, the closed-loop random belief trees (CL-RBT), which incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using the closed-loop prediction, while selecting candidate paths with low uncertainty using efficient covariance update and propagation. The algorithm can operate in real-time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints. PMID:25412217

  10. Computing the multifractal spectrum from time series: an algorithmic approach.

    PubMed

    Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E

    2009-12-01

    We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.

  11. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree.

    PubMed

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen-host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and their target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify them experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied our proposed method to validated human datasets. We achieved nearly 93.9% classification accuracy, which could be related to the selection of the best rules.

  12. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree

    PubMed Central

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen–host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and their target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify them experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied our proposed method to validated human datasets. We achieved nearly 93.9% classification accuracy, which could be related to the selection of the best rules. PMID:26649272

  13. BLGAN: Bayesian learning and genetic algorithm for supporting negotiation with incomplete information.

    PubMed

    Sim, Kwang Mong; Guo, Yuanyuan; Shi, Benyun

    2009-02-01

    Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs to show that an agent's optimal strategy can be computed using its opponent's reserve price (RP) and deadline. The impetus of this work is using the synergy of Bayesian learning (BL) and genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline and 2) GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region in the space of all possible proposals. SP is dynamically defined as a region around an agent's proposal P at each negotiation round. P is generated using the agent's optimal strategy determined using its estimations of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals that are closer to the proposal generated by the optimal strategy. Using GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully, and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that only adopt GA to generate their proposals, 2) higher utilities than agents that adopt BL to learn only RP, and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.

  14. An efficient link prediction index for complex military organization

    NASA Astrophysics Data System (ADS)

    Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing

    2017-03-01

    Quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. If the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links, missing-link prediction methods and spurious-link identification algorithms can be applied. Traditional link prediction methods usually work well on homogeneous networks, but few do on heterogeneous ones, and a military network is a typical heterogeneous network with different types of nodes and edges. In this paper, we propose a combined link prediction index that considers both node-type effects and the nodes' structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on a real military network data set. We also investigate the algorithms' robustness in noisy environments and find that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintains the best performance under small noise. Since real military network intelligence must first be carefully checked because of its significance, with link prediction methods then adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, because the FINC-E model, used here to describe complex military organizations, also suits many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for tasks such as detecting underground relationships between terrorists and predicting potential business markets for decision-makers.

  15. NegGOA: negative GO annotations selection using ontology structure.

    PubMed

    Fu, Guangyuan; Wang, Jun; Yang, Bo; Yu, Guoxian

    2016-10-01

    Predicting the biological functions of proteins is one of the key challenges in the post-genomic era. Computational models have demonstrated the utility of applying machine learning methods to predict protein function. Most prediction methods explicitly require a set of negative examples: proteins that are known not to carry out a particular function. However, Gene Ontology (GO) almost always only provides the knowledge that proteins carry out a particular function, and functional annotations of proteins are incomplete. GO structurally organizes tens of thousands of GO terms, and a protein is annotated with several (or dozens) of these terms. For these reasons, the negative examples of a protein can greatly help in distinguishing its true positive examples from such a large candidate GO space. In this paper, we present a novel approach (called NegGOA) to select negative examples. Specifically, NegGOA takes advantage of the ontology structure, the available annotations, and the potential additional annotations of a protein to choose its negative examples. We compare NegGOA with other negative example selection algorithms and find that NegGOA produces far fewer false negatives. We incorporate the selected negative examples into an efficient function prediction model to predict the functions of proteins in Yeast, Human, Mouse and Fly. NegGOA also demonstrates improved accuracy over these algorithms across various evaluation metrics. In addition, NegGOA is less affected by incomplete annotations of proteins than these methods. The Matlab and R codes are available at https://sites.google.com/site/guoxian85/neggoa.

  16. Prevalence and Associated Risk Factors of Anemia in Children and Adolescents with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Lin, Jin-Ding; Lin, Pei-Ying; Lin, Lan-Ping; Hsu, Shang-Wei; Loh, Ching-Hui; Yen, Chia-Feng; Fang, Wen-Hui; Chien, Wu-Chien; Tang, Chi-Chieh; Wu, Chia-Ling

    2010-01-01

    Anemia is known to be a significant public health problem in many countries. Most of the available information is incomplete or limited to special groups such as people with intellectual disability. The present study aims to provide the information of anemia prevalence and associated risk factors of children and adolescents with intellectual…

  17. Endoscopic papillectomy: risk factors for incomplete resection and recurrence during long-term follow-up.

    PubMed

    Ridtitid, Wiriyaporn; Tan, Damien; Schmidt, Suzette E; Fogel, Evan L; McHenry, Lee; Watkins, James L; Lehman, Glen A; Sherman, Stuart; Coté, Gregory A

    2014-02-01

    Endoscopic papillectomy is increasingly used as an alternative to surgery for ampullary adenomas and other noninvasive ampullary lesions. To measure short-term safety and efficacy of endoscopic papillectomy, define patient and lesion characteristics associated with incomplete endoscopic resection, and measure adenoma recurrence rates during long-term follow-up. Retrospective cohort study. Tertiary-care academic medical center. All patients who underwent endoscopic papillectomy for ampullary lesions between July 1995 and June 2012. Endoscopic papillectomy. Patient and lesion characteristics associated with incomplete endoscopic resection and ampullary adenoma-free survival analysis. We identified 182 patients who underwent endoscopic papillectomy, 134 (73.6%) having complete resection. Short-term adverse events occurred in 34 (18.7%). Risk factors for incomplete resection were jaundice at presentation (odds ratio [OR] 0.21; 95% confidence interval [CI] 0.07-0.69; P = .009), occult adenocarcinoma (OR 0.06; 95% CI, 0.01-0.36; P = .002), and intraductal involvement (OR 0.29; 95% CI, 0.11-0.75; P = .011). The en bloc resection technique was strongly associated with a higher rate of complete resection (OR 4.05; 95% CI, 1.71-9.59; P = .001). Among patients with ampullary adenoma who had complete resection (n = 107), 16 patients (15%) developed recurrence up to 65 months after resection. Retrospective analysis. Jaundice at presentation, occult adenocarcinoma in the resected specimen, and intraductal involvement are associated with a lower rate of complete resection, whereas en bloc papillectomy increases the odds of complete endoscopic resection. Despite complete resection, recurrence was observed up to 5 years after papillectomy, confirming the need for long-term surveillance.

  18. Incomplete adherence among treatment-experienced adults on antiretroviral therapy in Tanzania, Uganda and Zambia

    PubMed Central

    Denison, Julie A.; Koole, Olivier; Tsui, Sharon; Menten, Joris; Torpey, Kwasi; van Praag, Eric; Mukadi, Ya Diul; Colebunders, Robert; Auld, Andrew F.; Agolory, Simon; Kaplan, Jonathan E.; Mulenga, Modest; Kwesigabo, Gideon P.; Wabwire-Mangen, Fred; Bangsberg, David R.

    2016-01-01

    Objectives To characterize antiretroviral therapy (ART) adherence across different programmes and examine the relationship between individual and programme characteristics and incomplete adherence among ART clients in sub-Saharan Africa. Design A cross-sectional study. Methods Systematically selected ART clients (≥18 years; on ART ≥6 months) attending 18 facilities in three countries (250 clients/facility) were interviewed. Client self-reports (3-day, 30-day, Case Index ≥48 consecutive hours of missed ART), healthcare provider estimates and the pharmacy medication possession ratio (MPR) were used to estimate ART adherence. Participants from two facilities per country underwent HIV RNA testing. Optimal adherence measures were selected on the basis of degree of association with concurrent HIV RNA dichotomized at less than or greater/equal to 1000 copies/ml. Multivariate regression analysis, adjusted for site-level clustering, assessed associations between incomplete adherence and individual and programme factors. Results A total of 4489 participants were included, of whom 1498 underwent HIV RNA testing. Nonadherence ranged from 3.2% missing at least 48 consecutive hours to 40.1% having an MPR of less than 90%. The percentage with HIV RNA at least 1000 copies/ml ranged from 7.2 to 17.2% across study sites (mean = 9.9%). Having at least 48 consecutive hours of missed ART was the adherence measure most strongly related to virologic failure. Factors significantly related to incomplete adherence included visiting a traditional healer, screening positive for alcohol abuse, experiencing more HIV symptoms, having an ART regimen without nevirapine and greater levels of internalized stigma. Conclusion Results support more in-depth investigations of the role of traditional healers, and the development of interventions to address alcohol abuse and internalized stigma among treatment-experienced adult ART patients. PMID:25686684

  19. Calibration with confidence: a principled method for panel assessment.

    PubMed

    MacKay, R S; Kenna, R; Low, R J; Parker, S

    2017-02-01

    Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, 'true' values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options.
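
    A simplified version of the calibration idea can be written as a confidence-weighted least-squares fit of an additive model, score = object value + assessor bias, over the assessor-object graph. This is a hedged sketch of that idea only; it is not the authors' exact algorithm and omits their reliability estimates.

    ```python
    import numpy as np

    def calibrate(scores, weights):
        """Weighted least-squares calibration under a simplified additive model.

        scores[i, j]  : score given by assessor i to object j (np.nan if not assessed)
        weights[i, j] : declared confidence of that assessment
        Model: score_ij ~ value_j + bias_i, weighted by confidence.
        """
        n_a, n_o = scores.shape
        rows, y, w = [], [], []
        for i in range(n_a):
            for j in range(n_o):
                if not np.isnan(scores[i, j]):
                    row = np.zeros(n_o + n_a)          # unknowns: [object values, assessor biases]
                    row[j] = 1.0
                    row[n_o + i] = 1.0
                    rows.append(row); y.append(scores[i, j]); w.append(weights[i, j])
        A, y, w = np.array(rows), np.array(y), np.sqrt(np.array(w, dtype=float))
        gauge = np.concatenate([np.zeros(n_o), np.ones(n_a)])   # fix the offset: biases sum to zero
        A = np.vstack([A * w[:, None], gauge])
        y = np.concatenate([y * w, [0.0]])
        sol, *_ = np.linalg.lstsq(A, y, rcond=None)
        return sol[:n_o], sol[n_o:]

    # three assessors, four objects, incomplete assessments with confidences 1-3
    nan = np.nan
    scores  = np.array([[7.0, 6.0, nan, 4.0],
                        [8.0, nan, 5.0, 5.0],
                        [nan, 5.0, 3.0, nan]])
    weights = np.array([[3.0, 2.0, 0.0, 1.0],
                        [2.0, 0.0, 3.0, 2.0],
                        [0.0, 3.0, 2.0, 0.0]])
    values, biases = calibrate(scores, weights)
    print(np.round(values, 2), np.round(biases, 2))
    ```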

  20. Calibration with confidence: a principled method for panel assessment

    PubMed Central

    MacKay, R. S.; Low, R. J.; Parker, S.

    2017-01-01

    Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, ‘true’ values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options. PMID:28386432

  1. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    PubMed Central

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR) are proposed to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data in this study, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443

  2. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data.

    PubMed

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR) are proposed to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data in this study, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size.

  3. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrodinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrodinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.

  4. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    PubMed

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2-pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2-pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques and an iterative conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.

  5. Characterizing the Habitable Zone Planets of Kepler Stars

    NASA Astrophysics Data System (ADS)

    Fischer, Debra

    Planet Hunters (PH) is a well-established and successful web interface that allows citizen scientists to search for transiting planets in the NASA Kepler public archive data. Over the past 3 years, our users have made more than 20 million light curve classifications. We now have more than 300,000 users around the world. However, more than half of the Kepler data has not yet been displayed to our volunteers. In June 2014 we are launching Planet Hunters v2.0. The backend of the site has been completely redesigned. The new website is more intuitive and faster; we have improved the real-time weighting algorithm that assigns transit scores for faster and more accurate extraction of the transit events from the database. With Planet Hunters v2.0, we expect that assessments will be ten times faster, so that we have the opportunity to complete the classifications for the backlog of Kepler light curves in the next three years. There are three goals for this project. First, we will data-mine the PH classifications to search for long period planets with fewer than 5 transit events. We have demonstrated that our volunteers are efficient at detecting planets with long periods and radii greater than a few Earth radii. This region of parameter space is optimal for characterizing larger planets orbiting close to the habitable zone. To build upon the citizen science efforts, we will model the light curves, search for evidence of false positives, and contribute observations of stellar spectra to refine both the stellar and orbital parameters. Second, we will carry out a careful analysis of the fraction of transits that are missed (a function of planet radius and orbital period) to derive observational incompleteness factors. The incompleteness factors will be combined with geometrical detection factors to assess the planet occurrence rate for wide separations. This is a unique scientific contribution: current studies of planet occurrence rates are either restricted to orbital periods shorter than 100 days or use extrapolation to estimate planet occurrence rates beyond 100 days. The new detections of transit candidates at wider separations and the incompleteness analysis will be used to carry out an analysis of the architecture of exoplanetary systems from 1-5 AU. We are synthesizing a statistical description with information from short-period Kepler transits, the longer period Kepler transit candidates from this proposal, a completeness analysis of radial velocity data, and statistical information from microlensing. While our architecture analysis will only sketch out the bare bones of planetary systems (massive or large planets), this is still a novel analysis that may point to the location of rocky planets if packed planetary systems prevail. Finally, we will expand our guest scientist program for serendipitous discoveries. We have already partnered with scientists who are searching for cataclysmic variables, heartbeat stars, and exomoons. Our undergrad students have already carried out summer research as guest scientists to characterize inflated jupiters, search for Trojan planets, and to search for microlensing events.

  6. Sensitivity and specificity of the 'knee-up test' for estimation of the American Spinal Injury Association Impairment Scale in patients with acute motor incomplete cervical spinal cord injury.

    PubMed

    Yugué, Itaru; Okada, Seiji; Maeda, Takeshi; Ueta, Takayoshi; Shiba, Keiichiro

    2018-04-01

    A retrospective study. Precise classification of the neurological state of patients with acute cervical spinal cord injury (CSCI) can be challenging. This study proposed a useful and simple clinical method to help classify patients with incomplete CSCI. Spinal Injuries Centre, Japan. The sensitivity and specificity of the 'knee-up test' were evaluated in patients with acute CSCI classified as American Spinal Injury Association Impairment Scale (AIS) C or D. The result is positive if the patient can lift the knee in one or both legs to an upright position, whereas the result is negative if the patient is unable to lift the knee in either leg to an upright position. The AIS of these patients was classified according to a strict computerised algorithm designed by Walden et al., and the knee-up test was tested by non-expert examiners. Among the 200 patients, 95 and 105 were classified as AIS C and AIS D, respectively. Overall, 126 and 74 patients demonstrated positive and negative results, respectively, when evaluated using the knee-up test. A total of 104 patients with positive results and 73 patients with negative results were classified as AIS D and AIS C, respectively. The sensitivity, specificity, positive predictive and negative predictive values of this test for all patients were 99.1%, 76.8%, 82.5% and 98.7%, respectively. The knee-up test may allow easy and highly accurate estimation, without the need for special skills, of AIS classification for patients with incomplete CSCI.
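
    The reported operating characteristics follow, to within rounding, from the 2x2 counts implied by the abstract when a positive knee-up test is treated as a prediction of AIS D: 104 true positives, 22 false positives, 73 true negatives and 1 false negative. A short check:

    ```python
    # counts implied by the abstract: positive test predicts AIS D
    tp, fp, tn, fn = 104, 22, 73, 1

    sensitivity = tp / (tp + fn)   # abstract reports 99.1%
    specificity = tn / (tn + fp)   # abstract reports 76.8%
    ppv = tp / (tp + fp)           # abstract reports 82.5%
    npv = tn / (tn + fn)           # abstract reports 98.7%
    print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
    ```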

  7. MULTI-SOURCE FEATURE LEARNING FOR JOINT ANALYSIS OF INCOMPLETE MULTIPLE HETEROGENEOUS NEUROIMAGING DATA

    PubMed Central

    Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping

    2012-01-01

    Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC), have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655

  8. Planning for natural regeneration of hardwoods in the Coastal Plain

    Treesearch

    Robert L. Johnson

    1978-01-01

    Hardwood species reproduce through seeding and sprouting. Frequent selective cuttings and small, incomplete openings favor tolerant species; the opposite conditions favor intolerants. Factors to be considered in evaluating and predicting reproduction before harvest are listed.

  9. Chromatic dispersion concentrator applied to photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Sassi, G.

    1980-01-01

    The aim of this paper is to show how it is possible to realize a chromatic dispersion concentrator which collects the different monochromatic components of the solar spectrum separately in successive concentric rings in the focal zone. This comes about without an increase in the energetic losses compared to any other type of concentrator. If different photovoltaic elements with energy gaps equal to the photon energy falling on the focal zone are put in the latter, energy losses due to incomplete utilization of the solar spectrum and to incomplete utilization of the energy of a single photon can be drastically reduced. How the losses due to the voltage factor and the fill-factor of the photovoltaic elements of the system can be reduced compared to normal silicon cells is also demonstrated. The other contributions to losses in the conversion process are only mentioned, foreseeing their possible variation.

  10. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU{_}DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.

  11. Shor's quantum factoring algorithm on a photonic chip.

    PubMed

    Politi, Alberto; Matthews, Jonathan C F; O'Brien, Jeremy L

    2009-09-04

    Shor's quantum factoring algorithm finds the prime factors of a large number exponentially faster than any other known method, a task that lies at the heart of modern information security, particularly on the Internet. This algorithm requires a quantum computer, a device that harnesses the massive parallelism afforded by quantum superposition and entanglement of quantum bits (or qubits). We report the demonstration of a compiled version of Shor's algorithm on an integrated waveguide silica-on-silicon chip that guides four single-photon qubits through the computation to factor 15.
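
    Shor's algorithm reduces factoring N to finding the multiplicative order r of a base a modulo N; the quantum processor supplies that order, and classical post-processing recovers the factors. For N = 15 the reduction can be illustrated entirely classically (the brute-force loop below stands in for the quantum order-finding step):

    ```python
    from math import gcd

    def factor_via_order_finding(N, a):
        """Classical illustration of Shor's reduction: factor N from the order of a mod N."""
        if gcd(a, N) != 1:
            return gcd(a, N), N // gcd(a, N)       # lucky guess already shares a factor
        r = 1
        while pow(a, r, N) != 1:                   # brute-force order finding
            r += 1                                 # (the quantum part replaces this loop)
        if r % 2 == 1:
            return None                            # odd order: pick another base
        x = pow(a, r // 2, N)
        if x == N - 1:
            return None                            # trivial square root: pick another base
        return gcd(x - 1, N), gcd(x + 1, N)

    print(factor_via_order_finding(15, 7))   # order of 7 mod 15 is 4 -> factors (3, 5)
    ```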

  12. Computing rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it.
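
    For reference, the column-pivoted flavor of rank-revealing QR that the report builds on can be exercised directly with SciPy's pivoted QR; the windowed block algorithm, incremental condition estimation, and triangular postprocessing described above are not part of this sketch.

    ```python
    import numpy as np
    from scipy.linalg import qr

    def numerical_rank_rrqr(A, tol=None):
        """Estimate numerical rank from a column-pivoted (rank-revealing) QR factorization."""
        Q, R, piv = qr(A, pivoting=True)
        diag = np.abs(np.diag(R))                  # |R_kk| decay reveals the numerical rank
        if tol is None:
            tol = max(A.shape) * np.finfo(A.dtype).eps * diag[0]
        return int(np.sum(diag > tol)), piv

    # a 100 x 60 matrix of (exact) rank 20
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 60))
    rank, piv = numerical_rank_rrqr(A)
    print(rank)   # 20
    ```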

  13. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
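
    A stripped-down version of the underlying idea, one-dimensional and with a single Gaussian component, already shows the EM structure: each datum has its own known noise variance, and we infer the mean and variance of the deconvolved distribution. The d-dimensional mixtures, missing-data projections, conjugate priors and split-and-merge steps of the full method are omitted from this sketch.

    ```python
    import numpy as np

    def deconvolve_gaussian_1d(y, noise_var, n_iter=200):
        """EM for y_i = x_i + e_i with e_i ~ N(0, noise_var_i) and x_i ~ N(m, V).

        Returns the mean m and variance V of the underlying (deconvolved) Gaussian.
        """
        m, V = np.mean(y), np.var(y)
        for _ in range(n_iter):
            # E-step: posterior of each underlying x_i given its own noise variance
            gain = V / (V + noise_var)
            post_mean = m + gain * (y - m)
            post_var = V * noise_var / (V + noise_var)
            # M-step: re-estimate the underlying mean and variance
            m = np.mean(post_mean)
            V = np.mean((post_mean - m) ** 2 + post_var)
        return m, V

    rng = np.random.default_rng(2)
    x = rng.normal(1.0, 0.5, size=5000)              # underlying distribution
    s = rng.uniform(0.2, 1.0, size=5000) ** 2        # heteroskedastic noise variances
    y = x + rng.normal(0.0, np.sqrt(s))
    print(deconvolve_gaussian_1d(y, s))              # approximately (1.0, 0.25)
    ```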

  14. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    NASA Astrophysics Data System (ADS)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
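
    The iterative-fitting idea, fit a B-spline to the surviving candidates and then discard candidates that lie far from the current fit, can be sketched in 2-D with SciPy's least-squares splines. The 3-D fissure enhancement, candidate pruning and lobar segmentation of the actual method are not reproduced, and the rejection threshold and knot count below are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def iterative_bspline_fit(x, y, n_knots=8, n_iter=5, threshold=0.3):
        """Least-squares cubic B-spline fit, iteratively discarding far-away candidates."""
        keep = np.ones_like(x, dtype=bool)
        spline = None
        for _ in range(n_iter):
            order = np.argsort(x[keep])
            xs, ys = x[keep][order], y[keep][order]
            knots = np.linspace(xs[0], xs[-1], n_knots + 2)[1:-1]   # interior knots only
            spline = LSQUnivariateSpline(xs, ys, knots, k=3)
            keep = np.abs(y - spline(x)) < threshold                # re-evaluate all candidates
        return spline, keep

    # noisy "fissure" candidates plus scattered spurious objects
    rng = np.random.default_rng(3)
    x_fissure = rng.uniform(0, 10, 400)
    y_fissure = np.sin(x_fissure) + 0.05 * rng.standard_normal(400)
    x_noise = rng.uniform(0, 10, 60)
    y_noise = rng.uniform(-2, 2, 60)
    x = np.concatenate([x_fissure, x_noise]); y = np.concatenate([y_fissure, y_noise])
    spline, keep = iterative_bspline_fit(x, y)
    print(f"{keep.sum()} of {x.size} candidates retained by the final fit")
    ```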

  15. Combining point context and dynamic time warping for online gesture recognition

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and thus has its superiority in online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, the experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods especially when gesture information is incomplete.
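
    For comparison, a plain (offline) dynamic time warping distance with a Sakoe-Chiba-style band is sketched below; the point-context shape descriptor and the online sliding-window accumulation over an unfinished gesture described above are not included.

    ```python
    import numpy as np

    def dtw_distance(a, b, window=None):
        """Windowed dynamic time warping distance between two trajectories.

        a, b : arrays of shape (n, d) and (m, d); window limits |i - j| (Sakoe-Chiba band).
        """
        n, m = len(a), len(b)
        w = max(window or max(n, m), abs(n - m))   # band must at least cover the length gap
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - w), min(m, i + w) + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # compare an ongoing gesture prefix against two templates
    t = np.linspace(0, np.pi, 50)
    template_arc = np.c_[np.cos(t), np.sin(t)]
    template_line = np.c_[t, t]
    ongoing = np.c_[np.cos(t[:30]), np.sin(t[:30])] \
        + 0.01 * np.random.default_rng(4).standard_normal((30, 2))
    print(dtw_distance(ongoing, template_arc, window=10),
          dtw_distance(ongoing, template_line, window=10))
    ```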

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  17. Crohn's Disease: Genetics Update.

    PubMed

    Wang, Ming-Hsi; Picco, Michael F

    2017-09-01

    Since the discovery of the first Crohn's disease (CD) gene NOD2 in 2001, 140 genetic loci have been found in whites using high-throughput genome-wide association studies. Several genes influence the CD subphenotypes and treatment response. With the observations of increasing prevalence in Asia and developing countries and the incomplete explanation of CD variance, other underexplored areas need to be integrated through novel methodologies. Algorithms that incorporate specific genetic risk alleles with other biomarkers will be developed and used to predict CD disease course, complications, and response to specific therapies, allowing precision medicine to become real in CD.

  18. Incomplete Detection of Nonclassical Phase-Space Distributions

    NASA Astrophysics Data System (ADS)

    Bohmann, M.; Tiedau, J.; Bartley, T.; Sperling, J.; Silberhorn, C.; Vogel, W.

    2018-02-01

    We implement the direct sampling of negative phase-space functions via unbalanced homodyne measurement using click-counting detectors. The negativities significantly certify nonclassical light in the high-loss regime using a small number of detectors which cannot resolve individual photons. We apply our method to heralded single-photon states and experimentally demonstrate the most significant certification of nonclassicality for only two detection bins. By contrast, the frequently applied Wigner function fails to directly indicate such quantum characteristics for the quantum efficiencies present in our setup without applying additional reconstruction algorithms. Therefore, we realize a robust and reliable approach to characterize nonclassical light in phase space under realistic conditions.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
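
    The imputation idea, draw each missing entry from a distribution built from the observed entries of that property and emit several completed copies of the table, can be sketched as follows. The Bayesian machinery, per-entry error distributions and SFCOMPO-specific handling are left out, and the optional Gaussian perturbation is only an assumption standing in for the assumed error distribution.

    ```python
    import numpy as np

    def impute_instances(table, n_instances=10, noise_scale=0.0, seed=0):
        """Generate multiple completed copies of a table with missing values (NaNs).

        Each missing entry is drawn from the empirical distribution of the observed
        values in its column, optionally perturbed by Gaussian noise.
        """
        rng = np.random.default_rng(seed)
        completed = []
        for _ in range(n_instances):
            filled = table.copy()
            for col in range(table.shape[1]):
                observed = table[~np.isnan(table[:, col]), col]
                missing = np.isnan(table[:, col])
                draws = rng.choice(observed, size=missing.sum(), replace=True)
                filled[missing, col] = draws + noise_scale * rng.standard_normal(missing.sum())
            completed.append(filled)
        return completed

    # toy table: 6 samples x 3 properties, roughly 40% missing
    table = np.array([[1.0, np.nan, 3.0],
                      [2.0, 5.0, np.nan],
                      [np.nan, 6.0, 2.0],
                      [1.5, np.nan, np.nan],
                      [np.nan, 5.5, 2.5],
                      [2.5, 4.5, 3.5]])
    instances = impute_instances(table, n_instances=3)
    print(instances[0])
    ```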

  20. From intuition to statistics in building subsurface structural models

    USGS Publications Warehouse

    Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.

    2011-01-01

    Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs, and results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt, in addition to extrapolating existing data into seismic data gaps. This technique can be used for rapid reservoir appraisal and potentially has other applications in seismic processing, well planning, and borehole stability analysis.

  1. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
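
    Dempster's rule of combination, used above to fuse possibly conflicting inferences from the parallel diagnostic models, is easy to state over a small frame of fault hypotheses. How the parity-equation residuals are converted into basic probability assignments is not shown here, and the masses below are made up for illustration.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts: frozenset -> mass)."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                      # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # two diagnostic models reporting over the fault hypotheses {f1, f2, f3}
    theta = frozenset({"f1", "f2", "f3"})
    model_a = {frozenset({"f1"}): 0.6, frozenset({"f1", "f2"}): 0.3, theta: 0.1}
    model_b = {frozenset({"f2"}): 0.5, frozenset({"f1", "f2"}): 0.4, theta: 0.1}
    print(dempster_combine(model_a, model_b))
    ```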

  2. Approximations of thermoelastic and viscoelastic control systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Liu, Z. Y.; Miller, R. E.

    1990-01-01

    Well-posed models and computational algorithms are developed and analyzed for control of a class of partial differential equations that describe the motions of thermo-viscoelastic structures. An abstract (state space) framework and a general well-posedness result are presented that can be applied to a large class of thermo-elastic and thermo-viscoelastic models. This state space framework is used in the development of a computational scheme to be used in the solution of a linear quadratic regulator (LQR) control problem. A detailed convergence proof is provided for the viscoelastic model and several numerical results are presented to illustrate the theory and to analyze problems for which the theory is incomplete.
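
    Once a finite-dimensional approximation of the system is available, the LQR gain follows from the algebraic Riccati equation. The sketch below uses SciPy on placeholder matrices, not the thermo-viscoelastic model itself.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Continuous-time LQR: minimize the integral of x'Qx + u'Ru for x' = Ax + Bu."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P, control u = -K x

    # placeholder second-order oscillator standing in for a modal approximation
    A = np.array([[0.0, 1.0], [-4.0, -0.1]])
    B = np.array([[0.0], [1.0]])
    K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
    print(K, np.linalg.eigvals(A - B @ K))        # closed-loop eigenvalues in the left half-plane
    ```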

  3. Rotation to a Partially Specified Target Matrix in Exploratory Factor Analysis: How Many Targets?

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2013-01-01

    The purpose of this study was to explore the influence of the number of targets specified on the quality of exploratory factor analysis solutions with a complex underlying structure and incomplete substantive measurement theory. Three Monte Carlo studies were performed based on the ratio of the number of observed variables to the number of…

  4. Culture as a variable in health research: perspectives and caveats.

    PubMed

    Al-Bannay, Hana; Jarus, Tal; Jongbloed, Lyn; Yazigi, Maya; Dean, Elizabeth

    2014-09-01

    To augment the rigor of health promotion research, this perspective article describes how cultural factors impact the outcomes of health promotion studies either intentionally or unintentionally. It proposes ways in which these factors can be addressed or controlled in designing studies and interpreting their results. We describe how variation within and across cultures can be considered within a study, e.g. the conceptualization of research questions or hypotheses, and the methodology including sampling, surveys and interviews. We provide multiple examples of how culture influences the interpretation of study findings. Inadequately accounting or controlling for cultural variations in health promotion studies, whether they are planned or unplanned, can lead to incomplete research questions, incomplete data gathering, spurious results and limited generalizability of the findings. In health promotion research, factors related to culture and cultural variations need to be considered, acknowledged or controlled irrespective of the purpose of the study, to maximize the reliability, validity and generalizability of study findings. These issues are particularly relevant in contemporary health promotion research focusing on global lifestyle-related conditions where cultural factors have a pivotal role and warrant being understood.

  5. Is curettage needed for uncomplicated incomplete spontaneous abortion?

    PubMed

    Ballagh, S A; Harris, H A; Demasio, K

    1998-11-01

    Spontaneous abortion occurs in 15% to 20% of all human pregnancies. Since the late 1800s, the management of incomplete spontaneous abortion has focused on using curettage to empty the uterus as quickly as possible. This practice began to reduce blood loss and infection and has been unquestioned for 4 decades. In today's medical climate, few spontaneous abortions are the result of illegal manipulation, given the availability of legal pregnancy termination. Antibiotics and transfusions are available, should complications arise in conservatively managed cases. Two prospective randomized trials suggest that conservative management may be advantageous for women who have stable vital signs without evidence of infection. They will have fewer perforations and, possibly, fewer infections and uterine synechiae with expectant or medical management. Larger trials should be undertaken to critically assess surgical evacuation compared to medical management, factoring in the psychologic impact of treatment. We believe that medical management will prove to be the most appropriate treatment for uncomplicated spontaneous incomplete abortion in the 21st century.

  6. Maintaining vigilance on a simulated ATC monitoring task across repeated sessions.

    DOT National Transportation Integrated Search

    1994-03-01

    Maintaining alertness to information provided visually is an important aspect of air traffic controllers' work. Improper or incomplete scanning and monitoring behavior is often referred to as one of the causal factors associated with operational erro...

  7. Efficient network disintegration under incomplete information: the comic effect of link prediction

    NASA Astrophysics Data System (ADS)

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using link prediction methods to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.
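
    A toy version of the pipeline, recover likely missing links with a simple similarity index and then attack the reshaped network greedily by degree, can be put together with NetworkX. The resource-allocation index and the degree-based attack below are illustrative stand-ins for the indices and disintegration strategies compared in the paper.

    ```python
    import networkx as nx

    def critical_nodes_with_link_prediction(G_observed, n_predicted_links=20, n_targets=30):
        """Rank attack targets on an incomplete network reshaped by link prediction."""
        scores = sorted(nx.resource_allocation_index(G_observed),
                        key=lambda t: t[2], reverse=True)          # simple similarity index
        G = G_observed.copy()
        G.add_edges_from((u, v) for u, v, _ in scores[:n_predicted_links])
        targets = []
        for _ in range(n_targets):
            node = max(G.degree, key=lambda t: t[1])[0]            # greedy attack by degree
            targets.append(node)
            G.remove_node(node)
        return targets

    # true network versus observed network with 20% of links hidden
    G_true = nx.barabasi_albert_graph(200, 3, seed=5)
    G_obs = G_true.copy()
    G_obs.remove_edges_from(list(G_true.edges())[::5])

    H = G_true.copy()
    for node in critical_nodes_with_link_prediction(G_obs):
        H.remove_node(node)                                        # evaluate the attack on the true network
    print("giant component after attack:", len(max(nx.connected_components(H), key=len)))
    ```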

  8. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.

  9. Rare events modeling with support vector machine: Application to forecasting large-amplitude geomagnetic substorms and extreme events in financial markets.

    NASA Astrophysics Data System (ADS)

    Gavrishchaka, V. V.; Ganguli, S. B.

    2001-12-01

    Reliable forecasting of rare events in a complex dynamical system is a challenging problem that is important for many practical applications. Due to the nature of rare events, the data set available for construction of the statistical and/or machine learning model is often very limited and incomplete. Therefore, many widely used approaches, including such robust algorithms as neural networks, can easily become inadequate for rare event prediction. Moreover, in many practical cases models with high-dimensional inputs are required. This limits applications of the existing rare event modeling techniques (e.g., extreme value theory) that focus on univariate cases. These approaches are not easily extended to multivariate cases. Support vector machine (SVM) is a machine learning system that can provide an optimal generalization using very limited and incomplete training data sets and can efficiently handle high-dimensional data. These features may allow SVM to be used to model rare events in some applications. We have applied an SVM-based system to the problem of large-amplitude substorm prediction and extreme event forecasting in stock and currency exchange markets. Encouraging preliminary results will be presented and other possible applications of the system will be discussed.
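
    One common way to keep an SVM usable on such limited, imbalanced data is to up-weight the rare class in the loss. The abstract does not specify the features, kernel or weighting, so everything in the sketch below, including the synthetic data, is an illustrative assumption.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # synthetic stand-in: ~2% "rare events" in a 10-dimensional feature space
    rng = np.random.default_rng(6)
    X_common = rng.standard_normal((2000, 10))
    X_rare = rng.standard_normal((40, 10)) + 1.5          # rare class shifted
    X = np.vstack([X_common, X_rare])
    y = np.concatenate([np.zeros(2000), np.ones(40)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    # class_weight="balanced" up-weights the rare class inside the SVM loss
    clf = SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced")
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
    ```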

  10. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    PubMed

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows the users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the users to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot) allowing the users to visually validate the correctness of scaffolded contigs and in a tabular mode allowing the users to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.

  11. Efficient network disintegration under incomplete information: the comic effect of link prediction.

    PubMed

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-10

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Moreover, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the "comic effect" of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.

  12. Efficient network disintegration under incomplete information: the comic effect of link prediction

    PubMed Central

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-01-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Moreover, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized. PMID:26960247

  13. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    PubMed

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
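    A minimal uncertainty-sampling round, as a sketch of the human-in-the-loop idea rather than the authors' detector; the classifier, the oracle callback, and all array names are placeholders.

    ```python
    # One round of pool-based active learning with uncertainty sampling (sketch only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning_round(X_labeled, y_labeled, X_pool, oracle, n_queries=10):
        clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
        proba = clf.predict_proba(X_pool)[:, 1]
        query_idx = np.argsort(np.abs(proba - 0.5))[:n_queries]   # most uncertain candidates
        new_labels = np.array([oracle(X_pool[i]) for i in query_idx])   # ask the expert
        X_new = np.vstack([X_labeled, X_pool[query_idx]])
        y_new = np.concatenate([y_labeled, new_labels])
        return X_new, y_new, clf
    ```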

  15. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
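    The classical post-processing around the quantum period-finding core can be illustrated directly. The sketch below assumes a period r of a^x mod N has already been obtained and shows why some runs fail (odd r, or a trivial square root), which is why several runs are generally needed.

    ```python
    # Classical step of Shor's algorithm: turn a period r of a^x mod N into factors of N.
    from math import gcd

    def factor_from_period(a, r, N):
        if r % 2:                       # odd period: this run fails, choose a new base a
            return None
        y = pow(a, r // 2, N)
        if y == N - 1:                  # trivial square root of 1: run again
            return None
        p, q = gcd(y - 1, N), gcd(y + 1, N)
        return (p, q) if 1 < p < N else None

    # Example: N = 15, a = 7 has period r = 4, which yields the factors 3 and 5.
    print(factor_from_period(7, 4, 15))
    ```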

  16. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithm characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow for reducing the time taken to export unnecessary intermediate data, and a buffer-communication-computation strategy for improving the communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
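    A bare-bones sketch of the row-wise decomposition idea using mpi4py (not the paper's code): a local operator (slope) is computed on each rank's band of rows; the halo exchange and the global flow-routing steps with data dependences are omitted.

    ```python
    # Row-wise domain decomposition of a raster for a local terrain operator (sketch only).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        dem = np.random.rand(1024, 1024)              # placeholder elevation grid
        chunks = np.array_split(dem, size, axis=0)
    else:
        chunks = None
    local = comm.scatter(chunks, root=0)              # each rank receives a band of rows

    gy, gx = np.gradient(local)                       # purely local computation (halo ignored)
    local_slope = np.degrees(np.arctan(np.hypot(gx, gy)))

    slope = comm.gather(local_slope, root=0)          # reassemble the result on rank 0
    if rank == 0:
        slope = np.vstack(slope)
    ```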

  17. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  18. Clinical antibacterial effectiveness and biocompatibility of gaseous ozone after incomplete caries removal.

    PubMed

    Krunić, Jelena; Stojanović, Nikola; Đukić, Ljiljana; Roganović, Jelena; Popović, Branka; Simić, Ivana; Stojić, Dragica

    2018-06-01

    To evaluate the local effect of gaseous ozone on bacteria in deep carious lesions after incomplete caries removal, using chlorhexidine as control, and to investigate its effect on pulp vascular endothelial growth factor (VEGF), neuronal nitric oxide synthase (nNOS), and superoxide dismutase (SOD). The antibacterial effect was evaluated in 48 teeth with diagnosed deep carious lesions. After incomplete caries removal, teeth were randomly allocated into two groups regarding the cavity disinfectant used: ozone (open system) or 2% chlorhexidine. Dentin samples were analyzed for the presence of total bacteria and Lactobacillus spp. by real-time quantitative polymerase chain reaction. For evaluation of the ozone effect on dental pulp, 38 intact permanent teeth indicated for pulp removal/tooth extraction were included. After cavity preparation, teeth were randomly allocated into two groups: ozone group and control group. VEGF/nNOS level and SOD activity in dental pulp were determined by enzyme-linked immunosorbent assay and spectrophotometric method, respectively. Ozone application decreased the number of total bacteria (p = 0.001) and Lactobacillus spp. (p < 0.001), similarly to chlorhexidine. The VEGF (p < 0.001) and nNOS (p = 0.012) levels in dental pulp after ozone application were higher, while SOD activity was lower (p = 0.001), compared to those in control pulp. The antibacterial effect of ozone on residual bacteria after incomplete caries removal was similar to that of 2% chlorhexidine. The effect of ozone on pulp VEGF, nNOS, and SOD indicated its biocompatibility. Ozone appears to be an effective and biocompatible cavity disinfectant in the treatment of deep carious lesions by the incomplete caries removal technique.

  19. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
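    The general pattern of scaling, building an incomplete-factorization preconditioner, and handing it to a residual-norm-reducing Krylov solver can be sketched with SciPy as a stand-in for the simulator's BILU/BSGS preconditioners and truncated generalized-conjugate-gradient solver; the tridiagonal test matrix is only a placeholder.

    ```python
    # Diagonal scaling + incomplete LU preconditioning + GMRES (illustrative pattern only).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # placeholder system
    b = np.ones(n)

    D_inv = sp.diags(1.0 / A.diagonal())            # simple diagonal scaling prior to iteration
    A_s, b_s = (D_inv @ A).tocsc(), D_inv @ b

    ilu = spla.spilu(A_s, drop_tol=1e-4, fill_factor=10)     # incomplete LU factorization
    M = spla.LinearOperator(A_s.shape, matvec=ilu.solve)     # preconditioner action M^{-1} v

    x, info = spla.gmres(A_s, b_s, M=M)
    print("converged" if info == 0 else f"gmres info={info}")
    ```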

  20. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
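    In the same spirit, a small inexact-Newton example with an ILU-preconditioned GMRES inner solve can be written with SciPy; this is only a toy model problem, not the transonic small disturbance solver described above.

    ```python
    # Inexact Newton with ILU-preconditioned GMRES inner iterations (toy problem, sketch only).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla
    from scipy.optimize import newton_krylov

    n = 200
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # toy linear operator

    def residual(u):
        return L @ u + u**3 - 1.0      # simple nonlinear model problem, not the TSD equation

    ilu = spla.spilu(L)                # ILU of the Jacobian at u = 0 (here J(0) = L)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    u = newton_krylov(residual, np.zeros(n), method="gmres", inner_M=M, f_tol=1e-8)
    print(np.linalg.norm(residual(u)))
    ```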

  1. Dioxins and Cardiovascular Mortality: A Review (EHP)

    EPA Science Inventory

    In spite of its large public health burden, the risk factors for cardiovascular disease remain incompletely understood. Here we review the association of cardiovascular disease (CVD) mortality with exposure to dioxin, a pollutant resulting from the production and combustion of ch...

  2. Matrix Gla Protein polymorphism, but not concentrations, is associated with radiographic hand osteoarthritis

    USDA-ARS?s Scientific Manuscript database

    Objective. Factors associated with mineralization and osteophyte formation in osteoarthritis (OA) are incompletely understood. Genetic polymorphisms of matrix Gla protein (MGP), a mineralization inhibitor, have been associated clinically with conditions of abnormal calcification. We therefore evalua...

  3. A proposed ecosystem services classification system to support green accounting

    EPA Science Inventory

    There are a multitude of actual or envisioned, complete or incomplete, ecosystem service classification systems being proposed to support Green Accounting. Green Accounting is generally thought to be the formal accounting attempt to factor environmental production into National ...

  4. Recommendations for shoulder restraint installation in general aviation aircraft.

    DOT National Transportation Integrated Search

    1966-09-01

    The use of inadequate or incomplete body restraint systems is a major factor in the current trend of increasing serious and fatal type injuries reported from general aviation accidents. An analysis of these accident injuries and conditions clearly in...

  5. Bunch-Kaufman factorization for real symmetric indefinite banded matrices

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1989-01-01

    The Bunch-Kaufman algorithm for factoring symmetric indefinite matrices was rejected for banded matrices because it destroys the banded structure of the matrix. Herein, it is shown that for a subclass of real symmetric matrices which arise in solving the generalized eigenvalue problem using Lanczos's method, the Bunch-Kaufman algorithm does not result in major destruction of the bandwidth. Space and time complexities of the algorithm are given and used to show that the Bunch-Kaufman algorithm is a significant improvement over LU factorization.
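    For reference, the symmetric-indefinite factorization A = L D L^T that Bunch-Kaufman pivoting produces can be checked with SciPy's ldl (which wraps LAPACK's ?sytrf); the small indefinite matrix below is only an illustration, unrelated to the Lanczos setting of the paper.

    ```python
    # Bunch-Kaufman-style LDL^T factorization of a symmetric indefinite matrix via SciPy.
    import numpy as np
    from scipy.linalg import ldl

    A = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  0., -1.,  0.],     # zero on the diagonal makes A indefinite
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])

    L, D, perm = ldl(A, lower=True)         # permutation is folded into the returned L
    print(np.allclose(L @ D @ L.T, A))      # True: the reconstruction holds
    ```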

  6. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step-size factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
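    For context, the baseline affine projection update that a variable-step-size variant builds on looks roughly as follows; the paper's specific step-size rule is not given in the abstract, so a fixed mu is shown and all names are placeholders.

    ```python
    # One update of the standard affine projection algorithm (fixed step size, sketch only).
    import numpy as np

    def apa_update(w, X, d, mu=0.5, delta=1e-4):
        # X: (taps, P) matrix of the last P input vectors, d: (P,) desired samples
        e = d - X.T @ w                                       # a-priori error vector
        w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
        return w, e
    ```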

  7. Levels and trends of child and adult mortality rates in the Islamic Republic of Iran, 1990-2013; protocol of the NASBOD study.

    PubMed

    Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran

    2014-03-01

    Calculation of the burden of diseases and risk factors is crucial for setting priorities in health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, first, we identify all possible data sources. Then, we calculate incompleteness of child and adult mortality separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information from covariates such as wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression that covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, Generalized Growth Balance, Synthetic Extinct Generation and a hybrid of the two mentioned methods are used. Afterwards, we combine the incompleteness estimates of the three methods using GPR and apply them to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges to accurate measurement of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of the health status of a population.

  8. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  9. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    PubMed Central

    Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard

    2008-01-01

    Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSMIV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205

  10. Preserving sparseness in multivariate polynomial factorization

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1977-01-01

    Attempts were made to factor these ten polynomials on MACSYMA. However, it did not get very far with any of the larger polynomials. At that time, MACSYMA used an algorithm created by Wang and Rothschild. This factoring algorithm was also implemented for IBM's symbolic manipulation system SCRATCHPAD. A closer look at this old factoring algorithm revealed three problem areas, each of which contributes to loss of sparseness and intermediate expression growth. This study led to effective ways of avoiding these problems and, ultimately, to a new factoring algorithm. The three problems are known as the extraneous factor problem, the leading coefficient problem, and the bad zero problem. These problems are examined separately. Their causes and effects are set forth in detail; the ways to avoid or lessen these problems are described.

  11. Communication-avoiding symmetric-indefinite factorization

    DOE PAGES

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James; ...

    2014-11-13

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = P L T L^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  12. Communication-avoiding symmetric-indefinite factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = P L T L^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  13. Power of automated algorithms for combining time-line follow-back and urine drug screening test results in stimulant-abuse clinical trials.

    PubMed

    Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel

    2011-09-01

    In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. To compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms to generate self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, simple ELCON algorithms gave rise to the most powerful t-tests to detect mean group differences in stimulant drug use. Further investigation is needed to determine if simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research Monograph 167: The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. NTIS PB 97175889. GPO 017-024-01607-1. Bethesda, MD: National Institutes of Health, 1997).

  14. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal design parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach can be utilized to incorporate this kind of incomplete information in the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  15. A Study of Algorithms for Covariance Structure Analysis with Specific Comparisons Using Factor Analysis.

    ERIC Educational Resources Information Center

    Lee, S. Y.; Jennrich, R. I.

    1979-01-01

    A variety of algorithms for analyzing covariance structures are considered. Additionally, two methods of estimation, maximum likelihood, and weighted least squares are considered. Comparisons are made between these algorithms and factor analysis. (Author/JKS)

  16. Optimization of the sources in local hyperthermia using a combined finite element-genetic algorithm method.

    PubMed

    Siauve, N; Nicolas, L; Vollaire, C; Marchal, C

    2004-12-01

    This article describes an optimization process specially designed for local and regional hyperthermia in order to achieve the desired specific absorption rate in the patient. It is based on a genetic algorithm coupled to a finite element formulation. The optimization method is applied to real human organ meshes assembled from computerized tomography scans. A 3D finite element formulation is used to calculate the electromagnetic field in the patient, generated by radiofrequency or microwave sources. Space discretization is performed using incomplete first-order edge elements. The sparse complex symmetric matrix equation is solved using a conjugate gradient solver with potential projection pre-conditioning. The formulation is validated by comparison of calculated specific absorption rate distributions in a phantom to temperature measurements. A genetic algorithm is used to optimize the specific absorption rate distribution to predict the phases and amplitudes of the sources leading to the best focalization. The objective function is defined as the ratio of the specific absorption rate in the tumour to that in healthy tissues. Several constraints, regarding the specific absorption rate in the tumour and the total power in the patient, may be prescribed. Results obtained with two types of applicators (waveguides and annular phased array) are presented and demonstrate the capabilities of the developed optimization process.

  17. Improving Correlation Algorithms to Detect and Characterize Smaller Magnitude Induced Seismicity Swarms

    NASA Astrophysics Data System (ADS)

    Skoumal, R.; Brudzinski, M.; Currie, B.

    2015-12-01

    Induced seismic sequences often occur as swarms that can include thousands of small (< M 2) earthquakes. While the identification of this microseismicity would invariably aid in the characterization and modeling of induced sequences, traditional earthquake detection techniques often provide incomplete catalogs, even when local networks are deployed. Because induced sequences often include scores of micro-earthquakes that precede larger-magnitude events, the identification of these small magnitude events would be crucial for the early identification of induced sequences. By taking advantage of the repeating, swarm-like nature of induced seismicity, a more robust catalog can be created using complementary correlation algorithms in near real-time without the reliance on traditional earthquake detection and association routines. Since traditional earthquake catalog methodologies using regional networks have a relatively high detection threshold (M 2+), we have sought to develop correlation routines that can detect smaller magnitude sequences. While short-term/long-term average amplitude detection algorithms require a significant signal-to-noise ratio at multiple stations for confident identification, a correlation detector is capable of identifying earthquakes with high confidence using just a single station. The result is an embarrassingly parallel task that can be employed across a network as an early warning system for potentially induced seismicity while also better characterizing tectonic sequences beyond what traditional methods allow.
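    A single-station correlation detector can be sketched in a few lines (a naive loop for clarity; production systems add filtering, multiple channels and FFT-based correlation); the threshold and array names are placeholders.

    ```python
    # Slide a normalized template over continuous data and flag high-correlation windows.
    import numpy as np

    def correlation_detections(trace, template, threshold=0.7):
        n = len(template)
        t = (template - template.mean()) / (template.std() * n)
        hits = []
        for i in range(len(trace) - n + 1):
            w = trace[i:i + n]
            cc = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)   # Pearson correlation
            if cc > threshold:
                hits.append((i, cc))
        return hits
    ```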

  18. Toward open set recognition.

    PubMed

    Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E

    2013-07-01

    To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.

  19. A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.

    PubMed

    Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee

    2018-05-01

    Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR-based on social information concept, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation to perform the personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm of Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymund, T.D.

    Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also provides ways to include a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.

  1. Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm

    NASA Astrophysics Data System (ADS)

    Backes, Werner; Wetzel, Susanne

    In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today’s multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.

  2. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
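    For concreteness, the EM iteration for the factor analysis model referred to above (operating on a sample covariance S with k factors) can be sketched as below; the starting values and iteration count are arbitrary choices for illustration, not those of the cited paper, and S is assumed positive definite.

    ```python
    # Compact EM sketch for maximum likelihood factor analysis on a sample covariance S.
    import numpy as np

    def em_factor_analysis(S, k, n_iter=500):
        p = S.shape[0]
        L = np.linalg.cholesky(S)[:, :k]        # a proper (positive-variance) starting value
        psi = np.diag(S).copy()                 # initial unique variances
        for _ in range(n_iter):
            # E-step: posterior regression of factors on observations
            sigma = L @ L.T + np.diag(psi)
            beta = L.T @ np.linalg.inv(sigma)                   # k x p
            Ezz = np.eye(k) - beta @ L + beta @ S @ beta.T      # expected factor second moments
            # M-step: update loadings and unique variances
            L = S @ beta.T @ np.linalg.inv(Ezz)
            psi = np.diag(S - L @ beta @ S).copy()
        return L, psi
    ```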

  3. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2016-10-01

    Radiative transfer retrievals have become the standard in modelling of exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often invokes large, correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval and the RobERt spectral recognition algorithms. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large correlated parameter spaces. Nonetheless, data volumes can become prohibitively large and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated and self-sufficient retrieval algorithms, such manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief neural network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems and what 'dreaming' neural networks can tell us about atmospheric characteristics.

  4. Mark-resight abundance estimation under incomplete identification of marked individuals

    USGS Publications Warehouse

    McClintock, Brett T.; Hill, Jason M.; Fritz, Lowell; Chumbley, Kathryn; Luxa, Katie; Diefenbach, Duane R.

    2014-01-01

    Often less expensive and less invasive than conventional mark–recapture, so-called 'mark-resight' methods are popular in the estimation of population abundance. These methods are most often applied when a subset of the population of interest is marked (naturally or artificially), and non-invasive sighting data can be simultaneously collected for both marked and unmarked individuals. However, it can often be difficult to identify marked individuals with certainty during resighting surveys, and incomplete identification of marked individuals is potentially a major source of bias in mark-resight abundance estimators. Previously proposed solutions are ad hoc and will tend to underperform unless marked individual identification rates are relatively high (>90%) or individual sighting heterogeneity is negligible.Based on a complete data likelihood, we present an approach that properly accounts for uncertainty in marked individual detection histories when incomplete identifications occur. The models allow for individual heterogeneity in detection, sampling with (e.g. Poisson) or without (e.g. Bernoulli) replacement, and an unknown number of marked individuals. Using a custom Markov chain Monte Carlo algorithm to facilitate Bayesian inference, we demonstrate these models using two example data sets and investigate their properties via simulation experiments.We estimate abundance for grassland sparrow populations in Pennsylvania, USA when sampling was conducted with replacement and the number of marked individuals was either known or unknown. To increase marked individual identification probabilities, extensive territory mapping was used to assign incomplete identifications to individuals based on location. Despite marked individual identification probabilities as low as 67% in the absence of this territorial mapping procedure, we generally found little return (or need) for this time-consuming investment when using our proposed approach. We also estimate rookery abundance from Alaskan Steller sea lion counts when sampling was conducted without replacement, the number of marked individuals was unknown, and individual heterogeneity was suspected as non-negligible.In terms of estimator performance, our simulation experiments and examples demonstrated advantages of our proposed approach over previous methods, particularly when marked individual identification probabilities are low and individual heterogeneity levels are high. Our methodology can also reduce field effort requirements for marked individual identification, thus, allowing potential investment into additional marking events or resighting surveys.

  5. Multidimensional generalized-ensemble algorithms for complex systems.

    PubMed

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
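    The statement that the replica-exchange weight factor is just a product of ordinary Boltzmann-like factors corresponds to the familiar swap test below, shown here for the ordinary one-dimensional case that the multidimensional formulation generalizes.

    ```python
    # Metropolis-style acceptance test for exchanging configurations between two replicas.
    import numpy as np

    def attempt_swap(beta_i, beta_j, energy_i, energy_j, rng=np.random.default_rng()):
        # Accept with probability min(1, exp[(beta_i - beta_j) * (energy_i - energy_j)]).
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0 or rng.random() < np.exp(delta)
    ```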

  6. MUFACT: An Algorithm for Multiple Factor Analyses of Singular and Nonsingular Data with Orthogonal and Oblique Transformation Solutions

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    1978-01-01

    A general factor analysis computer algorithm is briefly discussed. The algorithm is highly transportable with minimum limitations on the number of observations. Both singular and non-singular data can be analyzed. (Author/JKS)

  7. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  8. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.

  9. Predictors of seizure freedom after incomplete resection in children.

    PubMed

    Perry, M S; Dunoyer, C; Dean, P; Bhatia, S; Bavariya, A; Ragheb, J; Miller, I; Resnick, T; Jayakar, P; Duchowny, M

    2010-10-19

    Incomplete resection of the epileptogenic zone (EZ) is the most important predictor of poor outcome after resective surgery for intractable epilepsy. We analyzed the contribution of preoperative and perioperative variables including MRI and EEG data as predictors of seizure-free (SF) outcome after incomplete resection. We retrospectively reviewed patients <18 years of age with incomplete resection for epilepsy with 2 years of follow-up. Fourteen preoperative and perioperative variables were compared in SF and non-SF (NSF) patients. We compared lesional patients, categorized by reason for incompleteness, to lesional patients with complete resection. We analyzed for effect of complete EEG resection on SF outcome in patients with incompletely resected MRI lesions and vice versa. Eighty-three patients with incomplete resection were included with 41% becoming SF. Forty-eight lesional patients with complete resection were included. Thirty-eight percent (57/151) of patients with incomplete resection and 34% (47/138) with complete resection were excluded secondary to lack of follow-up or incomplete records. Contiguous MRI lesions were predictive of seizure freedom after incomplete resection. Fifty-seven percent of patients incomplete by MRI alone, 52% incomplete by EEG alone, and 24% incomplete by both became SF compared to 77% of patients with complete resection (p = 0.0005). Complete resection of the MRI- and EEG-defined EZ is the best predictor of seizure freedom, though patients incomplete by EEG or MRI alone have better outcome compared to patients incomplete by both. More than one-third of patients with incomplete resection become SF, with contiguous MRI lesions a predictor of SF outcome.

  10. 42 CFR 438.6 - Contract requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE General Provisions § 438.6 Contract requirements. (a) Regional...) Terminology. As used in this paragraph, the following terms have the indicated meanings: (i) Actuarially sound... adjustments to account for factors such as medical trend inflation, incomplete data, MCO, PIHP, or PAHP...

  11. 42 CFR 438.6 - Contract requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE General Provisions § 438.6 Contract requirements. (a) Regional...) Terminology. As used in this paragraph, the following terms have the indicated meanings: (i) Actuarially sound... adjustments to account for factors such as medical trend inflation, incomplete data, MCO, PIHP, or PAHP...

  12. 42 CFR 438.6 - Contract requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE General Provisions § 438.6 Contract requirements. (a) Regional...) Terminology. As used in this paragraph, the following terms have the indicated meanings: (i) Actuarially sound... adjustments to account for factors such as medical trend inflation, incomplete data, MCO, PIHP, or PAHP...

  13. The recovery status from delayed graft function can predict long-term outcome after deceased donor kidney transplantation.

    PubMed

    Lee, Juhan; Song, Seung Hwan; Lee, Jee Youn; Kim, Deok Gie; Lee, Jae Geun; Kim, Beom Seok; Kim, Myoung Soo; Huh, Kyu Ha

    2017-10-20

The effect of delayed graft function (DGF) recovery on long-term graft outcome is unclear. The aim of this study was to examine the association of DGF recovery status with long-term outcome. We analyzed 385 recipients who underwent single kidney transplantation from brain-dead donors between 2004 and 2015. Patients were grouped according to renal function at 1 month post-transplantation: control (without DGF); recovered DGF (glomerular filtration rate [GFR] ≥ 30 mL/min/1.73 m²); and incompletely recovered DGF group (GFR < 30 mL/min/1.73 m²). DGF occurred in 104 of 385 (27%) recipients. Of the DGF patients, 70 recovered from DGF and 34 incompletely recovered from DGF. Death-censored graft survival rates for control, recovered DGF, and incompletely recovered DGF groups were 95.3%, 94.7%, and 80.7%, respectively, at 5 years post-transplantation (P = 0.003). Incompletely recovered DGF was an independent risk factor for death-censored graft loss (HR = 3.410, 95% CI 1.114-10.437). DGF was associated with increased risk for patient death regardless of DGF recovery status. Mean GFRs at 5 years were 65.5 ± 20.8, 62.2 ± 27.0, and 45.8 ± 15.4 mL/min/1.73 m² for control, recovered, and incompletely recovered DGF groups, respectively (P < 0.001). Control group and recovered DGF patients had similar renal outcomes. However, DGF was associated with increased risk for patient death regardless of DGF recovery status.

  14. Methane, Black Carbon, and Ethane Emissions from Natural Gas Flares in the Bakken Shale, North Dakota.

    PubMed

    Gvakharia, Alexander; Kort, Eric A; Brandt, Adam; Peischl, Jeff; Ryerson, Thomas B; Schwarz, Joshua P; Smith, Mackenzie L; Sweeney, Colm

    2017-05-02

    Incomplete combustion during flaring can lead to production of black carbon (BC) and loss of methane and other pollutants to the atmosphere, impacting climate and air quality. However, few studies have measured flare efficiency in a real-world setting. We use airborne data of plume samples from 37 unique flares in the Bakken region of North Dakota in May 2014 to calculate emission factors for BC, methane, ethane, and combustion efficiency for methane and ethane. We find no clear relationship between emission factors and aircraft-level wind speed or between methane and BC emission factors. Observed median combustion efficiencies for methane and ethane are close to expected values for typical flares according to the US EPA (98%). However, we find that the efficiency distribution is skewed, exhibiting log-normal behavior. This suggests incomplete combustion from flares contributes almost 1/5 of the total field emissions of methane and ethane measured in the Bakken shale, more than double the expected value if 98% efficiency was representative. BC emission factors also have a skewed distribution, but we find lower emission values than previous studies. The direct observation for the first time of a heavy-tail emissions distribution from flares suggests the need to consider skewed distributions when assessing flare impacts globally.

  15. Risk factors leading to mucoperiosteal flap necrosis after primary palatoplasty in patients with cleft palate.

    PubMed

    Rossell-Perry, Percy; Figallo-Hudtwalcker, Olga; Vargas-Chanduvi, Roberto; Calderon-Ayvar, Yvette; Romero-Narvaez, Carolina

    2017-10-01

Few studies have been published reporting risk factors for flap necrosis after primary palatoplasty in patients with cleft palate. This complication is rare, and the event is a disaster for both the patient and the surgeon. This study was performed to explore the associations between different risk factors and the development of flap necrosis after primary palatoplasty in patients with cleft palate. This is a case-control study. A 20-year retrospective analysis (1994-2015) was performed on patients with nonsyndromic cleft palate identified from medical records and screening-day registries. Demographic and risk-factor data were collected from patient reports, including information about age at surgery, gender, cleft palate type, and degree of severity. Odds ratios and 95% confidence intervals were derived from logistic regression analysis. All cases with diagnoses of flap necrosis after primary palatoplasty were included in the study (48 patients) and 156 controls were considered. In multivariate analysis, female sex, age (older than 15 years), cleft type (bilateral and incomplete), and severe cleft palate index were associated with significantly increased risk for flap necrosis. The findings suggest that female sex, older age, cleft type (bilateral and incomplete), and severe cleft palate index may be associated with the development of flap necrosis after primary palatoplasty in patients with cleft palate.

  16. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGES

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
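
    A simplified sketch of the imputation idea described above: draw each missing entry from the empirical distribution of the observed values in the same column (optionally perturbed by an assumed error model) and repeat to generate several completed instances of the table. This illustrates the general approach only, not the MCBDG/SFCOMPO code.

```python
# Multiple imputation by sampling from per-column empirical distributions;
# a simplified take on the idea behind MCBDG (not the actual SFCOMPO pipeline).
import numpy as np

def impute_instances(data, n_instances=5, noise_sd=0.0, seed=0):
    """Return `n_instances` completed copies of `data` (NaN marks missing values)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    completed = []
    for _ in range(n_instances):
        filled = data.copy()
        for j in range(data.shape[1]):
            observed = data[~np.isnan(data[:, j]), j]
            missing = np.isnan(data[:, j])
            draws = rng.choice(observed, size=missing.sum(), replace=True)
            if noise_sd > 0:                    # optional assumed error model
                draws = draws + rng.normal(0.0, noise_sd, size=draws.size)
            filled[missing, j] = draws
        completed.append(filled)
    return completed

table = np.array([[1.0, np.nan, 3.0],
                  [2.0, 5.0, np.nan],
                  [np.nan, 6.0, 9.0]])
for inst in impute_instances(table, n_instances=2):
    print(inst)
```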

  17. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that to perceive unfamiliar objects, or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
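
    As a generic illustration of how fuzzy reasoning can be attached to edge detection, the sketch below maps the local gradient magnitude to a fuzzy edge membership with a smooth sigmoid rather than a hard threshold. The midpoint and steepness parameters are assumptions, and this is not the specific rule base of the NASA method described above.

```python
# Generic fuzzy edge membership on an image gradient; not the specific NASA
# algorithm, which builds richer fuzzy rules over local pixel data.
import numpy as np

def fuzzy_edge_membership(image, midpoint=0.2, steepness=20.0):
    """Return a [0, 1] edge-membership map for a grayscale image in [0, 1]."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Sigmoid membership: values near `midpoint` are "possibly an edge",
    # much larger values are "definitely an edge".
    return 1.0 / (1.0 + np.exp(-steepness * (magnitude - midpoint)))

# Toy image: a dark square on a bright background.
img = np.ones((32, 32))
img[8:24, 8:24] = 0.0
membership = fuzzy_edge_membership(img)
print(membership.round(2)[7:10, 7:10])   # memberships rise across the boundary
```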

  18. Addition and subtraction by students with Down syndrome

    NASA Astrophysics Data System (ADS)

    Noda Herrera, Aurelia; Bruno, Alicia; González, Carina; Moreno, Lorenzo; Sanabria, Hilda

    2011-01-01

    We present a research report on addition and subtraction conducted with Down syndrome students between the ages of 12 and 31. We interviewed a group of students with Down syndrome who executed algorithms and solved problems using specific materials and paper and pencil. The results show that students with Down syndrome progress through the same procedural levels as those without disabilities though they have difficulties in reaching the most abstract level (numerical facts). The use of fingers or concrete representations (balls) appears as a fundamental process among these students. As for errors, these vary widely depending on the students, and can be attributed mostly to an incomplete knowledge of the decimal number system.

  19. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods that are most successful in accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with a possibly dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
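
    A minimal sketch of the approximate-inverse idea: each column of the preconditioner M solves an independent small least-squares problem min ||A m_j - e_j|| restricted to a fixed sparsity pattern (here, the nonzero pattern of the corresponding column of A). This dense, serial toy only illustrates the n independent least-squares problems mentioned above, not the CM-5 implementation.

```python
# Sparse-approximate-inverse sketch: each column of M solves an independent
# small least-squares problem over a fixed sparsity pattern. Illustrative only.
import numpy as np

def spai(A):
    """Right approximate inverse M minimizing ||A M - I||_F column by column,
    with column j of M restricted to the nonzero pattern of column j of A."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        J = np.flatnonzero(A[:, j] != 0)                 # allowed entries of column j of M
        I = np.flatnonzero(np.abs(A[:, J]).sum(axis=1))  # rows touched by those entries
        I = np.union1d(I, [j])                           # keep the row carrying e_j
        e = (I == j).astype(float)
        m_hat, *_ = np.linalg.lstsq(A[np.ix_(I, J)], e, rcond=None)
        M[J, j] = m_hat
    return M

n = 8
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
M = spai(A)
print(np.linalg.norm(A @ M - np.eye(n)))   # residual of the approximate inverse
```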

  20. Key Generation for Fast Inversion of the Paillier Encryption Function

    NASA Astrophysics Data System (ADS)

    Hirano, Takato; Tanaka, Keisuke

We study fast inversion of the Paillier encryption function. In particular, we focus only on key generation and do not modify the Paillier encryption function itself. We propose three key generation algorithms based on speeding-up techniques for the RSA encryption function. With our algorithms, the size of the private CRT exponent is half that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms and can construct a private CRT exponent with low Hamming weight. The third algorithm is a variant of the second one and has some advantages, such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose parameter settings for these algorithms and analyze the security of the Paillier encryption function with keys generated by these algorithms against known attacks. Finally, we give experimental results for our algorithms.
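
    For reference, a textbook Paillier key generation, encryption, and decryption with toy primes is sketched below. It shows only the baseline scheme that the record leaves unmodified; the proposed speed-ups (small private CRT exponents, low Hamming weight) are not reproduced.

```python
# Textbook Paillier with toy primes. The record's algorithms modify key
# generation to speed up decryption (e.g., small CRT exponents); not shown here.
from math import gcd

def lcm(a, b):
    return a // gcd(a, b) * b

def keygen(p, q):
    n = p * q
    lam = lcm(p - 1, q - 1)                    # Carmichael's lambda(n) for primes p, q
    g = n + 1                                  # standard simple choice of generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m, r):
    n, g = pub
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n   # L(c^lam mod n^2) * mu mod n

pub, priv = keygen(47, 59)
c1, c2 = encrypt(pub, 12, r=123), encrypt(pub, 30, r=456)
# Prints 12 and 42: the product of ciphertexts decrypts to the sum of plaintexts.
print(decrypt(pub, priv, c1), decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)))
```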

  1. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review

    PubMed Central

    van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M

    2015-01-01

    Objective Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative. PMID:26316651

  2. Recurrence rates and clinical outcome for dogs with grade II mast cell tumours with a low AgNOR count and Ki67 index treated with surgery alone.

    PubMed

    Smith, J; Kiupel, M; Farrelly, J; Cohen, R; Olmsted, G; Kirpensteijn, J; Brocks, B; Post, G

    2017-03-01

    Grade II mast cell tumours (MCT) are tumours with variable biologic behaviour. Multiple factors have been associated with outcome, including proliferation markers. The purpose of this study was to determine if extent of surgical excision affects recurrence rate in dogs with grade II MCT with low proliferation activity, determined by Ki67 and argyrophilic nucleolar organising regions (AgNOR). Eighty-six dogs with cutaneous MCT were evaluated. All dogs had surgical excision of their MCT with a low Ki67 index and combined AgNORxKi67 (Ag67) values. Twenty-three (27%) dogs developed local or distant recurrence during the median follow-up time. Of these dogs, six (7%) had local recurrence, one had complete and five had incomplete histologic margins. This difference in recurrence rates between dogs with complete and incomplete histologic margins was not significant. On the basis of this study, ancillary therapy may not be necessary for patients with incompletely excised grade II MCT with low proliferation activity. © 2015 John Wiley & Sons Ltd.

  3. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
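
    The key idea above, reusing a dense factorization kernel block by block, can be illustrated with a plain right-looking blocked Cholesky factorization. The sketch below is dense and serial, so it only mirrors the structure of the approach, not the Connection Machine implementation.

```python
# Right-looking blocked Cholesky: the dense kernel (np.linalg.cholesky) is applied
# to diagonal blocks, followed by a panel solve and a Schur-complement update.
import numpy as np
from scipy.linalg import solve_triangular

def blocked_cholesky(A, block=2):
    A = A.copy().astype(float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(0, n, block):
        kb = slice(k, min(k + block, n))
        rest = slice(kb.stop, n)
        L[kb, kb] = np.linalg.cholesky(A[kb, kb])           # factor diagonal block
        if kb.stop < n:
            # Panel solve: L[rest, kb] @ L[kb, kb].T = A[rest, kb]
            L[rest, kb] = solve_triangular(L[kb, kb], A[rest, kb].T, lower=True).T
            # Trailing update (Schur complement)
            A[rest, rest] -= L[rest, kb] @ L[rest, kb].T
    return L

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)          # symmetric positive definite test matrix
L = blocked_cholesky(A, block=2)
print(np.allclose(L @ L.T, A))       # True
```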

  4. Comparing the refuge strategy for managing the evolution of insect resistance under different reproductive strategies.

    PubMed

    Crowder, David W; Carrière, Yves

    2009-12-07

Genetically modified (GM) crops are used extensively worldwide to control diploid agricultural insect pests that reproduce sexually. However, future GM crops will likely soon target haplodiploid and parthenogenetic insects. As rapid pest adaptation could compromise these novel crops, strategies to manage resistance in haplodiploid and parthenogenetic pests are urgently needed. Here, we developed models to characterize factors that could delay or prevent the evolution of resistance to GM crops in diploid, haplodiploid, and parthenogenetic insect pests. The standard strategy for managing resistance in diploid pests relies on refuges of non-GM host plants and GM crops that produce high toxin concentrations. Although the tenets of the standard refuge strategy apply to all pests, this strategy does not greatly delay the evolution of resistance in haplodiploid or parthenogenetic pests. Two additional factors are needed to effectively delay or prevent the evolution of resistance in such pests: large recessive or smaller non-recessive fitness costs must reduce the fitness of resistant individuals in refuges (and ideally also on GM crops), and resistant individuals must have lower fitness on GM compared to non-GM crops (incomplete resistance). Recent research indicates that the magnitude and dominance of fitness costs could be increased by using specific host plants, natural enemies, or pathogens. Furthermore, incomplete resistance could be enhanced by engineering desirable traits into novel GM crops. Thus, the sustainability of GM crops that target haplodiploid or parthenogenetic pests will require careful consideration of the effects of reproductive mode, fitness costs, and incomplete resistance.

  5. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    PubMed

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
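
    A generic sum-product illustration on a chain of binary latent variables is sketched below: one forward and one backward pass of messages yield all marginals and the marginal likelihood in O(n) instead of the O(2^n) brute-force sum. The pedigree-specific factors of the study are not reproduced; the prior, transition, and evidence values are arbitrary.

```python
# Sum-product (forward-backward) on a chain of binary latent variables: the
# mechanism that turns an exponential marginalization into a linear one.
import numpy as np

def chain_marginals(prior, transition, evidence):
    """prior: (2,), transition: (2, 2), evidence: (n, 2) likelihoods p(obs_i | z_i)."""
    n = evidence.shape[0]
    fwd = np.zeros((n, 2))
    bwd = np.ones((n, 2))
    fwd[0] = prior * evidence[0]
    for i in range(1, n):                       # forward messages
        fwd[i] = (fwd[i - 1] @ transition) * evidence[i]
    for i in range(n - 2, -1, -1):              # backward messages
        bwd[i] = transition @ (evidence[i + 1] * bwd[i + 1])
    likelihood = fwd[-1].sum()                  # marginal likelihood of all evidence
    marginals = fwd * bwd / likelihood          # p(z_i | all evidence)
    return marginals, likelihood

prior = np.array([0.9, 0.1])                    # e.g., "non-carrier" vs "carrier"
transition = np.array([[0.95, 0.05], [0.2, 0.8]])
evidence = np.array([[0.8, 0.3], [0.7, 0.4], [0.1, 0.9]])
marg, like = chain_marginals(prior, transition, evidence)
print(marg.sum(axis=1), like)                   # each row of marginals sums to 1
```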

  6. The Psychology of Judgment for Outdoor Leaders.

    ERIC Educational Resources Information Center

    Clement, Kent

    Judgment is the process of making decisions with incomplete information concerning either the outcomes or the decision factors. Sound judgment that leads to good decisions is an essential skill needed by adventure education and outdoor leadership professionals. Cognitive psychology provides several theories and insights concerning the accuracy of…

  7. Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance.

    PubMed

    Vandersypen, L M; Steffen, M; Breyta, G; Yannoni, C S; Sherwood, M H; Chuang, I L

    The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes. Quantum computers, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm. Although important for the study of quantum computers, experimental demonstration of this algorithm has proved elusive. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects in our system.
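
    A purely classical illustration of the number-theoretic reduction behind Shor's algorithm for N = 15 is sketched below: find the multiplicative order r of a base a modulo N, then read the factors off gcd(a^(r/2) ± 1, N). The quantum computer's role is only to find r efficiently; here r is found by brute force, which is feasible only for toy N.

```python
# Classical order-finding reduction used by Shor's algorithm, for N = 15.
from math import gcd

def order(a, N):
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1, "a must be coprime to N (otherwise gcd already factors N)"
    r = order(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                      # unlucky base, retry with another a
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

# Order of 7 mod 15 is 4, so gcd(7**2 - 1, 15) = 3 and gcd(7**2 + 1, 15) = 5.
print(shor_classical(15, 7))             # (3, 5)
```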

  8. Assessment of Cardiovascular Disease Risk in South Asian Populations

    PubMed Central

    Hussain, S. Monira; Oldenburg, Brian; Zoungas, Sophia; Tonkin, Andrew M.

    2013-01-01

    Although South Asian populations have high cardiovascular disease (CVD) burden in the world, their patterns of individual CVD risk factors have not been fully studied. None of the available algorithms/scores to assess CVD risk have originated from these populations. To explore the relevance of CVD risk scores for these populations, literature search and qualitative synthesis of available evidence were performed. South Asians usually have higher levels of both “classical” and nontraditional CVD risk factors and experience these at a younger age. There are marked variations in risk profiles between South Asian populations. More than 100 risk algorithms are currently available, with varying risk factors. However, no available algorithm has included all important risk factors that underlie CVD in these populations. The future challenge is either to appropriately calibrate current risk algorithms or ideally to develop new risk algorithms that include variables that provide an accurate estimate of CVD risk. PMID:24163770

  9. Risk factors for severe acute malnutrition in children below 5 y of age in India: a case-control study.

    PubMed

    Mishra, Kirtisudha; Kumar, Praveen; Basu, Srikanta; Rai, Kiran; Aneja, Satinder

    2014-08-01

To determine the possible risk factors for severe acute malnutrition (SAM) in children below 5 y of age admitted to a hospital in north India. This case-control study was conducted in a medical college hospital among children below 5 y of age. All cases of SAM (diagnosed as per the WHO definition) between 6 and 59 mo of age were compared with age-matched controls with weight-for-height above -2SD of the WHO 2006 growth standards. Data regarding socio-demographic parameters, feeding practices and immunization were compared between the groups by univariable and multivariable logistic regression models. A total of 76 cases and 115 controls were enrolled. Among the 14 factors compared, maternal illiteracy, daily family income less than Rs. 200, large family size, lack of exclusive breast feeding in the first 6 mo, bottle feeding, administration of pre-lacteals, deprivation of colostrum and incomplete immunization were significant risk factors for SAM. Regarding complementary feeding, it was the consistency, rather than the age of initiation, frequency or variety, that showed a significant influence on the occurrence of SAM. Multivariate analysis revealed that the risk of SAM was independently associated with 6 factors, namely, illiteracy among mothers, incomplete immunization, practice of bottle feeding, consistency of complementary feeding, deprivation of colostrum and receipt of pre-lacteals at birth. The present study identifies certain risk factors which need to be focused on during health planning and policy making related to children with SAM in India.

  10. The clinical reasoning process in randomized clinical trials with patients with non-specific neck pain is incomplete: A systematic review.

    PubMed

    Maissan, Francois; Pool, Jan; de Raaij, Edwin; Mollema, Jürgen; Ostelo, Raymond; Wittink, Harriet

    2018-06-01

Primarily, to evaluate the completeness of the description of the clinical reasoning process in RCTs with patients with non-specific neck pain with an argued or diagnosed cause, i.e., an impairment or activity limitation. Secondly, to determine the association between the completeness of the clinical reasoning process and the degree of risk of bias. PubMed, CINAHL and PEDro were systematically searched from inception to July 2016. RCTs (n = 122) with patients with non-specific neck pain receiving physiotherapy treatment, published in English, were included. Data extraction included study characteristics and important features of the clinical reasoning process based on the Hypothesis-Oriented Algorithm for Clinicians II (HOAC II). Thirty-seven studies (30%) had a complete clinical reasoning process, of which 8 (6%) had a 'diagnosed cause' and 29 (24%) had an 'argued cause'. The Spearman's rho correlation between the extent of the clinical reasoning process and the risk of bias was -0.2. In the majority of studies (70%) the described clinical reasoning process was incomplete. A very small proportion (6%) had a 'diagnosed cause'. Therefore, better methodological quality does not necessarily imply a better described clinical reasoning process. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Sensor Fusion to Infer Locations of Standing and Reaching Within the Home in Incomplete Spinal Cord Injury.

    PubMed

    Lonini, Luca; Reissman, Timothy; Ochoa, Jose M; Mummidisetty, Chaithanya K; Kording, Konrad; Jayaraman, Arun

    2017-10-01

    The objective of rehabilitation after spinal cord injury is to enable successful function in everyday life and independence at home. Clinical tests can assess whether patients are able to execute functional movements but are limited in assessing such information at home. A prototype system is developed that detects stand-to-reach activities, a movement with important functional implications, at multiple locations within a mock kitchen. Ten individuals with incomplete spinal cord injuries performed a sequence of standing and reaching tasks. The system monitored their movements by combining two sources of information: a triaxial accelerometer, placed on the subject's thigh, detected sitting or standing, and a network of radio frequency tags, wirelessly connected to a wrist-worn device, detected reaching at three locations. A threshold-based algorithm detected execution of the combined tasks and accuracy was measured by the number of correctly identified events. The system was shown to have an average accuracy of 98% for inferring when individuals performed stand-to-reach activities at each tag location within the same room. The combination of accelerometry and tags yielded accurate assessments of functional stand-to-reach activities within a home environment. Optimization of this technology could simplify patient compliance and allow clinicians to assess functional home activities.

  12. Point coordinates extraction from localized hyperbolic reflections in GPR data

    NASA Astrophysics Data System (ADS)

    Ristić, Aleksandar; Bugarinović, Željko; Vrtunski, Milan; Govedarica, Miro

    2017-09-01

In this paper, we propose an automated detection algorithm for localizing the apexes and the points on the prongs of the hyperbolic reflections that arise in GPR scans. The objects of interest are cylindrical underground utilities, which produce a distinctive hyperbolic reflection in the radargram. The algorithm applies a trained neural network to the radargram in the form of a raster image, extracting the segments of interest that contain hyperbolic reflections; this significantly reduces the amount of data for further analysis. The extracted segments define the zones for apex localization. This is followed by extraction of points on the prongs of the hyperbolic reflections, carried out until a stopping criterion is satisfied, regardless of the borders of the segment of interest. In the final step, false hyperbolic reflections caused by constructive interference are classified and removed. The algorithm is implemented in the MATLAB environment. The proposed algorithm has several advantages. It successfully recognizes true hyperbolic reflections in radargram images and extracts their coordinates, with a very low rate of false detections and without prior knowledge of the number of hyperbolic reflections or buried utilities. It can be applied to radargrams containing single and multiple hyperbolic reflections, intersected, distorted, as well as incomplete (asymmetric) hyperbolic reflections, all in the presence of higher levels of noise. A special feature of the algorithm is a dedicated procedure for the analysis and removal of false hyperbolic reflections generated by constructive interference from reflectors associated with the utilities. The algorithm was tested on a number of synthetic radargrams and radargrams acquired in field surveys. To illustrate the performance of the proposed algorithm, we present its characteristics on five representative radargrams obtained under real conditions. In these examples we cover different acquisition scenarios by varying the number of buried objects, their disposition, size, and level of noise. The most complex example was also tested as a synthetic radargram generated by gprMax. The processing time for examples with one or two hyperbolic reflections is 0.1 to 0.3 s, while for the most complex examples it is 2.2 to 4.9 s. In general, the experimental results show that the proposed algorithm exhibits promising performance both in terms of utility detection and processing speed.
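
    Once points on a single hyperbolic reflection have been extracted, the apex can be recovered by fitting the standard point-reflector travel-time model t(x) = (2/v)·sqrt((x - x0)^2 + d^2). The sketch below is a generic least-squares fit on synthetic points with assumed units and starting values; it illustrates the localization step only, not the neural-network detection pipeline described above.

```python
# Generic apex-recovery step: fit the point-reflector travel-time model to points
# extracted from one hyperbola. Synthetic data, assumed units (m, ns).
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, x0, d, v):
    """Two-way travel time over a point reflector at (x0, depth d), velocity v."""
    return 2.0 / v * np.sqrt((x - x0) ** 2 + d ** 2)

# Synthetic "extracted points": apex at x0 = 2.0 m, depth 0.5 m, v = 0.1 m/ns.
x = np.linspace(0.5, 3.5, 25)
t = hyperbola(x, 2.0, 0.5, 0.1) + np.random.default_rng(0).normal(0, 0.2, x.size)

params, _ = curve_fit(hyperbola, x, t, p0=[x.mean(), 0.3, 0.08])
x0_est, d_est, v_est = params
print(f"apex x0 = {x0_est:.2f} m, depth = {d_est:.2f} m, velocity = {v_est:.3f} m/ns")
```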

  13. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, in the feature selection procedure, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model, in contrast to the incomplete description given by Shannon entropy. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
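
    A sketch of the feature pipeline described above, under the assumption that PyWavelets and scikit-learn stand in for the original implementation: each signal is decomposed with a wavelet packet, the per-node energy fractions form the feature vector, and a random forest both classifies and (via feature_importances_) guides feature-space optimization. Synthetic two-class signals replace real HVCB vibration data.

```python
# Wavelet-packet energy fractions as features, random forest as classifier.
# Synthetic signals stand in for HVCB vibration data; parameters are illustrative.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wp_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(np.square(node.data)) for node in wp.get_level(level)])
    return energies / energies.sum()            # energy rate per frequency band

rng = np.random.default_rng(0)
signals, labels = [], []
for label, freq in enumerate([5, 20]):          # two toy "fault classes"
    for _ in range(40):
        t = np.linspace(0, 1, 256)
        signals.append(np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size))
        labels.append(label)

X = np.array([wp_energy_features(s) for s in signals])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# Training accuracy and per-band importances (a guide for pruning the feature space).
print(clf.score(X, labels), clf.feature_importances_.round(2))
```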

  14. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

The majority of practical acoustics problems require solving boundary problems in non-canonical domains. Therefore the construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is both attractive from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically-shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature and identify the optimal methods to overcome them.

  15. Reconstruction of combustion temperature and gas concentration distributions using line-of-sight tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Zhirong; Sun, Pengshuai; Pang, Tao; Xia, Hua; Cui, Xiaojuan; Li, Zhe; Han, Luo; Wu, Bian; Wang, Yu; Sigrist, Markus W.; Dong, Fengzhong

    2016-07-01

    Spatial temperature and gas concentration distributions are crucial for combustion studies to characterize the combustion position and to evaluate the combustion regime and the released heat quantity. Optical computer tomography (CT) enables the reconstruction of temperature and gas concentration fields in a flame on the basis of line-of-sight tunable diode laser absorption spectroscopy (LOS-TDLAS). A pair of H2O absorption lines at wavelengths 1395.51 and 1395.69 nm is selected. Temperature and H2O concentration distributions for a flat flame furnace are calculated by superimposing two absorption peaks with a discrete algebraic iterative algorithm and a mathematical fitting algorithm. By comparison, direct absorption spectroscopy measurements agree well with the thermocouple measurements and yield a good correlation. The CT reconstruction data of different air-to-fuel ratio combustion conditions (incomplete combustion and full combustion) and three different types of burners (one, two, and three flat flame furnaces) demonstrate that TDLAS has the potential of short response time and enables real-time temperature and gas concentration distribution measurements for combustion diagnosis.

  16. Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems

    NASA Astrophysics Data System (ADS)

    Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn

    The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.

  17. Natural pixel decomposition for computational tomographic reconstruction from interferometric projection: algorithms and comparison

    NASA Astrophysics Data System (ADS)

    Cha, Don J.; Cha, Soyoung S.

    1995-09-01

A computational tomographic technique, termed the variable grid method (VGM), has been developed for improving interferometric reconstruction of flow fields under ill-posed data conditions of restricted scanning and incomplete projection. The technique is based on natural pixel decomposition, that is, division of a field into variable grid elements. The performances of two algorithms, that is, original and revised versions, are compared to investigate the effects of the data redundancy criteria and seed element forming schemes. Tests of the VGMs are conducted through computer simulation of experiments and reconstruction of fields with a limited view angle of 90 degrees. The temperature fields at two horizontal sections of a thermal plume of two interacting isothermal cubes, produced by a finite numerical code, are analyzed as test fields. The computer simulation demonstrates the superiority of the revised VGM to either the conventional fixed grid method or the original VGM. Both the maximum and average reconstruction errors are reduced appreciably. The reconstruction shows substantial improvement in the regions with dense scanning by probing rays. These regions are usually of interest in engineering applications.

  18. Korean Medication Algorithm for Depressive Disorder: Comparisons with Other Treatment Guidelines

    PubMed Central

    Wang, Hee Ryung; Park, Young-Min; Lee, Hwang Bin; Song, Hoo Rim; Jeong, Jong-Hyun; Seo, Jeong Seok; Lim, Eun-Sung; Hong, Jeong-Wan; Kim, Won; Jon, Duk-In; Hong, Jin-Pyo; Woo, Young Sup; Min, Kyung Joon

    2014-01-01

    We aimed to compare the recommendations of the Korean Medication Algorithm Project for Depressive Disorder 2012 (KMAP-DD 2012) with other recently published treatment guidelines for depressive disorder. We reviewed a total of five recently published global treatment guidelines and compared each treatment recommendation of the KMAP-DD 2012 with those in other guidelines. For initial treatment recommendations, there were no significant major differences across guidelines. However, in the case of nonresponse or incomplete response to initial treatment, the second recommended treatment step varied across guidelines. For maintenance therapy, medication dose and duration differed among treatment guidelines. Further, there were several discrepancies in the recommendations for each subtype of depressive disorder across guidelines. For treatment in special populations, there were no significant differences in overall recommendations. This comparison identifies that, by and large, the treatment recommendations of the KMAP-DD 2012 are similar to those of other treatment guidelines and reflect current changes in prescription pattern for depression based on accumulated research data. Further studies will be needed to address several issues identified in our review. PMID:24605117

  19. Breaking the indexing ambiguity in serial crystallography.

    PubMed

    Brehm, Wolfgang; Diederichs, Kay

    2014-01-01

    In serial crystallography, a very incomplete partial data set is obtained from each diffraction experiment (a `snapshot'). In some space groups, an indexing ambiguity exists which requires that the indexing mode of each snapshot needs to be established with respect to a reference data set. In the absence of such re-indexing information, crystallographers have thus far resorted to a straight merging of all snapshots, yielding a perfectly twinned data set of higher symmetry which is poorly suited for structure solution and refinement. Here, two algorithms have been designed for assembling complete data sets by clustering those snapshots that are indexed in the same way, and they have been tested using 15,445 snapshots from photosystem I [Chapman et al. (2011), Nature (London), 470, 73-77] and with noisy model data. The results of the clustering are unambiguous and enabled the construction of complete data sets in the correct space group P63 instead of (twinned) P6322 that researchers have been forced to use previously in such cases of indexing ambiguity. The algorithms thus extend the applicability and reach of serial crystallography.

  20. Direct localization of poles of a meromorphic function from measurements on an incomplete boundary

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Ando, Shigeru

    2010-01-01

    This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
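
    As a worked sketch of the closed-contour special case (the record's contribution is the open-arc case, which is not covered here), the generalized argument principle yields the power sums of the pole positions, and Newton's identities then recover the elementary symmetric polynomials:

```latex
% Closed-contour special case (sketch). If f is meromorphic inside a simple closed
% contour C with simple poles p_1,...,p_n and no zeros there, the generalized
% argument principle gives the power sums of the pole positions directly:
\[
  s_k \;=\; \sum_{j=1}^{n} p_j^{\,k}
        \;=\; -\,\frac{1}{2\pi i}\oint_C z^{k}\,\frac{f'(z)}{f(z)}\,dz ,
  \qquad k = 1,\dots,n .
\]
% Newton's identities then convert the power sums s_k into the elementary
% symmetric polynomials e_k of the pole positions,
\[
  k\,e_k \;=\; \sum_{m=1}^{k} (-1)^{m-1} e_{k-m}\, s_m , \qquad e_0 = 1 ,
\]
% so the poles are recovered as the roots of
% z^n - e_1 z^{n-1} + e_2 z^{n-2} - \dots + (-1)^n e_n = 0.
% The record's algorithm extends this idea to an open arc by eliminating the
% end-point terms through a suitable choice of weighting functions.
```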

  1. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122

  2. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement.

  3. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  4. Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor

    PubMed Central

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is introduced for self-interested robots, providing a new way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is attached to each self-interested robot and is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed: a two-step auction recruits team leaders and team collaborators to set up pursuit teams, which then apply certain strategies to complete the pursuit task. To verify the effectiveness of the algorithm, comparative experiments were conducted against the instantaneous greedy optimal auction algorithm; the results show that both the total pursuit time and the total team revenue are improved by the proposed algorithm. PMID:25152925

  5. Research on multirobot pursuit task allocation algorithm based on emotional cooperation factor.

    PubMed

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is introduced for self-interested robots, providing a new way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is attached to each self-interested robot and is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed: a two-step auction recruits team leaders and team collaborators to set up pursuit teams, which then apply certain strategies to complete the pursuit task. To verify the effectiveness of the algorithm, comparative experiments were conducted against the instantaneous greedy optimal auction algorithm; the results show that both the total pursuit time and the total team revenue are improved by the proposed algorithm.
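
    A loose sketch of the mechanism described above: the emotional cooperation factor attenuates over time and rises with external stimuli, and robots with a higher factor (and lower bid cost) are favored when recruiting collaborators. The exponential decay, gain, clipping, and greedy selection below are assumptions standing in for the paper's exact update and two-step auction.

```python
# Sketch of an emotional-cooperation-factor update and a greedy recruiting step.
# Decay rate, stimulus gain, and greedy selection are illustrative assumptions.
import numpy as np

def update_factor(factor, dt, stimulus, decay=0.1, gain=0.5):
    """Attenuate the factor over time and raise it with external stimuli."""
    factor = factor * np.exp(-decay * dt) + gain * stimulus
    return float(np.clip(factor, 0.0, 1.0))

def recruit_collaborators(factors, costs, team_size):
    """Favor robots with high cooperation factor and low bid cost
    (a greedy stand-in for the two-step auction)."""
    utility = np.asarray(factors) - np.asarray(costs)
    return np.argsort(utility)[::-1][:team_size]

factors = [update_factor(0.5, dt=1.0, stimulus=s) for s in (0.0, 0.4, 0.9)]
team = recruit_collaborators(factors, costs=[0.2, 0.3, 0.1], team_size=2)
print(np.round(factors, 2), team)
```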

  6. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    NASA Technical Reports Server (NTRS)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
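
    A generic, unweighted version of the rectification idea can be written as: find the view-factor matrix closest (in a least-squares sense) to the approximately computed one, subject to reciprocity A_i F_ij = A_j F_ji and closure Σ_j F_ij = 1, solved through the KKT system of the equality-constrained problem. The sketch below uses made-up numbers for a three-surface enclosure and omits the weighting that the record finds essential; extra constraints (e.g., F_ii = 0 for convex surfaces) could be appended as additional rows.

```python
# Unweighted least-squares view-factor rectification for a small enclosure:
# min ||F - F0||^2 subject to reciprocity and closure, via the KKT system.
import numpy as np

def rectify_view_factors(F0, areas):
    n = F0.shape[0]
    x0 = F0.flatten()                      # unknowns x[i*n + j] = F_ij
    rows, rhs = [], []
    for i in range(n):                     # closure: sum_j F_ij = 1
        r = np.zeros(n * n)
        r[i * n:(i + 1) * n] = 1.0
        rows.append(r); rhs.append(1.0)
    for i in range(n):                     # reciprocity: A_i F_ij - A_j F_ji = 0
        for j in range(i + 1, n):
            r = np.zeros(n * n)
            r[i * n + j] = areas[i]
            r[j * n + i] = -areas[j]
            rows.append(r); rhs.append(0.0)
    C, d = np.array(rows), np.array(rhs)
    m = C.shape[0]
    # KKT system of  min ||x - x0||^2  s.t.  C x = d
    K = np.block([[np.eye(n * n), C.T], [C, np.zeros((m, m))]])
    b = np.concatenate([x0, d])
    x = np.linalg.solve(K, b)[: n * n]
    return x.reshape(n, n)

F0 = np.array([[0.0, 0.52, 0.46],          # roughly computed view factors (made up)
               [0.31, 0.0, 0.71],
               [0.24, 0.62, 0.0]])
areas = np.array([1.0, 1.7, 2.1])
F = rectify_view_factors(F0, areas)
print(F.sum(axis=1))                                            # rows now sum to 1
print(np.allclose(areas[:, None] * F, (areas[:, None] * F).T))  # reciprocity holds
```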

  7. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, estimating and removing the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations that contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of properly spatio-temporally filtering GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We also observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals leads to a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
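
    For contrast with the pPCA approach above, the classical stacking-style filter on complete residual series is sketched below: the first principal component of the residual matrix (obtained via SVD) is taken as the Common Mode Error and subtracted. Unlike pPCA, this requires complete series; the synthetic residuals are only for illustration.

```python
# Classical PCA/stacking spatio-temporal filter on *complete* residual series.
# The record's pPCA handles missing epochs, which this sketch does not.
import numpy as np

def remove_cme_first_pc(residuals):
    """residuals: (epochs, stations) array with the deterministic model removed."""
    centered = residuals - residuals.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    cme = s[0] * np.outer(U[:, 0], Vt[0])      # rank-1 reconstruction of PC1
    explained = s[0] ** 2 / np.sum(s ** 2)
    return residuals - cme, cme, explained

rng = np.random.default_rng(0)
epochs, stations = 500, 25
common = rng.standard_normal((epochs, 1))                   # shared (CME-like) signal
res = 0.8 * common + 0.5 * rng.standard_normal((epochs, stations))
filtered, cme, explained = remove_cme_first_pc(res)
print(f"PC1 explains {explained:.0%} of residual variance")
```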

  8. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    PubMed

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") to regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whatever prior was used, making the benefit of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in the great potential of global model predictions to guide future surveys and improve regional sampling in data-poor regions.

  9. Longitudinal Mercury Monitoring Within the Japanese and Korean Communities (United States): Implications for Exposure Determination and Public Health Protection

    EPA Science Inventory

Background: Estimates of exposure to toxicants are predominantly obtained from single timepoint data. Fish consumption guidance based on these data may be incomplete as recommendations are unlikely to consider impact from factors such as intraindividual variability, seasonal dif...

  10. School Finance Reform: Factors that Mediate Legal Initiatives.

    ERIC Educational Resources Information Center

    Sweetland, Scott R.

    2000-01-01

    Although the Ohio Supreme Court announced its unconstitutionality verdict 3 years ago, litigation and outcomes are incomplete. Due to legislative and referenda failures, implementation has reverted to the judiciary branch. An effective solution may be to address school-finance reform on a case-by-case basis. (Contains 35 references.) (MLH)

  11. Examining the Factors That Facilitate Athletic Training Faculty Socialization into Higher Education

    ERIC Educational Resources Information Center

    Mazerolle, Stephanie M.; Barrett, Jessica L.; Nottingham, Sara

    2016-01-01

    Context: Doctoral education is the mechanism whereby athletic trainers can develop an awareness of their future roles and responsibilities in higher education. Evidence suggests that doctoral education may provide an incomplete understanding of these roles and responsibilities, warranting further investigation. Objective: To gain a better…

  12. Exploring Incomplete Rating Designs with Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.; Patil, Yogendra J.

    2018-01-01

    Recent research has explored the use of models adapted from Mokken scale analysis as a nonparametric approach to evaluating rating quality in educational performance assessments. A potential limiting factor to the widespread use of these techniques is the requirement for complete data, as practical constraints in operational assessment systems…

  13. A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan

    2008-03-01

The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such an appropriate data rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise property. Specifically, we consider an application in which the detector and source can move in several directions to achieve ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate in this work image reconstruction within an ROI from data acquired in this kind of application.

  14. A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence

    PubMed Central

    Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan

    2017-01-01

    The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
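
    The hybrid algorithm itself is defined in the paper in terms of its symmetric (intrinsic-information) divergence and is not reproduced here. As a point of reference only, the sketch below shows the classical multiplicative updates for NMF under the ordinary (asymmetric) Kullback-Leibler objective, the family of updates such hybrids build on; the matrix V, rank r, and iteration count are placeholders.

        import numpy as np

        def nmf_kl(V, r, n_iter=200, eps=1e-9, seed=0):
            """Classical multiplicative updates for NMF under the (asymmetric)
            Kullback-Leibler objective; a baseline sketch, not the paper's
            symmetric-divergence hybrid."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, r)) + eps
            H = rng.random((r, n)) + eps
            for _ in range(n_iter):
                WH = W @ H + eps
                H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
                WH = W @ H + eps
                W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
            return W, H

        # toy usage: factor a small random nonnegative matrix
        V = np.abs(np.random.default_rng(1).random((20, 15)))
        W, H = nmf_kl(V, r=4)
        print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))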

  15. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
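
    To make the TDRW idea concrete, the sketch below illustrates a generic one-dimensional version: under advection-dispersion with velocity v and dispersion coefficient D, the travel time over a fixed step length is commonly modeled as inverse Gaussian distributed, so a particle's total travel time is a sum of such draws. This is a textbook-style illustration under that assumption, not the paper's multi-dimensional algorithms or its treatment of incomplete mass recovery; v, D, and the step length dx are hypothetical values.

        import numpy as np

        def tdrw_travel_time(n_particles, total_length, dx, v, D, seed=0):
            """Sketch of a 1-D time-domain random walk: each particle's travel
            time over a step of length dx in a homogeneous medium is drawn from
            an inverse Gaussian (Wald) distribution with mean dx/v and shape
            dx**2/(2*D), then summed over the steps."""
            rng = np.random.default_rng(seed)
            n_steps = int(round(total_length / dx))
            mean = dx / v
            shape = dx ** 2 / (2.0 * D)
            # numpy's wald(mean, scale) is the inverse Gaussian with this mean and shape
            times = rng.wald(mean, shape, size=(n_particles, n_steps)).sum(axis=1)
            return times

        arrival = tdrw_travel_time(n_particles=10_000, total_length=10.0, dx=0.5, v=1.0, D=0.05)
        print(arrival.mean())  # close to total_length / v for weak dispersion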

  16. Extracting atmospheric turbulence and aerosol characteristics from passive imagery

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.

    2013-09-01

    Obtaining accurate, precise and timely information about the local atmospheric turbulence and extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging systems performance, and the propagation of directed energy. The capability to utilize passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would represent a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research investigates an application of fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metrics (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter Cn2, the Fried atmospheric coherence width r0, and the atmospheric extinction coefficient βext, from passive image data. We will analyze the algorithm performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under various turbulence intensity conditions. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving the performance in the near future.

  17. Matriculation Research Report: Incomplete Grades; Data & Analysis.

    ERIC Educational Resources Information Center

    Gerda, Joe

    The policy on incomplete grades at California's College of the Canyons states that incompletes may only be given under circumstances beyond students' control and that students must make arrangements with faculty prior to the end of the semester to clear the incomplete. Failure to complete an incomplete may result in an "F" grade. While…

  18. HIGH-RESOLUTION LINEAR POLARIMETRIC IMAGING FOR THE EVENT HORIZON TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  19. Investigation of automated feature extraction using multiple data sources

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Perkins, Simon J.; Pope, Paul A.; Theiler, James P.; David, Nancy A.; Porter, Reid B.

    2003-04-01

    An increasing number and variety of platforms are now capable of collecting remote sensing data over a particular scene. For many applications, the information available from any individual sensor may be incomplete, inconsistent or imprecise. However, other sources may provide complementary and/or additional data. Thus, for an application such as image feature extraction or classification, it may be that fusing the mulitple data sources can lead to more consistent and reliable results. Unfortunately, with the increased complexity of the fused data, the search space of feature-extraction or classification algorithms also greatly increases. With a single data source, the determination of a suitable algorithm may be a significant challenge for an image analyst. With the fused data, the search for suitable algorithms can go far beyond the capabilities of a human in a realistic time frame, and becomes the realm of machine learning, where the computational power of modern computers can be harnessed to the task at hand. We describe experiments in which we investigate the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks. We compare and contrast this software's capabilities on 1) individual data sets from different data sources 2) fused data sets from multiple data sources and 3) fusion of results from multiple individual data sources.

  20. Lung fissure detection in CT images using global minimal paths

    NASA Astrophysics Data System (ADS)

    Appia, Vikram; Patil, Uday; Das, Bipul

    2010-03-01

    Pulmonary fissures separate human lungs into five distinct regions called lobes. Detection of fissures is essential for localization of the lobar distribution of lung diseases, surgical planning and follow-up. Treatment planning also requires calculation of the lobe volume, and this volume estimation mandates accurate segmentation of the fissures. The presence of other structures (like vessels) near the fissure, along with its high variability in position, shape, etc., makes lobe segmentation a challenging task. False and incomplete fissures and the occurrence of disease add further complications to fissure detection. In this paper, we propose a semi-automated fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a) intensity, (b) distance from the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the minimum-energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain. The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites. The datasets span different pathological conditions and varying imaging artifacts.
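
    The paper's energy combines intensity, vessel distance, 2D curvature, and 3D continuity; as a generic illustration of the global minimal-path machinery only, the sketch below runs Dijkstra's algorithm on a placeholder 2D cost image between two hypothetical seed points.

        import heapq
        import numpy as np

        def minimal_path(cost, start, end):
            """Dijkstra shortest path on a 2-D cost image (8-connected).
            Returns the pixel coordinates of the globally minimal path.
            Generic sketch; the fissure paper builds its own cost function."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = cost[start]
            heap = [(cost[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == end:
                    break
                if d > dist[r, c]:
                    continue
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                            nd = d + cost[nr, nc]
                            if nd < dist[nr, nc]:
                                dist[nr, nc] = nd
                                prev[(nr, nc)] = (r, c)
                                heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [end], end
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        # toy usage on a random cost image with two hypothetical seed points
        cost = np.random.default_rng(0).random((64, 64)) + 0.1
        print(len(minimal_path(cost, (5, 5), (60, 60))))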

  1. High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S.; Wardle, John F. C.; Bouman, Katherine L.

    2016-09-01

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  2. Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland

    1998-01-01

    Given the imminent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
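
    The core quantity in REVEAL is the mutual information between candidate input sets and an output element, computed from state transition pairs. The sketch below illustrates that computation on a synthetic Boolean network (gene 0 driven by the AND of genes 1 and 2); the data, network size, and helper names are illustrative, not the paper's C implementation.

        import itertools
        import numpy as np

        def entropy(labels):
            """Shannon entropy (bits) of a sequence of discrete labels or label rows."""
            _, counts = np.unique(labels, return_counts=True, axis=0)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def mutual_information(inputs, output):
            """I(inputs; output) from paired samples: H(in) + H(out) - H(in, out)."""
            joint = np.column_stack([inputs, output])
            return entropy(inputs) + entropy(output) - entropy(joint)

        def infer_inputs(states, next_states, gene, k):
            """REVEAL-style search: find the k-subset of genes whose state carries
            maximal mutual information about the next state of `gene`."""
            n = states.shape[1]
            target = next_states[:, gene]
            return max(itertools.combinations(range(n), k),
                       key=lambda s: mutual_information(states[:, list(s)], target))

        # toy network: gene 0 is the AND of genes 1 and 2 at the previous step
        rng = np.random.default_rng(0)
        states = rng.integers(0, 2, size=(200, 5))
        next_states = states.copy()
        next_states[:, 0] = states[:, 1] & states[:, 2]
        print(infer_inputs(states, next_states, gene=0, k=2))  # expected: (1, 2)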

  3. MODIS infrared data applied to Popocatepetl's volcanic clouds

    NASA Astrophysics Data System (ADS)

    Rose, W. I.; Delgado-Granados, H.; Watson, I. M.; Matiella, M. A.; Escobar, D.; Gu, Y.

    2003-04-01

    Popocatepetl volcano, Mexico, has shown diverse activity for the past eight years and has been characterized by strong sulfur dioxide releases and ash eruptions of variable tropospheric height. We have begun study on the eruptive activity of December 2000 and January 2001, when Popocatepetl was showing prominent activity. MODIS data is abundant during this period, and we have applied a variety of algorithms which use infrared channels of MODIS and can potentially map and measure ash size and optical depth [Wen & Rose, 1994, J Geophys Res 99, 5421-5431], sulfur dioxide mass [Realmuto et al, 1997, J. Geophys. Res., 102, 15057-15072; Prata et al, in press, AGU Volcanism & Atmosphere Monograph] and sulfate particle size and mass [Yu & Rose, 2000, AGU Monograph 116: 87-100]. Because of variable environmental conditions (clouds, winds) and characteristics of the activity (explosivity and rates of sulfur dioxide and ash releases) the data set studied offers a robust test of the various algorithms, and the data may also be compared to data collected as part of the volcanic monitoring effort, including COSPEC-based sulfur dioxide surveys. The data set will be used to evaluate which algorithms work best in various conditions. At abstract time work on the data is incomplete, but we expect that such data may provide information that is useful to the volcanologists studying Popocatepetl and the people who provide information for ash cloud hazards to aircraft.

  4. Making predictions in a changing world-inference, uncertainty, and learning.

    PubMed

    O'Reilly, Jill X

    2013-01-01

    To function effectively, brains need to make predictions about their environment based on past experience, i.e., they need to learn about their environment. The algorithms by which learning occurs are of interest to neuroscientists, both in their own right (because they exist in the brain) and as a tool to model participants' incomplete knowledge of task parameters and hence, to better understand their behavior. This review focusses on a particular challenge for learning algorithms-how to match the rate at which they learn to the rate of change in the environment, so that they use as much observed data as possible whilst disregarding irrelevant, old observations. To do this algorithms must evaluate whether the environment is changing. We discuss the concepts of likelihood, priors and transition functions, and how these relate to change detection. We review expected and estimation uncertainty, and how these relate to change detection and learning rate. Finally, we consider the neural correlates of uncertainty and learning. We argue that the neural correlates of uncertainty bear a resemblance to neural systems that are active when agents actively explore their environments, suggesting that the mechanisms by which the rate of learning is set may be subject to top down control (in circumstances when agents actively seek new information) as well as bottom up control (by observations that imply change in the environment).

  5. Development of a job rotation scheduling algorithm for minimizing accumulated work load per body parts.

    PubMed

    Song, JooBong; Lee, Chaiwoo; Lee, WonJung; Bahn, Sangwoo; Jung, ChanJu; Yun, Myung Hwan

    2015-01-01

    For the successful implementation of job rotation, jobs should be scheduled systematically so that physical workload is evenly distributed with the use of various body parts. However, while the potential benefits are widely recognized by research and industry, there is still a need for a more effective and efficient algorithm that considers multiple work-related factors in job rotation scheduling. This study proposes a job rotation algorithm that aims to minimize musculoskeletal disorders by decreasing the overall workload. Multiple work characteristics are evaluated as inputs to the proposed algorithm. Important factors, such as physical workload on specific body parts, working height, involvement of heavy lifting, and worker characteristics such as physical disorders, are included in the algorithm. For evaluation of the overall workload in a given workplace, an objective function was defined to aggregate the scores from the individual factors. A case study, where the algorithm was applied at a workplace, is presented with an examination of its applicability and effectiveness. With the application of the suggested algorithm in the case study, the value of the final objective function, which is the weighted sum of the workload in various body parts, decreased by 71.7% when compared to a typical sequential assignment and by 84.9% when compared to a single job assignment, which is doing one job all day. An algorithm was developed using the data from the ergonomic evaluation tool used in the plant and from the known factors related to workload. The algorithm was developed so that it can be efficiently applied with a small number of required inputs, while covering a wide range of work-related factors. A case study showed that the algorithm was beneficial in determining a job rotation schedule aimed at minimizing workload across body parts.
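
    As a minimal illustration of the kind of objective described above, the sketch below aggregates accumulated per-body-part workload over a rotation schedule into a single weighted sum; the workload table, body-part weights, and schedules are hypothetical placeholders, not values from the study's ergonomic evaluation tool.

        import numpy as np

        # rows: jobs, columns: body parts (e.g. neck, shoulder, back, wrist);
        # values are hypothetical workload scores from an ergonomic evaluation
        workload = np.array([[2, 5, 3, 1],
                             [4, 1, 2, 5],
                             [1, 3, 5, 2]])
        weights = np.array([0.3, 0.3, 0.25, 0.15])   # hypothetical body-part weights

        def schedule_cost(assignment):
            """Weighted sum of accumulated body-part workload for one worker's
            rotation schedule; assignment[t] is the job done in slot t."""
            accumulated = workload[assignment].sum(axis=0)   # per body part
            return float(weights @ accumulated)

        print(schedule_cost([0, 1, 2]))   # rotating through all three jobs
        print(schedule_cost([0, 0, 0]))   # doing a single job all day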

  6. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adaptation of the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
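
    As an illustration of the shrinkage-thresholding family mentioned above, the sketch below applies FISTA to a generic nonnegativity-constrained least-squares problem of the form min ||Kx - b||^2 with x >= 0, which is the general shape of a relaxation-distribution fit; the kernel K and data b are random placeholders, not the paper's exponential kernel or regularization.

        import numpy as np

        def fista_nnls(K, b, n_iter=500):
            """FISTA for min_x 0.5*||K x - b||^2 subject to x >= 0.
            Generic sketch; the NMR paper applies a related scheme to the
            exponential-kernel inversion without an explicit decomposition."""
            L = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(K.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = K.T @ (K @ y - b)
                x_new = np.maximum(y - grad / L, 0.0)    # projected gradient step
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x

        # toy usage with a random kernel and a known nonnegative solution
        rng = np.random.default_rng(0)
        K = rng.random((200, 50))
        x_true = np.maximum(rng.standard_normal(50), 0)
        x_hat = fista_nnls(K, K @ x_true)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))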

  7. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
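
    A shared-memory sketch of the alternating nonnegative least squares idea is shown below: fix H and solve a nonnegative least-squares problem for W, then fix W and solve for H, and repeat. The distributed-memory data layout, MPI communication, and choice among multiple update solvers described in the abstract are omitted; matrix sizes and iteration counts are placeholders.

        import numpy as np
        from scipy.optimize import nnls

        def anls_nmf(A, r, n_outer=30, seed=0):
            """Alternating nonnegative least squares (ANLS) for A ~ W @ H with
            W, H >= 0, using column-wise/row-wise NNLS solves. Serial sketch only."""
            rng = np.random.default_rng(seed)
            m, n = A.shape
            W = rng.random((m, r))
            H = rng.random((r, n))
            for _ in range(n_outer):
                # fix W: solve min_{H>=0} ||W H - A||_F column by column
                H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
                # fix H: solve min_{W>=0} ||H^T W^T - A^T||_F row by row of W
                W = np.vstack([nnls(H.T, A[i, :])[0] for i in range(m)])
            return W, H

        A = np.abs(np.random.default_rng(1).random((40, 30)))
        W, H = anls_nmf(A, r=5)
        print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))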

  8. Algorithm 782 : codes for rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    This article describes a suite of codes as well as associated testing and timing drivers for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy and improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight usage and features of these codes.
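
    The codes described in the abstract are Fortran library routines; purely as an illustration of what a rank-revealing factorization exposes, the sketch below uses SciPy's column-pivoted QR and counts the diagonal entries of R above a tolerance to estimate numerical rank. This uses standard pivoted QR, not the windowed block strategy or the triangular post-processing of the paper.

        import numpy as np
        from scipy.linalg import qr

        def numerical_rank_rrqr(A, tol=1e-10):
            """Estimate numerical rank from a column-pivoted (rank-revealing style)
            QR by counting diagonal entries of R above tol * |R[0, 0]|."""
            _, R, piv = qr(A, mode='economic', pivoting=True)
            diag = np.abs(np.diag(R))
            return int(np.sum(diag > tol * diag[0])), piv

        # rank-3 test matrix: product of a 100x3 and a 3x20 random factor
        rng = np.random.default_rng(0)
        A = rng.random((100, 3)) @ rng.random((3, 20))
        rank, piv = numerical_rank_rrqr(A)
        print(rank)   # expected: 3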

  9. Inflammation and hypoxia in the kidney: friends or foes?

    PubMed

    Haase, Volker H

    2015-08-01

    Hypoxic injury is commonly associated with inflammatory-cell infiltration, and inflammation frequently leads to the activation of cellular hypoxia response pathways. The molecular mechanisms underlying this cross-talk during kidney injury are incompletely understood. Yamaguchi and colleagues identify CCAAT/enhancer-binding protein δ as a cytokine- and hypoxia-regulated transcription factor that fine-tunes hypoxia-inducible factor-1 signaling in renal epithelial cells and thus provide a novel molecular link between hypoxia and inflammation in kidney injury.

  10. DO THE RADIOLOGICAL CRITERIA WITH THE USE OF RISK FACTORS IMPACT THE FORECASTING OF ABDOMINAL NEUROBLASTIC TUMOR RESECTION IN CHILDREN?

    PubMed

    Penazzi, Ana Cláudia Soares; Tostes, Vivian Siqueira; Duarte, Alexandre Alberto Barros; Lederman, Henrique Manoel; Caran, Eliana Maria Monteiro; Abib, Simone de Campos Vieira

    2017-01-01

    The treatment of neuroblastoma depends on precise staging, which is performed postoperatively and depends on the surgeon's expertise. The use of image-defined risk factors at diagnosis appears to be predictive of resectability, complications, and homogeneity in staging. The aim was to compare the traditional resectability criteria with image-defined risk factors, based on radiological images, at two time points: at diagnosis and in the pre-surgical phase. Resectability, surgical complications, and relapse rate were analyzed. This was a retrospective study of 27 children with stage 3 and 4 abdominal and pelvic neuroblastoma, with tomography and/or magnetic resonance imaging at diagnosis and pre-surgically, identifying the presence of risk factors. The mean age of the children was 2.5 years at diagnosis; 55.6% were older than 18 months, 51.9% were girls, and 66.7% were in stage 4. There was concordance on tumor resectability between the two methods (INSS and IDRFs) at both evaluation time points, at diagnosis (p=0.007) and post-chemotherapy (p=0.019). Accordingly, all patients deemed resectable by IDRFs after chemotherapy had complete resection, whereas 87.5% of those deemed unresectable had incomplete resection. There was remission in 77.8%, 18.5% relapsed, and 33.3% died. Resectability was similar between the two methods at both time points; preoperative chemotherapy increased resectability and decreased the number of risk factors, and the presence of at least one IDRF was associated with incomplete resections and surgical complications; relapses were negligible.

  11. Medical cost analysis: application to colorectal cancer data from the SEER Medicare database.

    PubMed

    Bang, Heejung

    2005-10-01

    Incompleteness is a key feature of most survival data. Numerous well established statistical methodologies and algorithms exist for analyzing life or failure time data. However, induced censorship invalidates the use of those standard analytic tools for some survival-type data such as medical costs. In this paper, some valid methods currently available for analyzing censored medical cost data are reviewed. Some cautionary findings under different assumptions are envisioned through application to medical costs from colorectal cancer patients. Cost analysis should be suitably planned and carefully interpreted under various meaningful scenarios even with judiciously selected statistical methods. This approach would be greatly helpful to policy makers who seek to prioritize health care expenditures and to assess the elements of resource use.

  12. Detection of buried objects by fusing dual-band infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.

    1993-11-01

    We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.

  13. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
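
    The paper uses the local ensemble transform Kalman filter; as a simpler illustration of the ensemble analysis step that such filters build on, the sketch below implements a basic stochastic EnKF update with perturbed observations, a linear observation operator, and diagonal observation noise. All dimensions and matrices are toy placeholders.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            """One stochastic EnKF analysis step with perturbed observations.
            X: (n_state, n_ens) forecast ensemble; y: observation vector;
            H: linear observation operator; R: observation error covariance."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                    # sample covariance
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return X + K @ (Y - H @ X)

        rng = np.random.default_rng(0)
        n_state, n_obs, n_ens = 10, 4, 50
        X = rng.standard_normal((n_state, n_ens))        # prior ensemble
        H = np.eye(n_obs, n_state)                       # observe first 4 components
        R = 0.1 * np.eye(n_obs)
        truth = np.ones(n_state)
        y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)
        Xa = enkf_analysis(X, y, H, R, rng)
        print(np.abs(Xa.mean(axis=1)[:n_obs] - truth[:n_obs]).mean())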

  14. Reconstructing householder vectors from Tall-Skinny QR

    DOE PAGES

    Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...

    2015-08-05

    The Tall-Skinny QR (TSQR) algorithm is more communication-efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
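
    The first stage of TSQR is easy to sketch: factor independent row blocks, then factor the stack of the small R factors. The sketch below shows only that stage and checks the result against a reference QR; the Householder-vector reconstruction that is the paper's contribution, and the parallel communication layer, are not shown.

        import numpy as np

        def tsqr(A, n_blocks=4):
            """Tall-Skinny QR, first stage only: local QRs of row blocks followed
            by a QR of the stacked R factors. Returns the final R."""
            blocks = np.array_split(A, n_blocks, axis=0)
            Rs = [np.linalg.qr(B, mode='r') for B in blocks]    # local QRs
            return np.linalg.qr(np.vstack(Rs), mode='r')        # combine R factors

        rng = np.random.default_rng(0)
        A = rng.standard_normal((10_000, 20))
        R_tsqr = tsqr(A)
        R_ref = np.linalg.qr(A, mode='r')
        # R is unique only up to the signs of its rows, so compare magnitudes
        print(np.allclose(np.abs(R_tsqr), np.abs(R_ref)))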

  15. Comparison of reporting phase I trial results in ClinicalTrials.gov and matched publications.

    PubMed

    Shepshelovich, D; Goldvaser, H; Wang, L; Abdul Razak, A R; Bedard, P L

    2017-12-01

    Background Data on completeness of reporting of phase I cancer clinical trials in publications are lacking. Methods The ClinicalTrials.gov database was searched for completed adult phase I cancer trials with reported results. PubMed was searched for matching primary publications published prior to November 1, 2016. Reporting in primary publications was compared with the ClinicalTrials.gov database using a 28-point score (2=complete; 1=partial; 0=no reporting) for 14 items related to study design, outcome measures and safety profile. Inconsistencies between primary publications and ClinicalTrials.gov were recorded. Linear regression was used to identify factors associated with incomplete reporting. Results After a review of 583 trials in ClinicalTrials.gov, 163 matching primary publications were identified. Publications reported outcomes that did not appear in ClinicalTrials.gov in 25% of trials. Outcomes were upgraded, downgraded or omitted in publications in 47% of trials. The overall median reporting score was 23/28 (interquartile range 21-25). Incompletely reported items in >25% of publications were: inclusion criteria (29%), primary outcome definition (26%), secondary outcome definitions (53%), adverse events (71%), serious adverse events (80%) and dates of study start and database lock (91%). Higher reporting scores were associated with phase I (vs phase I/II) trials (p<0.001), multicenter trials (p<0.001) and publication in journals with lower impact factor (p=0.004). Conclusions Reported results in primary publications for early phase cancer trials are frequently inconsistent or incomplete compared with ClinicalTrials.gov entries. ClinicalTrials.gov may provide more comprehensive data from new cancer drug trials.

  16. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when using the correction factor. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  17. Block LU factorization

    NASA Technical Reports Server (NTRS)

    Demmel, James W.; Higham, Nicholas J.; Schreiber, Robert S.

    1992-01-01

    Many of the currently popular 'block algorithms' are scalar algorithms in which the operations have been grouped and reordered into matrix operations. One genuine block algorithm in practical use is block LU factorization, and this has recently been shown by Demmel and Higham to be unstable in general. It is shown here that block LU factorization is stable if A is block diagonally dominant by columns. Moreover, for a general matrix the level of instability in block LU factorization can be bounded in terms of the condition number kappa(A) and the growth factor for Gaussian elimination without pivoting. A consequence is that block LU factorization is stable for a matrix A that is symmetric positive definite or point diagonally dominant by rows or columns as long as A is well-conditioned.
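
    A minimal dense sketch of block LU factorization without pivoting is given below, assuming the leading blocks are nonsingular; as the abstract notes, stability is only guaranteed for special matrix classes such as block column diagonally dominant or symmetric positive definite matrices, so the diagonally dominant test matrix is deliberate.

        import numpy as np

        def block_lu(A, b):
            """Block LU factorization (no pivoting) with block size b, returning
            L (unit block lower triangular) and U (block upper triangular) with
            A = L @ U. Assumes the leading diagonal blocks are nonsingular."""
            n = A.shape[0]
            L = np.eye(n)
            U = A.astype(float).copy()
            for k in range(0, n, b):
                e = min(k + b, n)
                if e < n:
                    Akk = U[k:e, k:e]
                    # L21 = A21 @ inv(A11), computed by solving against A11^T
                    L[e:, k:e] = np.linalg.solve(Akk.T, U[e:, k:e].T).T
                    U[e:, k:e] = 0.0
                    U[e:, e:] -= L[e:, k:e] @ U[k:e, e:]   # Schur complement update
            return L, U

        rng = np.random.default_rng(0)
        A = rng.standard_normal((9, 9)) + 9 * np.eye(9)   # diagonally dominant test matrix
        L, U = block_lu(A, b=3)
        print(np.allclose(L @ U, A))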

  18. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case that the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation schemes unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor from the optimal. We also consider the naive algorithm where we choose backlinks from nodes with high PageRank values compared to the outdegree and show that the naive algorithm performs much worse on certain graphs compared to the constant factor approximation scheme.
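
    To make the setting concrete, the sketch below implements PageRank by power iteration and the naive backlink heuristic mentioned in the abstract (choose source nodes with a high PageRank-to-outdegree ratio); the graph is random and the constant-factor approximation algorithm of the paper is not reproduced.

        import numpy as np

        def pagerank(adj, d=0.85, n_iter=100):
            """Power iteration for PageRank on an adjacency matrix (adj[i, j] = 1
            for an edge i -> j). Dangling nodes link uniformly to every node."""
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
            r = np.full(n, 1.0 / n)
            for _ in range(n_iter):
                r = (1 - d) / n + d * (r @ P)
            return r

        def naive_backlinks(adj, target, k):
            """Naive heuristic: add k backlinks to `target` from the nodes with
            the highest PageRank-to-outdegree ratio."""
            r = pagerank(adj)
            ratio = r / (adj.sum(axis=1) + 1)       # +1 accounts for the new link
            candidates = [i for i in np.argsort(-ratio)
                          if i != target and not adj[i, target]]
            for i in candidates[:k]:
                adj[i, target] = 1
            return adj

        rng = np.random.default_rng(0)
        adj = (rng.random((30, 30)) < 0.1).astype(float)
        np.fill_diagonal(adj, 0)
        before = pagerank(adj)[5]
        after = pagerank(naive_backlinks(adj.copy(), target=5, k=3))[5]
        print(before, after)   # the target's PageRank increases after adding backlinks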

  19. Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean

    NASA Astrophysics Data System (ADS)

    Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.

    2010-12-01

    Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measuring dust aerosols has been incomplete at best. Satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions due to their high surface reflectivity in the mid-visible and near-infrared wavelengths, which have been typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels from instruments such as the Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels and thus it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well. A single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. The current results are encouraging and this dataset will be useful to future studies in understanding the effects of dust aerosols on global processes, long-term aerosol trends, quantifying dust emissions, transport, and inter-annual variability.

  20. Factors Influencing Enrolment: A Case Study from Birth to Twenty, the 1990 Birth Cohort in Soweto-Johannesburg

    ERIC Educational Resources Information Center

    Richter, Linda M.; Panday, Saadhna; Norris, Shane A.

    2009-01-01

    Longitudinal studies offer significant advantages in rendering data commensurate with the complexity of human development. However, incomplete enrolment and attrition over time can introduce bias. Furthermore, there is a scarcity of evaluative information on cohorts in developing countries. This paper documents various strategies adopted to…

  1. Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2017-01-01

    Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…

  2. SOURCE SAMPLING FINE PARTICULATE MATTER--INSTITUTIONAL OIL-FIRED BOILER

    EPA Science Inventory

    EPA seeks to understand the correlation between ambient fine PM and adverse human health effects, and there are no reliable emission factors to use for estimating PM2.5 or NH3. The most common source of directly emitted PM2.5 is incomplete combustion of fossil or biomass fuels. M...

  3. Adolescent immunization rates and the effect of socio-demographic factors on immunization in a cosmopolitan city (ERZURUM) in the eastern Turkey.

    PubMed

    Alp, Handan; Altinkaynak, Sevin; Arikan, Duygu; Ozyazicioğlu, Nurcan

    2006-04-01

    Pediatric vaccinations have decreased the incidence of and mortality from infectious diseases in children, but adolescents continue to be adversely affected by vaccine-preventable disease. The present study was performed to determine the status of adolescent immunization and to investigate the effect of several socio-demographic factors on immunization. Using the cluster-sampling method, 817 adolescents were selected from 24 high schools (15,000 students) in the central district of Erzurum (Turkey). Adolescents were categorized as completely vaccinated, incompletely vaccinated, unvaccinated or vaccination status unknown. Of the 817 adolescents, 6.9% were completely vaccinated, 24.4% were incompletely vaccinated and 64.1% were unvaccinated. The vaccination status of 4.6% of adolescents was unknown. A significant correlation was seen between immunization status and the number of siblings, the level of maternal and paternal education, the parents' socio-economic status, and health insurance. Our findings indicated that a small percentage of adolescents receive all of the recommended vaccines. In immunization programs in Turkey, priority should be given to increasing the adolescent immunization rate through middle school and/or adolescent vaccination.

  4. Comparison of Implicit Schemes for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1995-01-01

    For a computational flow simulation tool to be useful in a design environment, it must be very robust and efficient. To develop such a tool for incompressible flow applications, a number of different implicit schemes are compared for several two-dimensional flow problems in the current study. The schemes include Point-Jacobi relaxation, Gauss-Seidel line relaxation, incomplete lower-upper decomposition, and the generalized minimum residual method preconditioned with each of the three other schemes. The efficiency of the schemes is measured in terms of the computing time required to obtain a steady-state solution for the laminar flow over a backward-facing step, the flow over a NACA 4412 airfoil, and the flow over a three-element airfoil using overset grids. The flow solver used in the study is the INS2D code that solves the incompressible Navier-Stokes equations using the method of artificial compressibility and upwind differencing of the convective terms. The results show that the generalized minimum residual method preconditioned with the incomplete lower-upper factorization outperforms all other methods by at least a factor of 2.
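
    As a small illustration of the preconditioning idea compared in the abstract, the sketch below solves a generic sparse system with GMRES preconditioned by an incomplete LU factorization via SciPy; the 2-D Laplacian test matrix and the ILU drop tolerance are placeholders standing in for the flow-solver Jacobian, not the INS2D solver itself.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Generic sparse test system (2-D Laplacian) standing in for a flow Jacobian
        n = 50
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = sp.kronsum(T, T).tocsc()
        b = np.ones(A.shape[0])

        # Incomplete LU factorization used as the preconditioner M ~ A^{-1}
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, maxiter=200)
        print(info, np.linalg.norm(A @ x - b))   # info == 0 indicates convergence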

  5. Appointment "no-shows" are an independent predictor of subsequent quality of care and resource utilization outcomes.

    PubMed

    Hwang, Andrew S; Atlas, Steven J; Cronin, Patrick; Ashburner, Jeffrey M; Shah, Sachin J; He, Wei; Hong, Clemens S

    2015-10-01

    Identifying individuals at high risk for suboptimal outcomes is an important goal of healthcare delivery systems. Appointment no-shows may be an important risk predictor. To test the hypothesis that patients with a high propensity to "no-show" for appointments will have worse clinical and acute care utilization outcomes compared to patients with a lower propensity. We calculated the no-show propensity factor (NSPF) for patients of a large academic primary care network using 5 years of outpatient appointment data. NSPF corrects for patients with fewer appointments to avoid over-weighting of no-show visits in such patients. We divided patients into three NSPF risk groups and evaluated the association between NSPF and clinical and acute care utilization outcomes after adjusting for baseline patient characteristics. A total of 140,947 patients who visited a network practice from January 1, 2007, through December 31, 2009, and were either connected to a primary care physician or to a primary care practice, based on a previously validated algorithm, were included. Outcomes of interest were incomplete colorectal, cervical, and breast cancer screening, and above-goal hemoglobin A1c (HbA1c) and low-density lipoprotein (LDL) levels at 1-year follow-up, and hospitalizations and emergency department visits in the subsequent 3 years. Compared to patients in the low NSPF group, patients in the high NSPF group (n=14,081) were significantly more likely to have incomplete preventive cancer screening (aOR 2.41 [2.19-.66] for colorectal, aOR 1.85 [1.65-.08] for cervical, aOR 2.93 [2.62-3.28] for breast cancer), above-goal chronic disease control measures (aOR 2.64 [2.22-3.14] for HbA1c, aOR 1.39 [1.15-1.67] for LDL), and increased rates of acute care utilization (aRR 1.37 [1.31-1.44] for hospitalization, aRR 1.39 [1.35-1.43] for emergency department visits). NSPF is an independent predictor of suboptimal primary care outcomes and acute care utilization. NSPF may play an important role in helping healthcare systems identify high-risk patients.

  6. Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging

    PubMed Central

    Guo, Qijia; Wang, Jie; Chang, Tianying

    2017-01-01

    The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083

  7. Conserved patterns of incomplete reporting in pre-vaccine era childhood diseases

    PubMed Central

    Gunning, Christian E.; Erhardt, Erik; Wearing, Helen J.

    2014-01-01

    Incomplete observation is an important yet often neglected feature of observational ecological timeseries. In particular, observational case report timeseries of childhood diseases have played an important role in the formulation of mechanistic dynamical models of populations and metapopulations. Yet to our knowledge, no comprehensive study of childhood disease reporting probabilities (commonly referred to as reporting rates) has been conducted to date. Here, we provide a detailed analysis of measles and whooping cough reporting probabilities in pre-vaccine United States cities and states, as well as measles in cities of England and Wales. Overall, we find the variability between locations and diseases greatly exceeds that between methods or time periods. We demonstrate a strong relationship within location between diseases and within disease between geographical areas. In addition, we find that demographic covariates such as ethnic composition and school attendance explain a non-trivial proportion of reporting probability variation. Overall, our findings show that disease reporting is both variable and non-random and that completeness of reporting is influenced by disease identity, geography and socioeconomic factors. We suggest that variations in incomplete observation can be accounted for and that doing so can reveal ecologically important features that are otherwise obscured. PMID:25232131

  8. Studying hardness, workability and minimum bending radius in selectively laser-sintered Ti–6Al–4V alloy samples

    NASA Astrophysics Data System (ADS)

    Galkina, N. V.; Nosova, Y. A.; Balyakin, A. V.

    2018-03-01

    This research is relevant as it tries to improve the mechanical and service performance of the Ti–6Al–4V titanium alloy obtained by selective laser sintering. For that purpose, sintered samples were annealed at 750 and 850°C for an hour. Sintered and annealed samples were tested for hardness, workability and microstructure. It was found that incomplete annealing of selectively laser-sintered Ti–6Al–4V samples results in an insignificant reduction in hardness and ductility. Sintered and incompletely annealed samples had a hardness of 32...33 HRC, which is lower than the value of annealed parts specified in standards. Complete annealing at a temperature of 850°C reduces the hardness to 25 HRC and ductility by 15...20%. Incomplete annealing lowers the ductility factor from 0.08 to 0.06. Complete annealing lowers that value to 0.025. Complete annealing probably results in the embrittlement of sintered samples, perhaps due to their oxidation and hydrogenation in the air. Optical metallography showed lateral fractures in both sintered and annealed samples, which might be the reason why they had lower hardness and ductility.

  9. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During this process, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
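
    The iteration is easy to state concretely in the max-sum semiring, where multiplication is addition and marginalization is maximization: shift two factors that share a variable so that their sum is unchanged and their max-marginals over the shared variable coincide. The two-factor sketch below is only a toy instance of this idea; factor shapes and values are arbitrary.

        import numpy as np

        def equalize_pair(f, g):
            """One iteration of the scheme in the max-sum semiring: factors f(x, y)
            and g(y, z) share variable y; shift them so that their combination
            (here, the sum f + g) is unchanged and their max-marginals over y
            coincide. Toy two-factor sketch of max-sum diffusion."""
            a = f.max(axis=0)          # max-marginal of f over x, indexed by y
            b = g.max(axis=1)          # max-marginal of g over z, indexed by y
            delta = (b - a) / 2.0
            f = f + delta[None, :]     # add delta(y) to f ...
            g = g - delta[:, None]     # ... and subtract it from g: f + g unchanged
            return f, g

        rng = np.random.default_rng(0)
        f = rng.random((3, 4))         # f(x, y)
        g = rng.random((4, 5))         # g(y, z)
        bound_before = f.max() + g.max()
        f, g = equalize_pair(f, g)
        bound_after = f.max() + g.max()
        print(bound_before, bound_after)   # the upper bound does not increase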

  10. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at a lower computational cost. The proposed algorithm therefore offers good robustness and fast convergence compared to some hybrid genetic algorithms.
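
    A toy sketch of the two-stage idea is given below: screen candidate factor-level combinations with a small L4(2^3) orthogonal array, then refine the best candidate by finite-difference steepest descent. The quadratic objective, levels, and step size are hypothetical stand-ins for the paper's plate-vibration identification problem.

        import numpy as np

        def objective(x):
            """Hypothetical stand-in objective (the paper minimizes a frequency
            mismatch between measured and computed plate vibrations)."""
            return np.sum((x - np.array([1.2, -0.7, 0.4])) ** 2)

        # Stage 1: Taguchi-style screening with an L4(2^3) orthogonal array
        L4 = np.array([[0, 0, 0],
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])
        levels = np.array([[-2.0, 2.0]] * 3)            # two candidate levels per factor
        candidates = levels[np.arange(3), L4]           # map array rows to factor values
        best = candidates[np.argmin([objective(c) for c in candidates])]

        # Stage 2: refine the best screening point with finite-difference steepest descent
        x, step, h = best.astype(float), 0.1, 1e-5
        for _ in range(500):
            grad = np.array([(objective(x + h * e) - objective(x - h * e)) / (2 * h)
                             for e in np.eye(3)])
            x = x - step * grad
        print(x)   # approaches the hypothetical optimum [1.2, -0.7, 0.4]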

  11. Retention among North American HIV-infected persons in clinical care, 2000-2008.

    PubMed

    Rebeiro, Peter; Althoff, Keri N; Buchacz, Kate; Gill, John; Horberg, Michael; Krentz, Hartmut; Moore, Richard; Sterling, Timothy R; Brooks, John T; Gebo, Kelly A; Hogg, Robert; Klein, Marina; Martin, Jeffrey; Mugavero, Michael; Rourke, Sean; Silverberg, Michael J; Thorne, Jennifer; Gange, Stephen J

    2013-03-01

    Retention in care is key to improving HIV outcomes. The goal of this study was to describe 'churn' in patterns of entry, exit, and retention in HIV care in the United States and Canada. Adults contributing ≥1 CD4 count or HIV-1 RNA (HIV-lab) from 2000 to 2008 in North American AIDS Cohort Collaboration on Research and Design clinical cohorts were included. Incomplete retention was defined as lack of 2 HIV-laboratories (≥90 days apart) within 12 months, summarized by calendar year. Beta-binomial regression models were used to estimate adjusted odds ratios (OR) and 95% confidence intervals (CI) of factors associated with incomplete retention. Among 61,438 participants, 15,360 (25%) with incomplete retention significantly differed in univariate analyses (P < 0.001) from 46,078 (75%) consistently retained by age, race/ethnicity, HIV risk, CD4, antiretroviral therapy use, and country of care (United States vs. Canada). From 2000 to 2004, females (OR = 0.82, CI: 0.70 to 0.95), older individuals (OR = 0.78, CI: 0.74 to 0.83 per 10 years), and antiretroviral therapy users (OR = 0.61, CI: 0.54 to 0.68 vs. all others) were less likely to have incomplete retention, whereas black individuals (OR = 1.31, CI: 1.16 to 1.49, vs. white), those with injection drug use HIV risk (OR = 1.68, CI: 1.49 to 1.89, vs. noninjection drug use), and those in care longer (OR = 1.09, CI: 1.07 to 1.11 per year) were more likely to have incomplete retention. Results from 2005 to 2008 were similar. From 2000 to 2008, 75% of the North American AIDS Cohort Collaboration on Research and Design population was consistently retained in care with 25% experiencing some changes in status or churn. In addition to the programmatic and policy implications, the findings of this study identify patient groups who may benefit from focused retention efforts.

  12. Retention Among North American HIV–infected Persons in Clinical Care, 2000–2008

    PubMed Central

    Rebeiro, Peter; Althoff, Keri N.; Buchacz, Kate; Gill, M. John; Horberg, Michael; Krentz, Hartmut; Moore, Richard; Sterling, Timothy R.; Brooks, John T.; Gebo, Kelly A.; Hogg, Robert; Klein, Marina; Martin, Jeffrey; Mugavero, Michael; Rourke, Sean; Silverberg, Michael J.; Thorne, Jennifer; Gange, Stephen J.

    2013-01-01

    Background Retention in care is key to improving HIV outcomes. Our goal was to describe “churn” in patterns of entry, exit, and retention in HIV care in the US and Canada. Methods Adults contributing ≥1 CD4 count or HIV-1 RNA (HIV-lab) from 2000–2008 in North American Cohort Collaboration on Research and Design (NA-ACCORD) clinical cohorts were included. Incomplete retention was defined as lack of 2 HIV-labs (≥90 days apart) within 12 months, summarized by calendar year. We used beta-binomial regression models to estimate adjusted odds ratios (OR) and 95% confidence intervals (CI) of factors associated with incomplete retention. Results Among 61,438 participants, 15,360 (25%) with incomplete retention significantly differed in univariate analyses (p<0.001) from 46,078 (75%) consistently retained by age, race/ethnicity, HIV risk, CD4, ART use, and country of care (US vs. Canada). From 2000–2004, females (OR=0.82, CI:0.70–0.95), older individuals (OR=0.78, CI:0.74–0.83 per 10 years), and ART users (OR= 0.61, CI:0.54–0.68 vs all others) were less likely to have incomplete retention, while black individuals (OR=1.31, CI:1.16–1.49, vs. white), those with injection drug use (IDU) HIV risk (OR=1.68, CI:1.49–1.89, vs. non-IDU) and those in care longer (OR=1.09, CI:1.07–1.11 per year) were more likely to have incomplete retention. Results from 2005–2008 were similar. Discussion From 2000 to 2008, 75% of the NA-ACCORD population was consistently retained in care with 25% experiencing some change in status, or churn. In addition to the programmatic and policy implications, our findings identify patient groups who may benefit from focused retention efforts. PMID:23242158

  13. Structure-preserving and rank-revealing QR-factorizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Hansen, P.C.

    1991-11-01

    The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR factorization will exploit the fact that certain column exchanges do not change the sparsity structure, and compute a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce an RRQR-factorization.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Thomas E., E-mail: thomas.merchant@stjude.or; Chitti, Ramana M.; Li Chenghong

    Purpose: To identify risk factors associated with incomplete neurological recovery in pediatric patients with infratentorial ependymoma treated with postoperative conformal radiation therapy (CRT). Methods: The study included 68 patients (median age ± standard deviation of 2.6 ± 3.8 years) who were followed for 5 years after receiving CRT (54-59.4 Gy) and were assessed for function of cranial nerves V to VII and IX to XII, motor weakness, and dysmetria. The mean (± standard deviation) brainstem dose was 5,487 (±464) cGy. Patients were divided into four groups representing those with normal baseline and follow-up, those with abnormal baseline and full recovery, those with abnormal baseline and partial or no recovery, and those with progressive deficits at 12 (n = 62 patients), 24 (n = 57 patients), and 60 (n = 50 patients) months. Grouping was correlated with clinical and treatment factors. Results: Risk factors (overall risk [OR], p value) associated with incomplete recovery included gender (male vs. female, OR = 3.97, p = 0.036) and gross tumor volume (GTV) (OR/ml = 1.23, p = 0.005) at 12 months, the number of resections (>1 vs. 1; OR = 23.7, p = 0.003) and patient age (OR/year = 0.77, p = 0.029) at 24 months, and cerebrospinal fluid (CSF) shunting (Yes vs. No; OR = 21.9, p = 0.001) and GTV volume (OR/ml = 1.18, p = 0.008) at 60 months. An increase in GTV correlated with an increase in the number of resections (p = 0.001) and CSF shunting (p = 0.035); the number of resections correlated with CSF shunting (p < 0.0001), and male patients were more likely to undergo multiple tumor resections (p = 0.003). Age correlated with brainstem volume (p < 0.0001). There were no differences in outcome based on the absolute or relative volume of the brainstem that received more than 54 Gy. Conclusions: Incomplete recovery of brainstem function after CRT for infratentorial ependymoma is related to surgical morbidity and the volume and the extent of tumor.

  15. Progressive Occlusion of Small Saccular Aneurysms Incompletely Occluded After Stent-Assisted Coil Embolization: Analysis of Related Factors and Long-Term Outcomes.

    PubMed

    Lim, Jeong Wook; Lee, Jeongjun; Cho, Young Dae

    2017-08-08

    Incompletely occluded aneurysms after coil embolization are subject to recanalization but occasionally progress to a totally occluded state. Deployed stents may actually promote thrombosis of coiled aneurysms. We evaluated outcomes of small aneurysms (<10 mm) wherein saccular filling with contrast medium was evident after stent-assisted coiling, assessing factors implicated in subsequent progressive occlusion. Between September 2012 and June 2016, a total of 463 intracranial aneurysms were treated by stent-assisted coil embolization. Of these, 132 small saccular aneurysms displayed saccular filling with contrast medium in the immediate aftermath of coiling. Progressive thrombosis was defined as complete aneurysmal occlusion at the 6-month follow-up point. Rates of progressive occlusion and factors predisposing to this were analyzed via binary logistic regression. In 101 (76.5%) of the 132 intracranial aneurysms, complete occlusion was observed in follow-up imaging studies at 6 months. Binary logistic regression analysis indicated that progressive occlusion was linked to smaller neck diameter (odds ratio [OR] = 1.533; p = 0.003), hyperlipidemia (OR = 3.329; p = 0.036) and stent type (p = 0.031). The LVIS stent was especially associated with progressive thrombosis, more so than the Neuroform (OR = 0.098; p = 0.008) or Enterprise (OR = 0.317; p = 0.098) stents. In 57 instances of progressive thrombosis followed for ≥12 months (mean 25.0 ± 10.7 months), 56 (98.2%) were stable, with minor recanalization noted once (1.8%) and no major recanalization. Aneurysms with smaller neck diameters, hyperlipidemic states, and LVIS stent deployment may be prone to progressive thrombosis if occlusion immediately after stent-assisted coil embolization is incomplete. In such instances, excellent long-term durability is anticipated.
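
    For readers unfamiliar with how the reported odds ratios arise, the following sketch fits a binary logistic regression with statsmodels on synthetic stand-in data and exponentiates the coefficients; the variable names and effect sizes are illustrative, not the study data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in data: 1 = complete occlusion at 6 months, 0 = residual filling.
    rng = np.random.default_rng(1)
    n = 300
    neck_diameter = rng.uniform(2.0, 6.0, n)          # mm (illustrative)
    hyperlipidemia = rng.integers(0, 2, n)            # 0/1
    logit = 1.5 - 0.4 * neck_diameter + 0.8 * hyperlipidemia
    occluded = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([neck_diameter, hyperlipidemia]))
    fit = sm.Logit(occluded, X).fit(disp=0)

    # Exponentiated coefficients are the odds ratios reported in studies like this one.
    odds_ratios = np.exp(fit.params[1:])
    print(dict(zip(["neck_diameter", "hyperlipidemia"], np.round(odds_ratios, 2))))
    ```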

  16. Vaccine coverage and determinants of incomplete vaccination in children aged 12-23 months in Dschang, West Region, Cameroon: a cross-sectional survey during a polio outbreak.

    PubMed

    Russo, Gianluca; Miglietta, Alessandro; Pezzotti, Patrizio; Biguioh, Rodrigue Mabvouna; Bouting Mayaka, Georges; Sobze, Martin Sanou; Stefanelli, Paola; Vullo, Vincenzo; Rezza, Giovanni

    2015-07-10

    Inadequate immunization coverage with increased risk of vaccine-preventable disease outbreaks remains a problem in Africa. Moreover, different factors contribute to incomplete vaccination status. This study was performed in Dschang (West Region, Cameroon), during the polio outbreak that occurred in October 2013, in order to estimate the immunization coverage among children aged 12-23 months, to identify determinants of incomplete vaccination status, and to assess the risk of poliovirus spread in the study population. A cross-sectional household survey was conducted in November-December 2013, using the WHO two-stage sampling design. An interviewer-administered questionnaire was used to obtain information from consenting parents of children aged 12-23 months. Vaccination coverage was assessed by vaccination card and parents' recall. Chi-square test and multilevel logistic regression model were used to identify the determinants of incomplete immunization status. Statistical significance was set at p < 0.05. Overall, 3248 households were visited and 502 children were enrolled. Complete immunization coverage was 85.9% and 84.5%, according to card plus parents' recall and card only, respectively. All children had received at least one routine vaccination, the OPV-3 (Oral Polio Vaccine) coverage was >90%, and 73.4% of children completed the recommended vaccinations before 1 year of age. In the final multilevel logistic regression model, factors significantly associated with incomplete immunization status were: retention of immunization card (AOR: 7.89; 95% CI: 1.08-57.37), lower mothers' utilization of antenatal care (ANC) services (AOR: 1.25; 95% CI: 1.07-63.75), being the ≥3rd born child in the family (AOR: 425.4; 95% CI: 9.6-18,808), younger mothers' age (AOR: 49.55; 95% CI: 1.59-1544), parents' negative attitude towards immunization (AOR: 20.2; 95% CI: 1.46-278.9), and poorer parents' exposure to information on vaccination (AOR: 28.07; 95% CI: 2.26-348.1). Longer distance from the vaccination centers was marginally significant (p = 0.05). Vaccination coverage was high; however, 1 out of 7 children was partially vaccinated, and 1 out of 4 did not complete the recommended vaccinations on time. In order to improve the immunization coverage, it is necessary to strengthen ANC services and to improve parents' information and attitude towards immunization, targeting younger parents and families living far away from vaccination centers, using appropriate communication strategies. Finally, the estimated OPV-3 coverage is reassuring in relation to the ongoing polio outbreak.

  17. A VLSI pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.

    1986-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field, GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
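
    The transform in the paper is defined over GF(q^n) and realized as a VLSI pipeline; the sketch below only illustrates the Good-Thomas prime-factor index mapping that such algorithms exploit, for a 30-point transform over the complex numbers, checked against NumPy's FFT.

    ```python
    import numpy as np

    def prime_factor_dft(x, n1, n2):
        """30-point DFT via the Good-Thomas prime-factor mapping (complex field).

        Requires gcd(n1, n2) == 1 and len(x) == n1 * n2. This only illustrates the
        index mapping that prime-factor algorithms exploit; the paper's transform
        is defined over the finite field GF(q^n) instead of the complex numbers.
        """
        n = n1 * n2
        # Input map: n = (i1*n2 + i2*n1) mod N (Good's mapping, no twiddle factors).
        y = np.empty((n1, n2), dtype=complex)
        for i1 in range(n1):
            for i2 in range(n2):
                y[i1, i2] = x[(i1 * n2 + i2 * n1) % n]
        # Twiddle-free 2D DFT: small DFTs along rows and columns.
        Y = np.fft.fft(np.fft.fft(y, axis=0), axis=1)
        # Output map via the Chinese remainder theorem: k ≡ k1 (mod n1), k ≡ k2 (mod n2).
        X = np.empty(n, dtype=complex)
        inv_n2 = pow(n2, -1, n1)
        inv_n1 = pow(n1, -1, n2)
        for k1 in range(n1):
            for k2 in range(n2):
                k = (k1 * n2 * inv_n2 + k2 * n1 * inv_n1) % n
                X[k] = Y[k1, k2]
        return X

    x = np.random.default_rng(0).standard_normal(30)
    print(np.allclose(prime_factor_dft(x, 5, 6), np.fft.fft(x)))  # True
    ```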

  18. Autosomal dominant juvenile recurrent parotitis.

    PubMed Central

    Reid, E; Douglas, F; Crow, Y; Hollman, A; Gibson, J

    1998-01-01

    Juvenile recurrent parotitis is a common cause of inflammatory salivary gland swelling in children. A variety of aetiological factors has been proposed for the condition. Here we present a family where four members had juvenile recurrent parotitis and where two other family members may have had an atypical form of the condition. The segregation pattern in the family is consistent with autosomal dominant inheritance with incomplete penetrance and this suggests that, at least in some cases, genetic factors may be implicated in juvenile recurrent parotitis. PMID:9610807

  19. A pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun

    1988-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field, GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.

  20. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors.

    PubMed

    Adewumi, Aderemi Oluyinka; Chetty, Sivashan

    2017-01-01

    The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic algorithm is investigated, called the enhanced Best Performance Algorithm (eBPA). eBPA's results are compared against two well-known local search metaheuristic algorithms, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.

  1. Some Families of the Incomplete H-Functions and the Incomplete \overline{H}-Functions and Associated Integral Transforms and Operators of Fractional Calculus with Applications

    NASA Astrophysics Data System (ADS)

    Srivastava, H. M.; Saxena, R. K.; Parmar, R. K.

    2018-01-01

    Our present investigation is inspired by the recent interesting extensions (by Srivastava et al. [35]) of a pair of the Mellin-Barnes type contour integral representations of their incomplete generalized hypergeometric functions ${}_p\gamma_q$ and ${}_p\Gamma_q$ by means of the incomplete gamma functions $\gamma(s, x)$ and $\Gamma(s, x)$. Here, in this sequel, we introduce a family of the relatively more general incomplete H-functions $\gamma_{p,q}^{m,n}(z)$ and $\Gamma_{p,q}^{m,n}(z)$ as well as such special cases as the incomplete Fox-Wright generalized hypergeometric functions ${}_p\Psi_q^{(\gamma)}[z]$ and ${}_p\Psi_q^{(\Gamma)}[z]$. The main object of this paper is to study and investigate several interesting properties of these incomplete H-functions, including (for example) decomposition and reduction formulas, derivative formulas, various integral transforms, computational representations, and so on. We apply some substantially general Riemann-Liouville and Weyl type fractional integral operators to each of these incomplete H-functions. We indicate the easily derivable extensions of the results presented here that hold for the corresponding incomplete $\overline{H}$-functions as well. Potential applications of many of these incomplete special functions involving (for example) probability theory are also indicated.
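
    The incomplete H-functions themselves are not available in standard numerical libraries, but the incomplete gamma functions γ(s, x) and Γ(s, x) that generate them are. Below is a small numerical check of the basic decomposition γ(s, x) + Γ(s, x) = Γ(s) using SciPy; the helper names are illustrative.

    ```python
    from scipy.special import gamma, gammainc, gammaincc

    def lower_incomplete_gamma(s, x):
        # SciPy's gammainc is the regularized lower incomplete gamma P(s, x) = γ(s, x)/Γ(s).
        return gammainc(s, x) * gamma(s)

    def upper_incomplete_gamma(s, x):
        # gammaincc is the regularized upper incomplete gamma Q(s, x) = Γ(s, x)/Γ(s).
        return gammaincc(s, x) * gamma(s)

    s, x = 2.5, 1.7
    g_low = lower_incomplete_gamma(s, x)
    g_up = upper_incomplete_gamma(s, x)
    # Decomposition underlying the incomplete special functions: γ(s, x) + Γ(s, x) = Γ(s).
    print(abs(g_low + g_up - gamma(s)) < 1e-12)  # True
    ```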

  2. Influence of tumor cell proliferation and sex-hormone receptors on effectiveness of radiation therapy for dogs with incompletely resected meningiomas.

    PubMed

    Théon, A P; Lecouteur, R A; Carr, E A; Griffey, S M

    2000-03-01

    To assess the influence of tumor cell proliferation and sex-hormone receptors on the efficacy of megavoltage irradiation for dogs with incompletely resected meningiomas. Longitudinal clinical trial. 20 dogs with incompletely resected intracranial meningiomas. Dogs were treated with 48 Gy of radiation administered 3 times per week on an alternate-day schedule of 4 Gy/fraction for 4 weeks, using bilateral parallel-opposed fields. Tumor proliferative fraction measured by immunohistochemical detection of proliferating cell nuclear antigen (PFPCNA index) ranged from 10 to 42% (median, 24%). Progesterone receptor immunoreactivity was detected in 70% of tumors. Estrogen receptor immunoreactivity was not detected. An inverse correlation was found between detection of progesterone receptors and the PFPCNA index. The overall 2-year progression-free survival (PFS) rate was 68%. The only prognostic factor that significantly affected PFS rate was the PFPCNA index. The 2-year PFS was 42% for tumors with a high PFPCNA index (value ≥24%) and 91% for tumors with a low PFPCNA index (value <24%). Tumors with a high PFPCNA index were 9.1 times as likely to recur as were tumors with a low PFPCNA index. This study confirms the value of irradiation for dogs with incompletely resected meningiomas. Prognostic value of the PFPCNA index suggests that duration of treatment and interval from surgery to start of irradiation may affect outcome. Loss of progesterone receptors in some tumors may be responsible for an increase in PFPCNA index and may indirectly affect prognosis after radiation therapy.

  3. Structural adjustment for accurate conditioning in large-scale subsurface systems

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2017-03-01

    Most of the current subsurface simulation approaches consider a priority list for honoring the well and any other auxiliary data, and eventually adopt a middle ground between the quality of the model and conditioning it to hard data. However, as the number of datasets increases, such methods often produce undesirable features in the subsurface model. Due to their high flexibility, subsurface modeling based on training images (TIs) is becoming popular. Providing comprehensive TIs remains, however, an outstanding problem. In addition, identifying a pattern similar to those in the TI that honors the well and other conditioning data is often difficult. Moreover, the current subsurface modeling approaches do not account for small perturbations that may occur in a subsurface system. Such perturbations are active in most depositional systems. In this paper, a new methodology is presented that is based on an irregular gridding scheme that accounts for incomplete TIs and minor offsets. Use of the methodology enables one to use a small or incomplete TI and adaptively change the patterns in the simulation grid in order to simultaneously honor the well data and take into account the effect of the local offsets. Furthermore, the proposed method was applied to various complex process-based models, and their structures were deformed to match the conditioning point data. The accuracy and robustness of the proposed algorithm are successfully demonstrated by applying it to models of several complex examples.

  4. In-lab versus at-home activity recognition in ambulatory subjects with incomplete spinal cord injury.

    PubMed

    Albert, Mark V; Azeze, Yohannes; Courtois, Michael; Jayaraman, Arun

    2017-02-06

    Although commercially available activity trackers can aid in tracking therapy and recovery of patients, most devices perform poorly for patients with irregular movement patterns. Standard machine learning techniques can be applied to recorded accelerometer signals in order to classify the activities of ambulatory subjects with incomplete spinal cord injury in a way that is specific to this population and the location of the recording, at home or in the clinic. Subjects were instructed to perform a standardized set of movements while wearing a waist-worn accelerometer in the clinic and at home. Activities included lying, sitting, standing, walking, wheeling, and stair climbing. Multiple classifiers and validation methods were used to quantify the ability of the machine learning techniques to distinguish the activities recorded in the lab or at home. In the lab, classifiers trained and tested using within-subject cross-validation provided an accuracy of 91.6%. When the classifier was trained on data collected in the lab but tested on at-home data, the accuracy fell to 54.6%, indicating distinct movement patterns between locations. However, the accuracy of the at-home classifications, when training the classifier with at-home data, improved to 85.9%. Individuals with unique movement patterns can benefit from using tailored activity recognition algorithms easily implemented using modern machine learning methods on collected movement data.

  5. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1986-01-01

    In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices). This algorithm, which is based on a no-fill strategy, achieves O(N/p) length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
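
    A minimal serial sketch of the underlying ICCG idea, using SciPy: conjugate gradient preconditioned by an incomplete factorization on a 2D Poisson model problem. SciPy's spilu computes an incomplete LU rather than an incomplete Cholesky, and the multi-color ordering and vectorization issues that are the point of the thesis are not addressed here.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Model problem: 2D Poisson matrix (symmetric positive definite), a typical ICCG test.
    n = 50
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
    b = np.ones(A.shape[0])

    # Incomplete factorization preconditioner. spilu computes an incomplete LU;
    # it stands in here for the incomplete Cholesky factorization of the thesis.
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)

    x, info = spla.cg(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # info == 0 means convergence
    ```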

  6. Estimating hybridization in the presence of coalescence using phylogenetic intraspecific sampling.

    PubMed

    Gerard, David; Gibbs, H Lisle; Kubatko, Laura

    2011-10-06

    A well-known characteristic of multi-locus data is that each locus has its own phylogenetic history which may differ substantially from the overall phylogenetic history of the species. Although the possibility that this arises through incomplete lineage sorting is often incorporated in models for the species-level phylogeny, it is much less common for hybridization to also be formally included in such models. We have modified the evolutionary model of Meng and Kubatko (2009) to incorporate intraspecific sampling of multiple individuals for estimation of speciation times and times of hybridization events for testing for hybridization in the presence of incomplete lineage sorting. We have also utilized a more efficient algorithm for obtaining our estimates. Using simulations, we demonstrate that our approach performs well under conditions motivated by an empirical data set for Sistrurus rattlesnakes where putative hybridization has occurred. We further demonstrate that the method is able to accurately detect the signature of hybridization in the data, while this signal may be obscured when other species-tree inference methods that ignore hybridization are used. Our approach is shown to be powerful in detecting hybridization when it is present. When applied to the Sistrurus data, we find no evidence of hybridization; instead, it appears that putative hybrid snakes in Missouri are most likely pure S. catenatus tergeminus in origin, which has significant conservation implications.

  7. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA and Savitzky & Golay (1964) detrending algorithms, and the Box Least Squares phase-folding algorithm (Kovács et al. 2002), to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying these two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor of 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004).
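
    A minimal sketch of the Savitzky-Golay step alone (not TFA or BLS): a smooth fit is subtracted from a synthetic light curve with a slow instrumental drift, reducing the photometric RMS. The cadence, window length, and polynomial order are illustrative choices.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)                            # days (illustrative cadence)
    trend = 0.02 * np.sin(2 * np.pi * t / 7) + 0.01 * t     # slow systematic drift
    noise = 0.003 * rng.standard_normal(t.size)
    flux = 1.0 + trend + noise

    # Fit a smooth trend with a long Savitzky-Golay window and subtract it.
    smooth = savgol_filter(flux, window_length=301, polyorder=3)
    detrended = flux - smooth + 1.0

    print("RMS before:", np.std(flux), "RMS after:", np.std(detrended))
    ```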

  8. Intelligent diagnosis of jaundice with dynamic uncertain causality graph model.

    PubMed

    Hao, Shao-Rui; Geng, Shi-Chao; Fan, Lin-Xiao; Chen, Jia-Jia; Zhang, Qin; Li, Lan-Juan

    2017-05-01

    Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A "chaining" inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure.

  9. Intelligent diagnosis of jaundice with dynamic uncertain causality graph model*

    PubMed Central

    Hao, Shao-rui; Geng, Shi-chao; Fan, Lin-xiao; Chen, Jia-jia; Zhang, Qin; Li, Lan-juan

    2017-01-01

    Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A “chaining” inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure. PMID:28471111

  10. Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient

    PubMed Central

    Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming

    2016-01-01

    This paper proposes an improved artificial bee colony algorithm named the Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides the expression for the global factor weights. Experiments on benchmark functions show that the algorithm can greatly improve the convergence speed. We derive the gas diffusion concentration based on CFD theory and then simulate the gas diffusion model, including the influence of buildings, based on the algorithm. Simulations verified the effectiveness of the WGABC algorithm in improving the convergence speed of the optimal deployment scheme for gas sensors. Finally, it is verified that the optimal deployment method based on the WGABC algorithm can greatly improve the monitoring efficiency of sensors as compared with conventional deployment methods. PMID:27322262
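
    The abstract does not give the exact expression for the global factor weights, so the sketch below implements a generic global-best-weighted candidate update of the kind used in gbest-guided ABC variants; the weight w and the symbol names are assumptions, not the paper's formulation.

    ```python
    import numpy as np

    def weighted_global_update(x, k, gbest, w=0.8, rng=np.random.default_rng()):
        """Generate a candidate food source for employed/onlooker bees.

        x     : (n_bees, dim) current food sources
        k     : index of the bee being updated
        gbest : best solution found so far
        w     : weight on the global-best term (assumed form; the paper's exact
                expression for the global factor weights may differ)
        """
        dim = x.shape[1]
        j = rng.integers(dim)                       # perturb one randomly chosen dimension
        partner = rng.choice([i for i in range(x.shape[0]) if i != k])
        phi = rng.uniform(-1.0, 1.0)
        psi = rng.uniform(0.0, 1.5)
        v = x[k].copy()
        v[j] = x[k, j] + phi * (x[k, j] - x[partner, j]) + psi * w * (gbest[j] - x[k, j])
        return v

    # Example on the sphere function: the candidate replaces x[0] only if it improves fitness.
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, size=(10, 4))
    fitness = (x ** 2).sum(axis=1)
    gbest = x[np.argmin(fitness)]
    candidate = weighted_global_update(x, k=0, gbest=gbest, rng=rng)
    if (candidate ** 2).sum() < fitness[0]:
        x[0] = candidate
    ```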

  11. A Controlled Study of the Effectiveness of an Adaptive Closed-Loop Algorithm to Minimize Corticosteroid-Induced Stress Hyperglycemia in Type 1 Diabetes

    PubMed Central

    Youssef, Joseph El; Castle, Jessica R; Branigan, Deborah L; Massoud, Ryan G; Breen, Matthew E; Jacobs, Peter G; Bequette, B Wayne; Ward, W Kenneth

    2011-01-01

    To be effective in type 1 diabetes, algorithms must be able to limit hyperglycemic excursions resulting from medical and emotional stress. We tested an algorithm that estimates insulin sensitivity at regular intervals and continually adjusts gain factors of a fading memory proportional-derivative (FMPD) algorithm. In order to assess whether the algorithm could appropriately adapt and limit the degree of hyperglycemia, we administered oral hydrocortisone repeatedly to create insulin resistance. We compared this indirect adaptive proportional-derivative (APD) algorithm to the FMPD algorithm, which used fixed gain parameters. Each subject with type 1 diabetes (n = 14) was studied on two occasions, each for 33 h. The APD algorithm consistently identified a fall in insulin sensitivity after hydrocortisone. The gain factors and insulin infusion rates were appropriately increased, leading to satisfactory glycemic control after adaptation (premeal glucose on day 2, 148 ± 6 mg/dl). After sufficient time was allowed for adaptation, the late postprandial glucose increment was significantly lower than when measured shortly after the onset of the steroid effect. In addition, during the controlled comparison, glycemia was significantly lower with the APD algorithm than with the FMPD algorithm. No increase in hypoglycemic frequency was found in the APD-only arm. An afferent system of duplicate amperometric sensors demonstrated a high degree of accuracy; the mean absolute relative difference of the sensor used to control the algorithm was 9.6 ± 0.5%. We conclude that an adaptive algorithm that frequently estimates insulin sensitivity and adjusts gain factors is capable of minimizing corticosteroid-induced stress hyperglycemia. PMID:22226248
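
    Below is a sketch of a fading-memory proportional-derivative rule of the general kind described above: the proportional and derivative terms are exponentially weighted histories of the glucose error, and kp, kd, and the basal rate are the gain factors an adaptive layer would rescale when estimated insulin sensitivity falls. All constants are illustrative, not the clinical algorithm's values.

    ```python
    import numpy as np

    def fmpd_insulin_rate(glucose_history, dt_min, target=115.0,
                          kp=0.005, kd=0.15, basal=0.8, lam=0.98):
        """Fading-memory PD insulin rate (units/h) from a recent glucose history (mg/dl).

        kp, kd and basal are the gain factors an adaptive (APD) layer would increase
        when estimated insulin sensitivity falls; the numerical values are illustrative.
        """
        g = np.asarray(glucose_history, dtype=float)
        errors = g - target
        derivs = np.diff(g) / dt_min                       # mg/dl per minute
        # Exponentially fading weights: recent samples count the most.
        w_e = lam ** np.arange(len(errors) - 1, -1, -1)
        w_d = lam ** np.arange(len(derivs) - 1, -1, -1)
        prop = np.sum(w_e * errors) / np.sum(w_e)
        deriv = np.sum(w_d * derivs) / np.sum(w_d)
        return max(0.0, basal + kp * prop + kd * deriv)

    # Rising glucose after a steroid dose -> the rule increases the infusion rate.
    print(fmpd_insulin_rate([140, 150, 165, 185, 210], dt_min=5))
    ```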

  12. Failsafe modes in incomplete minority game

    NASA Astrophysics Data System (ADS)

    Yao, Xiaobo; Wan, Shaolong; Chen, Wen

    2009-09-01

    We make a failsafe extension to the incomplete minority game model and give a brief analysis of how incompleteness affects system efficiency. Simulations show that limited incompleteness in strategies can improve the system efficiency. Among three failsafe modes, the “Back-to-Best” mode brings the most significant improvement and maintains the system efficiency over a long range of incompleteness. A simple analytic formula shows a trend that matches the simulation results. The IMMG model is used to study the effect of the distribution of incompleteness, and we find that there is one junction point in each series of curves, at which the system efficiency is not influenced by the distribution of incompleteness. When the mean incompleteness lies above this junction point, concentrating the incompleteness weakens the effect; on the other side of the junction point, concentration is helpful. When p_I is close to zero, agents using incomplete strategies have on average better profits than those using standard strategies, and the “Back-to-Best” agents have a wider range of p_I in which they win.

  13. Factors Associated with Incomplete Reporting of HIV and AIDS by Uganda's Surveillance System

    ERIC Educational Resources Information Center

    Akankunda, Denis B.

    2014-01-01

    Background: Over the last 20 years, Uganda has piloted and implemented various management information systems (MIS) for better surveillance of HIV/AIDS. With support from the United States Government, Uganda introduced the District Health Information Software 2 (DHIS2) in 2012. However, districts have yet to fully adapt to this system given a…

  14. Regional data to support biodiversity assessments: terrestrial vertebrate and butterfly data from the Southwest

    Treesearch

    Darren J. Bender; Curtis H. Flather; Kenneth R. Wilson; Gordon C. Reese

    2005-01-01

    Spatially explicit data on the location of species across broad geographic areas greatly facilitate effective conservation planning on lands managed for multiple uses. The importance of these data notwithstanding, our knowledge about the geography of biodiversity is remarkably incomplete. An important factor contributing to our ignorance is that much of the...

  15. A Cell Size Theory of Aging.

    PubMed

    Patra, Krushna C; Bardeesy, Nabeel

    2018-06-18

    The factors determining longevity of different animals are incompletely defined. In this issue of Developmental Cell, Anzi et al. (2018) show that distinct strategies for postnatal pancreatic growth operate in different mammals and correlate with lifespan, with short-lived species exhibiting increasing pancreatic cell size and long-lived animals increasing cell number. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. A Cross-Site Comparison of Factors Influencing Soil Nitrification Rates in Northeastern USA Forested Watersheds

    Treesearch

    Donald S. Ross; Beverley C. Wemple; Austin E. Jamison; Guinevere Fredriksen; James B. Shanley; Gregory B. Lawrence; Scott W. Bailey; John L. Campbell

    2009-01-01

    Elevated N deposition is continuing on many forested landscapes around the world and our understanding of ecosystem response is incomplete. Soil processes, especially nitrification, are critical. Many studies of soil N transformations have focused on identifying relationships within a single watershed but these results are often not transferable. We studied 10 small...

  17. Maturational and Non-Maturational Factors in Heritage Language Acquisition

    ERIC Educational Resources Information Center

    Moon, Ji Hye

    2012-01-01

    This dissertation aims to understand the maturational and non-maturational aspects of early bilingualism and language attrition in heritage speakers who have acquired their L1 incompletely in childhood. The study highlights the influential role of age and input dynamics in early L1 development, where the timing of reduction in L1 input and the…

  18. Mitigation of adverse interactions in pairs of clinical practice guidelines using constraint logic programming.

    PubMed

    Wilk, Szymon; Michalowski, Wojtek; Michalowski, Martin; Farion, Ken; Hing, Marisela Mainegra; Mohapatra, Subhra

    2013-04-01

    We propose a new method to mitigate (identify and address) adverse interactions (drug-drug or drug-disease) that occur when a patient with comorbid diseases is managed according to two concurrently applied clinical practice guidelines (CPGs). A lack of methods to facilitate the concurrent application of CPGs severely limits their use in clinical practice and the development of such methods is one of the grand challenges for clinical decision support. The proposed method responds to this challenge. We introduce and formally define logical models of CPGs and other related concepts, and develop the mitigation algorithm that operates on these concepts. In the algorithm we combine domain knowledge encoded as interaction and revision operators using the constraint logic programming (CLP) paradigm. The operators characterize adverse interactions and describe revisions to logical models required to address these interactions, while CLP allows us to efficiently solve the logical models - a solution represents a feasible therapy that may be safely applied to a patient. The mitigation algorithm accepts two CPGs and available (likely incomplete) patient information. It reports whether mitigation has been successful or not, and on success it gives a feasible therapy and points at identified interactions (if any) together with the revisions that address them. Thus, we consider the mitigation algorithm as an alerting tool to support a physician in the concurrent application of CPGs that can be implemented as a component of a clinical decision support system. We illustrate our method in the context of two clinical scenarios involving a patient with duodenal ulcer who experiences an episode of transient ischemic attack. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function.

    PubMed

    Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-08-15

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume: intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
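
    A sketch of the global-fitting idea on synthetic data: all diastolic samples are fit at once to a pressure model that sums a mono-exponentially decaying active (relaxation) term and a passive exponential stiffness term, using scipy.optimize.least_squares. The model form and parameter names are simplified assumptions, not the authors' full formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def diastolic_pressure(params, t, v):
        """Active relaxation + passive stiffness model of diastolic LV pressure (mmHg).

        P(t, V) = P_active0 * exp(-t / tau) + S * (exp(k * (V - V0)) - 1)
        A simplified stand-in that couples relaxation and stiffness in a single
        global fit instead of fitting the diastolic phases separately.
        """
        p_act0, tau, s, k, v0 = params
        return p_act0 * np.exp(-t / tau) + s * (np.exp(k * (v - v0)) - 1.0)

    # Synthetic diastolic samples (time in ms from mitral valve opening, volume in ml).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 500, 80)
    v = 60 + 60 * (1 - np.exp(-t / 150))
    true = (18.0, 45.0, 1.2, 0.03, 60.0)
    p = diastolic_pressure(true, t, v) + 0.3 * rng.standard_normal(t.size)

    fit = least_squares(lambda q: diastolic_pressure(q, t, v) - p,
                        x0=(10.0, 60.0, 1.0, 0.02, 55.0))
    print(np.round(fit.x, 2))  # recovered (P_active0, tau, S, k, V0)
    ```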

  20. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    PubMed

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for downstream processing of the extracted brain, such as tissue segmentation, related statistical measures, or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly in T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, becoming more important when lesions such as cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
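
    A minimal fuzzy c-means sketch on one-dimensional intensities, with no bias-field correction and no spatial regularization, to show the alternating membership/centroid updates that the paper's improved implementation builds on.

    ```python
    import numpy as np

    def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means on a 1D intensity vector.

        Returns (centers, memberships). No bias-field correction or neighbourhood
        term is included; this only illustrates the core alternating updates.
        """
        rng = np.random.default_rng(seed)
        u = rng.random((n_clusters, x.size))
        u /= u.sum(axis=0)                                   # memberships sum to 1 per voxel
        for _ in range(n_iter):
            um = u ** m
            centers = um @ x / um.sum(axis=1)                # fuzzily weighted centroids
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1)), axis=1)
        return centers, u

    # Three synthetic "tissue" intensity populations.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(mu, 5, 500) for mu in (30, 90, 150)])
    centers, u = fuzzy_c_means(x)
    print(np.sort(np.round(centers, 1)))   # approximately [30, 90, 150]
    labels = u.argmax(axis=0)              # hard segmentation from the soft memberships
    ```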

  1. Network-Based Disease Module Discovery by a Novel Seed Connector Algorithm with Pathobiological Implications.

    PubMed

    Wang, Rui-Sheng; Loscalzo, Joseph

    2018-05-20

    Understanding the genetic basis of complex diseases is challenging. Prior work shows that disease-related proteins do not typically function in isolation. Rather, they often interact with each other to form a network module that underlies dysfunctional mechanistic pathways. Identifying such disease modules will provide insights into a systems-level understanding of molecular mechanisms of diseases. Owing to the incompleteness of our knowledge of disease proteins and limited information on the biological mediators of pathobiological processes, the key proteins (seed proteins) for many diseases appear scattered over the human protein-protein interactome and form a few small branches, rather than coherent network modules. In this paper, we develop a network-based algorithm, called the Seed Connector algorithm (SCA), to pinpoint disease modules by adding as few additional linking proteins (seed connectors) to the seed protein pool as possible. Such seed connectors are hidden disease module elements that are critical for interpreting the functional context of disease proteins. The SCA aims to connect seed disease proteins so that disease mechanisms and pathways can be decoded based on predicted coherent network modules. We validate the algorithm using a large corpus of 70 complex diseases and binding targets of over 200 drugs, and demonstrate the biological relevance of the seed connectors. Lastly, as a specific proof of concept, we apply the SCA to a set of seed proteins for coronary artery disease derived from a meta-analysis of large-scale genome-wide association studies and obtain a coronary artery disease module enriched with important disease-related signaling pathways and drug targets not previously recognized. Copyright © 2018 Elsevier Ltd. All rights reserved.
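
    The following is not the published SCA but a greedy NetworkX stand-in for its stated objective: add as few connector nodes as possible so that the seed proteins collapse into a single connected module.

    ```python
    import networkx as nx

    def greedy_seed_connectors(G, seeds, max_connectors=50):
        """Greedily add interactome nodes that best merge the seed-induced components.

        A simple stand-in for the Seed Connector algorithm's objective (connect the
        seeds with as few added 'connector' nodes as possible); the published SCA
        may use a different selection rule.
        """
        module = set(seeds) & set(G.nodes)
        connectors = []
        for _ in range(max_connectors):
            n_comp = nx.number_connected_components(G.subgraph(module))
            if n_comp <= 1:
                break
            # Candidate connectors: interactome neighbours of the current module.
            candidates = {nb for v in module for nb in G.neighbors(v)} - module
            best, best_n = None, n_comp
            for c in candidates:
                n_c = nx.number_connected_components(G.subgraph(module | {c}))
                if n_c < best_n:
                    best, best_n = c, n_c
            if best is None:          # no single node reduces the component count
                break
            module.add(best)
            connectors.append(best)
        return module, connectors

    # Toy interactome: two seed clusters joined only through node "x".
    G = nx.Graph([("a", "b"), ("b", "x"), ("x", "c"), ("c", "d")])
    module, connectors = greedy_seed_connectors(G, seeds={"a", "b", "c", "d"})
    print(connectors)  # ['x']
    ```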

  2. Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E

    NASA Technical Reports Server (NTRS)

    Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie

    2001-01-01

    In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test. That is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects in this problem are the large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled. That is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. Then, the objective becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time, so one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
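
    A sketch of option 2, Hamming-distance diagnosis with incomplete observations: missing test results are ignored and the observed bits are matched to the closest row of the test dictionary. The toy dictionary and the use of -1 for missing results are illustrative choices, not the paper's data layout.

    ```python
    import numpy as np

    def hamming_diagnose(dictionary, observed):
        """Match an incomplete, possibly corrupted test-result vector to a fault signature.

        dictionary : (n_faults, n_tests) 0/1 matrix, row i = expected results under fault i
        observed   : length n_tests array with 0, 1, or -1 for tests not yet available
        Returns the index of the closest fault signature, counting mismatches only
        over the tests that were actually observed.
        """
        observed = np.asarray(observed)
        mask = observed >= 0                                  # tests with results
        mismatches = (dictionary[:, mask] != observed[mask]).sum(axis=1)
        return int(np.argmin(mismatches)), mismatches

    # Toy dictionary for 3 faults and 6 tests; only 4 of the 6 results have arrived,
    # and one of them has been corrupted by noise.
    D = np.array([[1, 1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 1],
                  [1, 0, 0, 1, 1, 1]])
    obs = np.array([1, 1, -1, 0, 0, -1])   # test 5 flipped by noise, tests 3 and 6 missing
    fault, dist = hamming_diagnose(D, obs)
    print(fault, dist)   # 0 [1 3 3]
    ```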

  3. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

    In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  4. DO THE RADIOLOGICAL CRITERIA WITH THE USE OF RISK FACTORS IMPACT THE FORECASTING OF ABDOMINAL NEUROBLASTIC TUMOR RESECTION IN CHILDREN?

    PubMed Central

    PENAZZI, Ana Cláudia Soares; TOSTES, Vivian Siqueira; DUARTE, Alexandre Alberto Barros; LEDERMAN, Henrique Manoel; CARAN, Eliana Maria Monteiro; ABIB, Simone de Campos Vieira

    2017-01-01

    ABSTRACT Background: The treatment of neuroblastoma depends on accurate staging, which is performed postoperatively and depends on the surgeon's expertise. The use of image-defined risk factors at diagnosis appears to be predictive of resectability, complications, and homogeneity in staging. Aim: To evaluate the traditional resectability criteria against the image-defined risk factors for resectability, through radiological imaging, at two time points: at diagnosis and in the pre-surgical phase. Resectability, surgical complications, and relapse rate were analyzed. Methods: Retrospective study of 27 children with abdominal and pelvic neuroblastoma, stages 3 and 4, with tomography and/or resonance at diagnosis and pre-surgery, identifying the presence of risk factors. Results: The mean age of the children was 2.5 years at diagnosis; 55.6% were older than 18 months, 51.9% were girls, and 66.7% were in stage 4. The two methods (INSS and IDRFs) agreed on the resectability of the tumor at both evaluation points, at diagnosis (p=0.007) and post-chemotherapy (p=0.019); accordingly, all patients deemed resectable by IDRFs post-chemotherapy had complete resections, whereas 87.5% of those deemed unresectable had incomplete resections. There was remission in 77.8%, 18.5% relapsed, and 33.3% died. Conclusions: Resectability assessments by the two methods were similar both at diagnosis and after preoperative chemotherapy; preoperative chemotherapy increased resectability and decreased the number of risk factors, and the presence of at least one IDRF was associated with incomplete resections and surgical complications; relapses were negligible. PMID:29257841

  5. A new theory of development: the generation of complexity in ontogenesis.

    PubMed

    Barbieri, Marcello

    2016-03-13

    Today there is a very wide consensus on the idea that embryonic development is the result of a genetic programme and of epigenetic processes. Many models have been proposed in this theoretical framework to account for the various aspects of development, and virtually all of them have one thing in common: they do not acknowledge the presence of organic codes (codes between organic molecules) in ontogenesis. Here it is argued instead that embryonic development is a convergent increase in complexity that necessarily requires organic codes and organic memories, and a few examples of such codes are described. This is the code theory of development, a theory that was originally inspired by an algorithm capable of reconstructing structures from incomplete information, an algorithm that is briefly summarized here because it makes intuitively clear how a convergent increase in complexity can be achieved. The main thesis of the new theory is that the presence of organic codes in ontogenesis is not only a theoretical necessity but, first and foremost, an idea that can be tested and that has already been found to be in agreement with the evidence. © 2016 The Author(s).

  6. Learning in the model space for cognitive fault diagnosis.

    PubMed

    Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin

    2014-01-01

    The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
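
    A compact illustration of learning in the model space: AR(2) coefficients are fitted on sliding windows of a signal, and a one-class SVM trained on models from healthy segments flags windows whose fitted model drifts away. The window length, model order, and SVM settings are illustrative and simpler than the paper's formulation.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    def ar2_coefficients(segment):
        """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2]. Returns (a1, a2)."""
        y = segment[2:]
        X = np.column_stack([segment[1:-1], segment[:-2]])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def window_models(signal, win=100, step=50):
        # Map each sliding window of the signal to a point in the model space.
        return np.array([ar2_coefficients(signal[i:i + win])
                         for i in range(0, len(signal) - win + 1, step)])

    rng = np.random.default_rng(0)
    healthy = np.zeros(3000)
    for t in range(2, 3000):                       # healthy dynamics: a stable AR(2) process
        healthy[t] = 1.5 * healthy[t - 1] - 0.7 * healthy[t - 2] + rng.standard_normal()
    faulty = healthy.copy()
    for t in range(2000, 3000):                    # fault: the dynamics change
        faulty[t] = 0.5 * faulty[t - 1] + 0.2 * faulty[t - 2] + rng.standard_normal()

    clf = OneClassSVM(nu=0.05, gamma="scale").fit(window_models(healthy))
    flags = clf.predict(window_models(faulty))     # +1 healthy-looking model, -1 faulty model
    print(flags)
    ```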

  7. Exemplar-based inpainting as a solution to the missing wedge problem in electron tomography.

    PubMed

    Trampert, Patrick; Wang, Wu; Chen, Delei; Ravelli, Raimond B G; Dahmen, Tim; Peters, Peter J; Kübel, Christian; Slusallek, Philipp

    2018-04-21

    A new method for dealing with incomplete projection sets in electron tomography is proposed. The approach is inspired by exemplar-based inpainting techniques in image processing and heuristically generates data for missing projection directions. The method has been extended to work on three-dimensional data. In general, electron tomography reconstructions suffer from elongation artifacts along the beam direction. These artifacts can be seen in the corresponding Fourier domain as a missing wedge. The new method synthetically generates projections for these missing directions with the help of a dictionary-based approach that is able to convey both structure and texture at the same time. It constitutes a preprocessing step that can be combined with any tomographic reconstruction algorithm. The new algorithm was applied to phantom data, to a real electron tomography data set taken from a catalyst, as well as to a real dataset containing solely colloidal gold particles. Visually, the synthetic projections, reconstructions, and corresponding Fourier power spectra showed a decrease of the typical missing wedge artifacts. Quantitatively, the inpainting method is capable of reducing missing-wedge artifacts and improves tomogram quality with respect to full width at half maximum measurements. Copyright © 2018. Published by Elsevier B.V.

  8. Minimally invasive myotomy for the treatment of esophageal achalasia: evolution of the surgical procedure and the therapeutic algorithm.

    PubMed

    Bresadola, Vittorio; Feo, Carlo V

    2012-04-01

    Achalasia is a rare disease of the esophagus, characterized by the absence of peristalsis in the esophageal body and incomplete relaxation of the lower esophageal sphincter, which may be hypertensive. The cause of this disease is unknown; therefore, the aim of the therapy is to improve esophageal emptying by eliminating the outflow resistance caused by the lower esophageal sphincter. This goal can be accomplished either by pneumatic dilatation or surgical myotomy, which are the only long-term effective therapies for achalasia. Historically, pneumatic dilatation was preferred over surgical myotomy because of the morbidity associated with a thoracotomy or a laparotomy. However, with the development of minimally invasive techniques, the surgical approach has gained widespread acceptance among patients and gastroenterologists and, consequently, the role of surgery has changed. The aim of this study was to review the changes occurred in the surgical treatment of achalasia over the last 2 decades; specifically, the development of minimally invasive techniques with the evolution from a thoracoscopic approach without an antireflux procedure to a laparoscopic myotomy with a partial fundoplication, the changes in the length of the myotomy, and the modification of the therapeutic algorithm.

  9. Prediction and causal reasoning in planning

    NASA Technical Reports Server (NTRS)

    Dean, T.; Boddy, M.

    1987-01-01

    Nonlinear planners are often touted as having an efficiency advantage over linear planners. The reason usually given is that nonlinear planners, unlike their linear counterparts, are not forced to make arbitrary commitments to the order in which actions are to be performed. This ability to delay commitment enables nonlinear planners to solve certain problems with far less effort than would be required of linear planners. Here, it is argued that this advantage is bought with a significant reduction in the ability of a nonlinear planner to accurately predict the consequences of actions. Unfortunately, the general problem of predicting the consequences of a partially ordered set of actions is intractable. In gaining the predictive power of linear planners, nonlinear planners sacrifice their efficiency advantage. There are, however, other advantages to nonlinear planning (e.g., the ability to reason about partial orders and incomplete information) that make it well worth the effort needed to extend nonlinear methods. A framework is supplied for causal inference that supports reasoning about partially ordered events and actions whose effects depend upon the context in which they are executed. As an alternative to a complete but potentially exponential-time algorithm, researchers provide a provably sound polynomial-time algorithm for predicting the consequences of partially ordered events.

  10. Fast and objective detection and analysis of structures in downhole images

    NASA Astrophysics Data System (ADS)

    Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick

    2017-09-01

    Downhole acoustic and optical televiewer images, and formation microimager (FMI) logs, are important datasets for structural and geotechnical analyses in the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour-intensive and hence expensive task and, as such, is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, improve efficiency, and assist, rather than replace, the geologist. The main contributions include a new image quality assessment technique for highlighting image areas most suited to automated structure detection and for detecting boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided to perform rapid analysis and further detection of structures, e.g., limited to specific orientations.
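
    The detector and confidence scoring of the paper are not reproduced here; the sketch below shows only the core geometric step, fitting the sinusoid that a dipping plane traces in an unrolled borehole image by linear least squares on offset, amplitude, and phase.

    ```python
    import numpy as np

    def fit_sinusoid(azimuth_rad, depth):
        """Fit depth(theta) = c + a*cos(theta) + b*sin(theta) by linear least squares.

        In an unrolled televiewer image a dipping plane traces exactly this curve;
        the amplitude and phase of the fit relate to the structure's dip magnitude
        and dip direction. Detection, quality scoring, and incomplete-sinusoid
        handling from the paper are not reproduced here.
        """
        A = np.column_stack([np.ones_like(azimuth_rad),
                             np.cos(azimuth_rad), np.sin(azimuth_rad)])
        (c, a, b), *_ = np.linalg.lstsq(A, depth, rcond=None)
        amplitude = np.hypot(a, b)
        phase = np.arctan2(b, a)
        return c, amplitude, phase

    # Incomplete azimuthal coverage, as is common for partially imaged sinusoids.
    theta = np.linspace(0, 1.5 * np.pi, 40)
    rng = np.random.default_rng(0)
    true_c, true_R, true_phi = 120.0, 0.4, 1.0       # depth offset (m), amplitude, phase
    depth = true_c + true_R * np.cos(theta - true_phi) + 0.01 * rng.standard_normal(theta.size)
    print(np.round(fit_sinusoid(theta, depth), 3))   # approximately (120.0, 0.4, 1.0)
    ```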

  11. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    NASA Technical Reports Server (NTRS)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation does not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  12. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits.

    PubMed

    Ginde, Adit A; Blanc, Phillip G; Lieberman, Rebecca M; Camargo, Carlos A

    2008-04-01

    Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose of 3.9 mmol/l or an emergency physician-charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86-92) for detecting hypoglycemia visits. The proposed algorithm improves on prior strategies to identify hypoglycemia visits in administrative data sets and will enhance the ability to study the epidemiology and design interventions for this important complication of diabetes care.
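
    As a rough illustration of this kind of claims-based case definition, the sketch below flags candidate hypoglycemia visits from their full ICD-9-CM code lists and computes a positive predictive value against chart-review truth. The candidate code set is taken from the abstract; the co-diagnosis exclusion list, data layout and example visits are hypothetical placeholders, not the validated algorithm itself.

    ```python
    # Hedged sketch: flag candidate hypoglycemia visits from ICD-9-CM code lists
    # and compute positive predictive value (PPV) against chart review.
    CANDIDATE_CODES = {"250.3", "250.8", "251.0", "251.1", "251.2",
                       "270.3", "775.0", "775.6", "962.3"}
    EXCLUDE_WITH_250_8 = {"250.1", "250.2"}   # hypothetical co-diagnosis exclusions

    def is_candidate(visit_codes):
        """visit_codes: all ICD-9-CM codes on the visit (not only the first-listed)."""
        codes = set(visit_codes)
        hits = codes & CANDIDATE_CODES
        if not hits:
            return False
        # If 250.8 is the only hit, require absence of the predetermined co-diagnoses.
        if hits == {"250.8"} and codes & EXCLUDE_WITH_250_8:
            return False
        return True

    def ppv(visits):
        """visits: iterable of (codes, chart_confirmed_bool) pairs."""
        flagged = [(codes, truth) for codes, truth in visits if is_candidate(codes)]
        if not flagged:
            return float("nan")
        return sum(truth for _, truth in flagged) / len(flagged)

    example = [(["250.8", "401.9"], True),
               (["250.8", "250.1"], False),   # excluded by the co-diagnosis rule
               (["251.2"], True),
               (["780.79"], False)]
    print("PPV on example visits:", ppv(example))
    ```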

  13. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  14. A Deep Learning Algorithm of Neural Network for the Parameterization of Typhoon-Ocean Feedback in Typhoon Forecast Models

    NASA Astrophysics Data System (ADS)

    Jiang, Guo-Qing; Xu, Jing; Wei, Jun

    2018-04-01

    Two algorithms based on machine learning neural networks are proposed—the shallow learning (S-L) and deep learning (D-L) algorithms—that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge for existing SSTC algorithms in forecast models is how to accurately predict the SSTC induced by an upcoming typhoon, which requires information not only from historical data but, more importantly, also from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to correctly represent the physical typhoon-ocean interaction. It tends to produce an unstable SSTC distribution, for which any perturbations may lead to changes in both SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors being separated in different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. Therefore, it produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithm reduces maximum wind intensity errors by 60-70% for four case study simulations, compared to their atmosphere-only model runs.
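
    A minimal sketch of the layer-separation idea follows: atmospheric and oceanic predictors pass through separate hidden layers before being merged to predict SSTC. The predictor choices, layer sizes and (untrained) weights are illustrative assumptions and do not reproduce the authors' 4 × 5 configuration or training procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def relu(x):
        return np.maximum(x, 0.0)

    def dl_style_forward(x_atm, x_ocn, params):
        """Toy forward pass: atmospheric and oceanic inputs go through separate
        hidden layers before being merged to predict SST cooling (deg C).
        Structural illustration only; no training is shown."""
        h_atm = relu(x_atm @ params["W_atm"] + params["b_atm"])
        h_ocn = relu(x_ocn @ params["W_ocn"] + params["b_ocn"])
        merged = np.concatenate([h_atm, h_ocn], axis=-1)
        return merged @ params["W_out"] + params["b_out"]

    n_atm, n_ocn, hidden = 3, 2, 4   # e.g. wind, translation speed, size | mixed-layer depth, eddy SSH
    params = {
        "W_atm": rng.normal(0, 0.1, (n_atm, hidden)), "b_atm": np.zeros(hidden),
        "W_ocn": rng.normal(0, 0.1, (n_ocn, hidden)), "b_ocn": np.zeros(hidden),
        "W_out": rng.normal(0, 0.1, (2 * hidden, 1)), "b_out": np.zeros(1),
    }
    x_atm = np.array([[35.0, 5.0, 200.0]])   # illustrative typhoon predictors
    x_ocn = np.array([[40.0, -0.1]])         # illustrative ocean predictors
    print("predicted SSTC:", dl_style_forward(x_atm, x_ocn, params))
    ```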

  15. 49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2014-10-01 2014-10-01 false Requirements for incomplete vehicle manufacturers...

  16. 49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2011-10-01 2011-10-01 false Requirements for incomplete vehicle manufacturers...

  17. 49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2010-10-01 2010-10-01 false Requirements for incomplete vehicle manufacturers...

  18. 49 CFR 529.4 - Requirements for incomplete automobile manufacturers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 6 2012-10-01 2012-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...

  19. 49 CFR 529.4 - Requirements for incomplete automobile manufacturers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...

  20. 49 CFR 529.4 - Requirements for incomplete automobile manufacturers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 6 2014-10-01 2014-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...

  1. 49 CFR 529.4 - Requirements for incomplete automobile manufacturers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 6 2013-10-01 2013-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...

  2. 49 CFR 529.4 - Requirements for incomplete automobile manufacturers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 6 2011-10-01 2011-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...

  3. State of reporting of primary biomedical research: a scoping review protocol

    PubMed Central

    Mbuagbaw, Lawrence; Samaan, Zainab; Jin, Yanling; Nwosu, Ikunna; Levine, Mitchell A H; Adachi, Jonathan D; Thabane, Lehana

    2017-01-01

    Introduction Incomplete or inconsistent reporting remains a major concern in the biomedical literature. Incomplete or inconsistent reporting may render the published findings unreliable, irreproducible or sometimes misleading. In this study, based on evidence from systematic reviews and surveys that have evaluated reporting issues in primary biomedical studies, we aim to conduct a scoping review focusing on (1) the state-of-the-art extent of adherence to the emerging reporting guidelines in primary biomedical research, (2) the inconsistency between protocols or registrations and full reports and (3) the disagreement between abstracts and full-text articles. Methods and analyses We will use a comprehensive search strategy to retrieve all available and eligible systematic reviews and surveys in the literature. We will search the following electronic databases: Web of Science, Excerpta Medica Database (EMBASE), MEDLINE and Cumulative Index to Nursing and Allied Health Literature (CINAHL). Our outcomes are levels of adherence to reporting guidelines, levels of consistency between protocols or registrations and full reports and the agreement between abstracts and full reports, all of which will be expressed as percentages, quality scores or categorised ratings (such as high, medium and low). No pooled analyses will be performed quantitatively given the heterogeneity of the included systematic reviews and surveys. Likewise, factors associated with improved completeness and consistency of reporting will be summarised qualitatively. The quality of the included systematic reviews will be evaluated using AMSTAR (a measurement tool to assess systematic reviews). Ethics and dissemination All findings will be published in peer-reviewed journals and relevant conferences. These results may advance our understanding of the extent of incomplete and inconsistent reporting, factors related to improved completeness and consistency of reporting and potential recommendations for various stakeholders in the biomedical community. PMID:28360252

  4. Specialist Endoscopists Are Associated with a Decreased Risk of Incomplete Polyp Resection During Endoscopic Mucosal Resection in the Colon.

    PubMed

    Tavakkoli, Anna; Law, Ryan J; Bedi, Aarti O; Prabhu, Anoop; Hiatt, Tadd; Anderson, Michelle A; Wamsteker, Erik J; Elmunzer, B Joseph; Piraka, Cyrus R; Scheiman, James M; Elta, Grace H; Kwon, Richard S

    2017-09-01

    Endoscopic experience is known to correlate with outcomes of endoscopic mucosal resection (EMR), particularly complete resection of the polyp tissue. Whether specialist endoscopists can protect against incomplete polypectomy in the setting of known risk factors for incomplete resection (IR) is unknown. We aimed to characterize how specialist endoscopists may help to mitigate the risk of IR of large sessile polyps. This is a retrospective cohort study of patients who underwent EMR at the University of Michigan from January 1, 2006, to November 15, 2015. The primary outcome was endoscopist-reported polyp tissue remaining at the end of the initial EMR attempt. Specialist endoscopists were defined as endoscopists who receive tertiary referrals for difficult colonoscopy cases and completed at least 20 EMR colonic polyp resections over the study period. A total of 257 patients with 269 polyps were included in the study. IR occurred in 40 (16%) cases. IR was associated with polyp size ≥ 40 mm [adjusted odds ratio (aOR) 3.31, 95% confidence interval (CI) 1.38-7.93], flat/laterally spreading polyps (aOR 2.61, 95% CI 1.24-5.48), and difficulty lifting the polyp (aOR 11.0, 95% CI 2.66-45.3). A specialist endoscopist performing the initial EMR was protective against IR, even in the setting of risk factors for IR (aOR 0.13, 95% CI 0.04-0.41). IR is associated with polyp size ≥ 40 mm, flat and/or laterally spreading polyps, and difficulty lifting the polyp. A specialist endoscopist initiating the EMR was protective of IR.

  5. High ambient temperature and risk of intestinal obstruction in cystic fibrosis.

    PubMed

    Ooi, Chee Y; Jeyaruban, Christina; Lau, Jasmine; Katz, Tamarah; Matson, Angela; Bell, Scott C; Adams, Susan E; Krishnan, Usha

    2016-04-01

    Distal intestinal obstruction syndrome (DIOS) and constipation in cystic fibrosis (CF) are conditions associated with impaction and/or obstruction by abnormally viscid mucofaecal material within the intestinal lumen. Dehydration has been proposed as a risk factor for DIOS and constipation in CF. The study primarily aimed to determine whether warmer ambient temperature and lower rainfall are risk factors for DIOS and constipation in CF. Hospitalisations for DIOS (incomplete or complete) and/or constipation were retrospectively identified (2000-2012). Genotype, phenotype, temperatures and rainfall data (for the week preceding and season of hospitalisation) were collected. Twenty-seven DIOS (59.3% incomplete; 40.7% complete) and 44 constipation admissions were identified. All admitted patients were pancreatic insufficient. Meconium ileus was significantly more likely in DIOS than constipation (64.7% vs. 33.3%; P = 0.038) and in complete than incomplete DIOS (100% vs. 57.1%; P = 0.04). The maximum temperature of the week before DIOS admission (mean (standard deviation) = 28.0 (5.8) °C) was significantly higher than the maximum temperature of the season of admission (25.2 (3.4) °C; P = 0.002). Similarly, the maximum temperature of the week before hospitalisation for constipation (mean (standard deviation) = 27.9 (6.3) °C) was significantly warmer compared with the season of admission (24.0 (4.1) °C; P < 0.0001). There were no significant differences between levels of rainfall during the week before hospitalisation and the season of admission for both DIOS and constipation. Relatively high ambient temperature may play a role in the pathogenesis of DIOS and constipation in CF. © 2016 The Authors Journal of Paediatrics and Child Health © 2016 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  6. Oversimplifying quantum factoring.

    PubMed

    Smolin, John A; Smith, Graeme; Vargo, Alexander

    2013-07-11

    Shor's quantum factoring algorithm exponentially outperforms known classical methods. Previous experimental implementations have used simplifications dependent on knowing the factors in advance. However, as we show here, all composite numbers admit simplification of the algorithm to a circuit equivalent to flipping coins. The difficulty of a particular experiment therefore depends on the level of simplification chosen, not the size of the number factored. Valid implementations should not make use of the answer sought.

  7. A cluster analysis on road traffic accidents using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Saharan, Sabariah; Baragona, Roberto

    2017-04-01

    The analysis of road traffic accidents is increasingly important because of the cost of accidents and public road safety. The availability of large data sets makes the study of factors that affect the frequency and severity of accidents viable. However, the data are often highly unbalanced and overlapped. We deal with the data set of the road traffic accidents recorded in Christchurch, New Zealand, from 2000-2009, with a total of 26440 accidents. The data are binary, with 50 factors describing road traffic accidents and four levels of severity. We used a genetic algorithm for the analysis because we are in the presence of a large unbalanced data set, and standard clustering methods such as the k-means algorithm may not be suitable for the task. The genetic clustering for unknown K (GCUK) algorithm has been used to identify the factors associated with accidents of different levels of severity. The results provided us with an interesting insight into the relationship between factors and accident severity level and suggest that the two main factors that contribute to fatal accidents are "Speed greater than 60 km/h" and "Did not see other people until it was too late". A comparison with the k-means algorithm and independent component analysis is performed to validate the results.

  8. Protein Structure Prediction with Evolutionary Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further, we empirically evaluated the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.

  9. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors

    PubMed Central

    2017-01-01

    The Annual Crop Planning (ACP) problem was a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic algorithm is investigated which is called the enhanced Best Performance Algorithm (eBPA). eBPA’s results are compared against two well-known local search metaheuristic algorithms; these include Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems. PMID:28792495

  10. Demonstration of a compiled version of Shor's quantum factoring algorithm using photonic qubits.

    PubMed

    Lu, Chao-Yang; Browne, Daniel E; Yang, Tao; Pan, Jian-Wei

    2007-12-21

    We report an experimental demonstration of a compiled version of Shor's algorithm using four photonic qubits. We choose the simplest instance of this algorithm, that is, factorization of N=15 in the case that the period r=2, and exploit a simplified linear optical network to coherently implement the quantum circuits of the modular exponentiation and the semiclassical quantum Fourier transformation. During this computation, genuine multiparticle entanglement is observed, which well supports its quantum nature. This experiment represents an essential step toward full realization of Shor's algorithm and scalable linear optics quantum computation.
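
    For orientation, the snippet below runs the purely classical number-theoretic post-processing that follows the (quantum) order-finding step of Shor's algorithm, applied to the N=15, r=2 instance mentioned above. No quantum circuit is simulated, and the helper names are hypothetical.

    ```python
    from math import gcd

    def multiplicative_order(a, n):
        """Smallest r > 0 with a**r % n == 1 (brute force; this is the step a
        quantum computer would perform via order finding)."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_postprocess(n, a):
        """Classical post-processing of Shor's algorithm for a chosen base a."""
        if gcd(a, n) != 1:
            return gcd(a, n), n // gcd(a, n)        # lucky guess already factors n
        r = multiplicative_order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None                             # bad base, retry with another a
        p = gcd(pow(a, r // 2) - 1, n)
        q = gcd(pow(a, r // 2) + 1, n)
        return p, q

    # N = 15 with base a = 11 has period r = 2, the "easy" instance discussed above.
    print(shor_postprocess(15, 11))   # -> (5, 3)
    ```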

  11. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate the elements of the matrix, and we present results based on real projection data. PMID:26575482
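
    Each row of such a system matrix holds the intersection lengths of one ray with the image pixels. Below is a compact 2D sketch of Siddon-style parametric ray tracing on a unit-pixel grid; the grid size, geometry and function names are illustrative assumptions, and a production implementation (and its 3D form) involves considerably more bookkeeping.

    ```python
    import numpy as np

    def siddon_2d(p0, p1, nx, ny):
        """Intersection lengths of the segment p0 -> p1 with a unit-pixel grid
        covering [0, nx] x [0, ny]. Returns {(ix, iy): length}. 2D illustration;
        the 3D case adds one more family of planes."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        ray_length = np.hypot(d[0], d[1])
        alphas = [0.0, 1.0]
        for axis, n in ((0, nx), (1, ny)):
            if d[axis] != 0.0:                      # skip planes parallel to the ray
                alphas.extend((np.arange(n + 1) - p0[axis]) / d[axis])
        alphas = np.unique(np.clip(alphas, 0.0, 1.0))
        weights = {}
        for a0, a1 in zip(alphas[:-1], alphas[1:]):
            mid = p0 + 0.5 * (a0 + a1) * d          # segment midpoint identifies the pixel
            ix, iy = int(np.floor(mid[0])), int(np.floor(mid[1]))
            if 0 <= ix < nx and 0 <= iy < ny:
                weights[(ix, iy)] = weights.get((ix, iy), 0.0) + (a1 - a0) * ray_length
        return weights

    # One row of a (tiny) system matrix: a diagonal ray through a 4x4 image.
    row = siddon_2d((0.0, 0.5), (4.0, 3.5), nx=4, ny=4)
    print(row, "sum of lengths =", sum(row.values()))
    ```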

  12. Incomplete Data in Smart Grid: Treatment of Values in Electric Vehicle Charging Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majipour, Mostafa; Chu, Peter; Gadh, Rajit

    2014-11-03

    In this paper, five imputation methods, namely Constant (zero), Mean, Median, Maximum Likelihood, and Multiple Imputation, have been applied to compensate for missing values in Electric Vehicle (EV) charging data. The outcome of each of these methods has been used as the input to a prediction algorithm to forecast the EV load in the next 24 hours at each individual outlet. The data are real-world, outlet-level data from the UCLA campus parking lots. Given the sparsity of the data, both Median and Constant (=zero) imputations improved the prediction results. Since in most missing-value cases in our database all values of that instance are missing, the multivariate imputation methods did not improve the results significantly compared to univariate approaches.
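
    For illustration, the snippet below applies the three univariate strategies (constant zero, mean, median) to a synthetic outlet-level load series with artificial gaps; maximum likelihood and multiple imputation require a model of the joint distribution and are omitted. The data and names are hypothetical, not the UCLA dataset.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)

    # Hypothetical hourly charging load (kWh) at one outlet, with artificial gaps.
    load = pd.Series(rng.gamma(shape=2.0, scale=1.5, size=24 * 7))
    load[rng.choice(load.index.to_numpy(), size=30, replace=False)] = np.nan

    imputed = {
        "constant_zero": load.fillna(0.0),
        "mean": load.fillna(load.mean()),
        "median": load.fillna(load.median()),
    }
    for name, series in imputed.items():
        print(f"{name:13s} -> mean of imputed series = {series.mean():.3f}")
    ```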

  13. Esophageal achalasia: current diagnosis and treatment.

    PubMed

    Schlottmann, Francisco; Patti, Marco G

    2018-05-27

    Esophageal achalasia is a primary esophageal motility disorder of unknown origin, characterized by lack of peristalsis and by incomplete or absent relaxation of the lower esophageal sphincter in response to swallowing. The goal of treatment is to eliminate the functional obstruction at the level of the gastroesophageal junction. Areas covered: This comprehensive review will evaluate the current literature, illustrating the diagnostic evaluation and providing an evidence-based treatment algorithm for this disease. Expert commentary: Today we have three very effective therapeutic modalities to treat patients with achalasia - pneumatic dilatation, per-oral endoscopic myotomy and laparoscopic Heller myotomy with fundoplication. Treatment should be tailored to the individual patient, in centers where a multidisciplinary approach is available. Esophageal resection should be considered as a last resort for patients who have failed prior therapeutic attempts.

  14. Generation of three-dimensional delaunay meshes from weakly structured and inconsistent data

    NASA Astrophysics Data System (ADS)

    Garanzha, V. A.; Kudryavtseva, L. N.

    2012-03-01

    A method is proposed for the generation of three-dimensional tetrahedral meshes from incomplete, weakly structured, and inconsistent data describing a geometric model. The method is based on the construction of a piecewise smooth scalar function defining the body so that its boundary is the zero isosurface of the function. Such implicit description of three-dimensional domains can be defined analytically or can be constructed from a cloud of points, a set of cross sections, or a "soup" of individual vertices, edges, and faces. By applying Boolean operations over domains, simple primitives can be combined with reconstruction results to produce complex geometric models without resorting to specialized software. Sharp edges and conical vertices on the domain boundary are reproduced automatically without using special algorithms. Refs. 42. Figs. 25.

  15. Discrete geometric analysis of message passing algorithm on graphs

    NASA Astrophysics Data System (ADS)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, exact computation is generally intractable because of its cost. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and possibly exhibits oscillatory and non-convergent behaviors. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems including the (non)convexity of the Bethe free energy, and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
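
    A compact sum-product LBP sketch on a pairwise model is given below to make the iteration concrete: it is exact when the graph is a tree and only approximate, and possibly non-convergent, on graphs with cycles. Message scheduling, damping and the full factor-graph generality discussed in the thesis are omitted; all names and the toy example are assumptions.

    ```python
    import numpy as np

    def loopy_bp(unary, pair, edges, n_iter=50):
        """Sum-product loopy belief propagation on a pairwise model.

        unary[i]     : (k_i,) non-negative potential for variable i
        pair[(i, j)] : (k_i, k_j) potential for edge (i, j), indexed [x_i, x_j]
        Returns approximate marginals (exact if the graph is a tree)."""
        neighbors = {i: [] for i in unary}
        msgs = {}
        for i, j in edges:
            neighbors[i].append(j)
            neighbors[j].append(i)
            msgs[(i, j)] = np.ones(len(unary[j])) / len(unary[j])
            msgs[(j, i)] = np.ones(len(unary[i])) / len(unary[i])
        for _ in range(n_iter):
            new = {}
            for (i, j) in msgs:
                # Product of i's unary potential and incoming messages, excluding j's.
                b = unary[i].copy()
                for k in neighbors[i]:
                    if k != j:
                        b = b * msgs[(k, i)]
                psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
                m = b @ psi                      # marginalize over x_i
                new[(i, j)] = m / m.sum()
            msgs = new
        beliefs = {}
        for i in unary:
            b = unary[i].copy()
            for k in neighbors[i]:
                b = b * msgs[(k, i)]
            beliefs[i] = b / b.sum()
        return beliefs

    # Toy example: three binary variables on a cycle with attractive couplings.
    unary = {0: np.array([1.0, 2.0]), 1: np.array([1.0, 1.0]), 2: np.array([2.0, 1.0])}
    coupling = np.array([[2.0, 1.0], [1.0, 2.0]])
    edges = [(0, 1), (1, 2), (2, 0)]
    pair = {e: coupling for e in edges}
    print(loopy_bp(unary, pair, edges))
    ```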

  16. Logic-Based and Cellular Pharmacodynamic Modeling of Bortezomib Responses in U266 Human Myeloma Cells

    PubMed Central

    Chudasama, Vaishali L.; Ovacik, Meric A.; Abernethy, Darrell R.

    2015-01-01

    Systems models of biological networks show promise for informing drug target selection/qualification, identifying lead compounds and factors regulating disease progression, rationalizing combinatorial regimens, and explaining sources of intersubject variability and adverse drug reactions. However, most models of biological systems are qualitative and are not easily coupled with dynamical models of drug exposure-response relationships. In this proof-of-concept study, logic-based modeling of signal transduction pathways in U266 multiple myeloma (MM) cells is used to guide the development of a simple dynamical model linking bortezomib exposure to cellular outcomes. Bortezomib is a commonly used first-line agent in MM treatment; however, knowledge of the signal transduction pathways regulating bortezomib-mediated cell cytotoxicity is incomplete. A Boolean network model of 66 nodes was constructed that includes major survival and apoptotic pathways and was updated using responses to several chemical probes. Simulated responses to bortezomib were in good agreement with experimental data, and a reduction algorithm was used to identify key signaling proteins. Bortezomib-mediated apoptosis was not associated with suppression of nuclear factor κB (NFκB) protein inhibition in this cell line, which contradicts a major hypothesis of bortezomib pharmacodynamics. A pharmacodynamic model was developed that included three critical proteins (phospho-NFκB, BclxL, and cleaved poly (ADP ribose) polymerase). Model-fitted protein dynamics and cell proliferation profiles agreed with experimental data, and the model-predicted IC50 (3.5 nM) is comparable to the experimental value (1.5 nM). The cell-based pharmacodynamic model successfully links bortezomib exposure to MM cellular proliferation via protein dynamics, and this model may show utility in exploring bortezomib-based combination regimens. PMID:26163548
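
    To illustrate what a logic-based (Boolean network) model looks like in code, the toy below synchronously updates a hypothetical three-node network until it reaches a fixed point. The rules are invented for illustration only and bear no relation to the 66-node model of the study.

    ```python
    from itertools import product

    # Hypothetical 3-node toy rules (not the paper's 66-node model):
    # survival signal S, apoptosis A, drug D (bortezomib treated as a constant input).
    rules = {
        "S": lambda s: (not s["D"]) and s["S"],   # drug shuts down survival signalling
        "A": lambda s: s["D"] and not s["S"],     # apoptosis needs drug and loss of survival
        "D": lambda s: s["D"],                    # external input, held constant
    }

    def synchronous_update(state):
        return {node: bool(rule(state)) for node, rule in rules.items()}

    def attractor(state, max_steps=50):
        """Iterate synchronous updates until a fixed point (or give up)."""
        for _ in range(max_steps):
            nxt = synchronous_update(state)
            if nxt == state:
                return state
            state = nxt
        return None   # cyclic attractor or not converged within max_steps

    for s0, d in product([False, True], repeat=2):
        start = {"S": s0, "A": False, "D": d}
        print(start, "->", attractor(start))
    ```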

  17. Risk Analyses of Pressure Ulcer in Tetraplegic Spinal Cord-Injured Persons: A French Long-Term Survey.

    PubMed

    Le Fort, Marc; Espagnacq, Maude; Perrouin-Verbe, Brigitte; Ravaud, Jean-François

    2017-09-01

    To identify the long-term clinical, individual, and social risk factors for the development of pressure ulcers (PUs) in traumatic spinal cord-injured persons with tetraplegia (TSCIt). Cohort survey with self-applied questionnaires in 1995 and 2006. Thirty-five French-speaking European physical medicine and rehabilitation centers participating in the Tetrafigap surveys. Tetraplegic adults (N=1641) were surveyed after an initial posttraumatic period of at least 2 years. Eleven years later, a follow-up was done for 1327 TSCIt, among whom 221 had died and 547 could be surveyed again. Not applicable. The proportion of PUs documented at the various defined time points, relative to the medical and social situations of the TSCIt, by using univariate analyses followed by logistic regression. Of the participants, 73.4% presented with a PU during at least 1 period after their injury. Four factors had an effect on the occurrence of PUs in the long-term. Protective features for this population were incomplete motor impairment (odds ratio, 0.5) and the ability to walk (odds ratio, 0.2), whereas a strong predictive factor was the development of a PU during the initial posttrauma phase (odds ratio, 2.7). Finally, a significant situational factor was the lack of a social network (odds ratio, 3.1). We believe that the highlighting of a motor incomplete feature of SCI (protective against the development of a PU) and of a medical risk factor, an early PU (which served as a definitive marker of the trajectory of TSCIt), together with a social situational factor, indicates the crucial role of initial management and long-term follow-up. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  18. ROBNCA: robust network component analysis for recovering transcription factor activities.

    PubMed

    Noor, Amina; Ahmad, Aitzaz; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem

    2013-10-01

    Network component analysis (NCA) is an efficient method of reconstructing the transcription factor activity (TFA), which makes use of the gene expression data and prior information available about transcription factor (TF)-gene regulations. Most of the contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation and/or amount of outliers in case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable with that of FastNCA, and is hundreds of times faster than NI-NCA. The ROBNCA software is available at http://people.tamu.edu/∼amina/ROBNCA

  19. Interpreting Chromosome Aberration Spectra

    NASA Technical Reports Server (NTRS)

    Levy, Dan; Reeder, Christopher; Loucas, Bradford; Hlatky, Lynn; Chen, Allen; Cornforth, Michael; Sachs, Rainer

    2007-01-01

    Ionizing radiation can damage cells by breaking both strands of DNA in multiple locations, essentially cutting chromosomes into pieces. The cell has enzymatic mechanisms to repair such breaks; however, these mechanisms are imperfect and, in an exchange process, may produce a large-scale rearrangement of the genome, called a chromosome aberration. Chromosome aberrations are important in killing cells, during carcinogenesis, in characterizing repair/misrepair pathways, in retrospective radiation biodosimetry, and in a number of other ways. DNA staining techniques such as mFISH ( multicolor fluorescent in situ hybridization) provide a means for analyzing aberration spectra by examining observed final patterns. Unfortunately, an mFISH observed final pattern often does not uniquely determine the underlying exchange process. Further, resolution limitations in the painting protocol sometimes lead to apparently incomplete final patterns. We here describe an algorithm for systematically finding exchange processes consistent with any observed final pattern. This algorithm uses aberration multigraphs, a mathematical formalism that links the various aspects of aberration formation. By applying a measure to the space of consistent multigraphs, we will show how to generate model-specific distributions of aberration processes from mFISH experimental data. The approach is implemented by software freely available over the internet. As a sample application, we apply these algorithms to an aberration data set, obtaining a distribution of exchange cycle sizes, which serves to measure aberration complexity. Estimating complexity, in turn, helps indicate how damaging the aberrations are and may facilitate identification of radiation type in retrospective biodosimetry.

  20. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
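
    The cumulative-distribution-plus-Monte-Carlo construction mentioned above can be illustrated by generic inverse-transform sampling from an empirical CDF, as sketched below; the toy redshift distribution and all names are assumptions rather than the paper's exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def empirical_cdf(z_values):
        z_sorted = np.sort(np.asarray(z_values))
        cdf = np.arange(1, z_sorted.size + 1) / z_sorted.size
        return z_sorted, cdf

    def sample_from_cdf(z_sorted, cdf, n_samples):
        """Inverse-transform sampling: draw u ~ U(0,1) and map it through the CDF."""
        u = rng.uniform(size=n_samples)
        idx = np.searchsorted(cdf, u)
        return z_sorted[np.clip(idx, 0, z_sorted.size - 1)]

    # Toy photo-z point estimates standing in for a training-set distribution.
    z_train = rng.normal(loc=0.5, scale=0.1, size=2000).clip(0.0, None)
    z_sorted, cdf = empirical_cdf(z_train)
    z_mc = sample_from_cdf(z_sorted, cdf, n_samples=5000)
    print("sampled mean/std:", z_mc.mean(), z_mc.std())
    ```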

  1. Development of a Compton camera for prompt-gamma medical imaging

    NASA Astrophysics Data System (ADS)

    Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.

    2017-11-01

    A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to be capable of reconstructing the photon source origin not only from the Compton scattering kinematics of the primary photon, but also of tracking the secondary Compton-scattered electrons, thus enabling γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits an excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows for reconstructing the photon interaction position using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
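
    A minimal sketch of k-NN position reconstruction from a reference library of light-amplitude patterns is shown below; the synthetic light model, grid spacing, distance metric and k are assumptions, and the CAP refinement is not implemented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def toy_light_pattern(x, y, n_anodes=8, noise=0.02):
        """Synthetic stand-in for a multi-anode light amplitude distribution."""
        gx, gy = np.meshgrid(np.linspace(0, 1, n_anodes), np.linspace(0, 1, n_anodes))
        amp = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / 0.02).ravel()
        amp += rng.normal(0, noise, amp.size)
        return amp / amp.sum()

    # Reference library: patterns recorded on a fine 2D grid of known positions.
    grid = [(x, y) for x in np.linspace(0.1, 0.9, 17) for y in np.linspace(0.1, 0.9, 17)]
    library = np.array([toy_light_pattern(x, y) for x, y in grid])
    positions = np.array(grid)

    def knn_position(pattern, k=10):
        """Average the positions of the k nearest reference patterns (Euclidean distance)."""
        d = np.linalg.norm(library - pattern, axis=1)
        nearest = np.argsort(d)[:k]
        return positions[nearest].mean(axis=0)

    true_xy = (0.42, 0.67)
    print("true:", true_xy, "reconstructed:", knn_position(toy_light_pattern(*true_xy)))
    ```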

  2. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations as well as in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, either using magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with cuts in the r-band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte-Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.

  3. PCM-SABRE: a platform for benchmarking and comparing outcome prediction methods in precision cancer medicine.

    PubMed

    Eyal-Altman, Noah; Last, Mark; Rubin, Eitan

    2017-01-17

    Numerous publications attempt to predict cancer survival outcome from gene expression data using machine-learning methods. A direct comparison of these works is challenging for the following reasons: (1) inconsistent measures used to evaluate the performance of different models, and (2) incomplete specification of critical stages in the process of knowledge discovery. There is a need for a platform that would allow researchers to replicate previous works and to test the impact of changes in the knowledge discovery process on the accuracy of the induced models. We developed the PCM-SABRE platform, which supports the entire knowledge discovery process for cancer outcome analysis. PCM-SABRE was developed using KNIME. By using PCM-SABRE to reproduce the results of previously published works on breast cancer survival, we define a baseline for evaluating future attempts to predict cancer outcome with machine learning. We used PCM-SABRE to replicate previous work that describes predictive models of breast cancer recurrence, and tested the performance of all possible combinations of feature selection methods and data mining algorithms that were used in either of the works. We reconstructed the work of Chou et al., observing similar trends - superior performance of the Probabilistic Neural Network (PNN) and logistic regression (LR) algorithms and an inconclusive impact of feature pre-selection with the decision tree algorithm on subsequent analysis. PCM-SABRE is a software tool that provides an intuitive environment for rapid development of predictive models in cancer precision medicine.

  4. Security Data Warehouse Application

    NASA Technical Reports Server (NTRS)

    Vernon, Lynn R.; Hennan, Robert; Ortiz, Chris; Gonzalez, Steve; Roane, John

    2012-01-01

    The Security Data Warehouse (SDW) is used to aggregate and correlate all JSC IT security data. This includes IT asset inventory such as operating systems and patch levels, users, user logins, remote access dial-in and VPN, and vulnerability tracking and reporting. The correlation of this data allows for an integrated understanding of current security issues and systems by providing this data in a format that associates it to an individual host. The cornerstone of the SDW is its unique host-mapping algorithm that has undergone extensive field tests, and provides a high degree of accuracy. The algorithm comprises two parts. The first part employs fuzzy logic to derive a best-guess host assignment using incomplete sensor data. The second part is logic to identify and correct errors in the database, based on subsequent, more complete data. Host records are automatically split or merged, as appropriate. The process had to be refined and thoroughly tested before the SDW deployment was feasible. Complexity was increased by adding the dimension of time. The SDW correlates all data with its relationship to time. This lends support to forensic investigations, audits, and overall situational awareness. Another important feature of the SDW architecture is that all of the underlying complexities of the data model and host-mapping algorithm are encapsulated in an easy-to-use and understandable Perl language Application Programming Interface (API). This allows the SDW to be quickly augmented with additional sensors using minimal coding and testing. It also supports rapid generation of ad hoc reports and integration with other information systems.

  5. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e. reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
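
    As a baseline for comparison, a naive band-wise interpolation of a 3x3 spectral mosaic can be written in a few lines, as sketched below: each band's sparse samples are spread with a normalized box filter. This is not the edge-guided or super-resolution approach of the paper; the pattern and data are synthetic.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_3x3_naive(raw, pattern):
        """Naive band-wise demosaicking baseline for a repeating 3x3 filter pattern.

        raw     : (H, W) mosaicked frame
        pattern : (3, 3) integer band indices 0..8
        Returns an (H, W, 9) cube where each band's missing pixels are filled by a
        normalized 5x5 box interpolation of that band's sparse samples."""
        h, w = raw.shape
        tiled = np.tile(pattern, (h // 3 + 1, w // 3 + 1))[:h, :w]
        kernel = np.ones((5, 5))
        cube = np.empty((h, w, 9))
        for band in range(9):
            mask = (tiled == band).astype(float)
            num = convolve2d(raw * mask, kernel, mode="same", boundary="symm")
            den = convolve2d(mask, kernel, mode="same", boundary="symm")
            cube[:, :, band] = num / np.maximum(den, 1e-12)
        return cube

    pattern = np.arange(9).reshape(3, 3)
    raw = np.random.default_rng(5).uniform(size=(12, 12))
    print(demosaic_3x3_naive(raw, pattern).shape)
    ```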

  6. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  7. Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.

    PubMed

    He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej

    2011-12-01

    Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms that directly use level-3 basic linear algebra subprograms are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
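
    For context, the snippet below implements one standard damped multiplicative update for symmetric NMF minimizing ||A - HH^T||_F; it only illustrates the kind of update such algorithms parallelize and is not the paper's exact α-SNMF or β-SNMF rule.

    ```python
    import numpy as np

    def snmf(a, rank, n_iter=500, beta=0.5, eps=1e-12, seed=0):
        """Symmetric NMF: find H >= 0 with A ~ H @ H.T.

        Damped multiplicative update (a common rule for this objective):
            H <- H * ((1 - beta) + beta * (A H) / (H H^T H))"""
        rng = np.random.default_rng(seed)
        h = rng.uniform(0.1, 1.0, size=(a.shape[0], rank))
        for _ in range(n_iter):
            ah = a @ h
            hhth = h @ (h.T @ h)
            h *= (1.0 - beta) + beta * ah / np.maximum(hhth, eps)
        return h

    # Toy similarity matrix with two blocks; the row-wise argmax of H recovers
    # the two groups (up to label permutation).
    blocks = np.zeros((6, 6))
    blocks[:3, :3] = 1.0
    blocks[3:, 3:] = 1.0
    h = snmf(blocks + 0.05, rank=2)
    print("cluster assignment:", h.argmax(axis=1))
    ```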

  8. Treatment of Intravenous Leiomyomatosis with Cardiac Extension following Incomplete Resection.

    PubMed

    Doyle, Mathew P; Li, Annette; Villanueva, Claudia I; Peeceeyen, Sheen C S; Cooper, Michael G; Hanel, Kevin C; Fermanis, Gary G; Robertson, Greg

    2015-01-01

    Aim. Intravenous leiomyomatosis (IVL) with cardiac extension (CE) is a rare variant of benign uterine leiomyoma. Incomplete resection has a recurrence rate of over 30%. Different hormonal treatments have been described following incomplete resection; however, no standard therapy currently exists. We review the literature for medical treatment options following incomplete resection of IVL with CE. Methods. Electronic databases were searched for all studies reporting IVL with CE. These studies were then searched for reports of patients with inoperable or incomplete resection and any further medical treatments. Our database was searched for patients with medical therapy following incomplete resection of IVL with CE, and their results were included. Results. All studies were either case reports or case series. Five literature reviews confirm that surgery is the only treatment to achieve cure. The uses of progesterone, estrogen modulation, gonadotropin-releasing hormone antagonism, and aromatase inhibition have been described following incomplete resection. Currently no studies have reviewed the outcomes of these treatments. Conclusions. Complete surgical resection is the only means of cure for IVL with CE, while multiple hormonal therapies have been used with varying results following incomplete resection. Aromatase inhibitors are the only reported treatment to prevent tumor progression or recurrence in patients with incompletely resected IVL with CE.

  9. Treatment of Intravenous Leiomyomatosis with Cardiac Extension following Incomplete Resection

    PubMed Central

    Doyle, Mathew P.; Li, Annette; Villanueva, Claudia I.; Peeceeyen, Sheen C. S.; Cooper, Michael G.; Hanel, Kevin C.; Fermanis, Gary G.; Robertson, Greg

    2015-01-01

    Aim. Intravenous leiomyomatosis (IVL) with cardiac extension (CE) is a rare variant of benign uterine leiomyoma. Incomplete resection has a recurrence rate of over 30%. Different hormonal treatments have been described following incomplete resection; however, no standard therapy currently exists. We review the literature for medical treatment options following incomplete resection of IVL with CE. Methods. Electronic databases were searched for all studies reporting IVL with CE. These studies were then searched for reports of patients with inoperable or incomplete resection and any further medical treatments. Our database was searched for patients with medical therapy following incomplete resection of IVL with CE, and their results were included. Results. All studies were either case reports or case series. Five literature reviews confirm that surgery is the only treatment to achieve cure. The uses of progesterone, estrogen modulation, gonadotropin-releasing hormone antagonism, and aromatase inhibition have been described following incomplete resection. Currently no studies have reviewed the outcomes of these treatments. Conclusions. Complete surgical resection is the only means of cure for IVL with CE, while multiple hormonal therapies have been used with varying results following incomplete resection. Aromatase inhibitors are the only reported treatment to prevent tumor progression or recurrence in patients with incompletely resected IVL with CE. PMID:26783463

  10. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling under a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratios between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimisation and production algorithm to other complicated systems such as biochemical reactions in solution.
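
    The weighting factors in such tempering schemes are tied to relative free energies of the component temperatures, which the generic (binless) WHAM self-consistency below estimates from energies sampled at several temperatures. This is the textbook iteration applied to a synthetic harmonic toy model, not the adaptive algorithm proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def wham_free_energies(energies, betas, counts, n_sweeps=300):
        """Binless WHAM self-consistency for dimensionless free energies f_k:
            exp(-f_k) = sum_i exp(-beta_k * E_i) / sum_m N_m * exp(f_m - beta_m * E_i)
        with the gauge f_0 = 0. `energies` pools all samples from all temperatures."""
        f = np.zeros(len(betas))
        log_counts = np.log(counts)
        be = betas[:, None] * energies[None, :]          # shape (K, N_total)
        for _ in range(n_sweeps):
            log_denom = np.logaddexp.reduce(log_counts[:, None] + f[:, None] - be, axis=0)
            f = -np.logaddexp.reduce(-be - log_denom[None, :], axis=1)
            f -= f[0]                                    # fix the gauge
        return f

    # Toy model: E = x^2 / 2 sampled at three inverse temperatures (harmonic oscillator),
    # so the exact answer is f_k - f_0 = 0.5 * ln(beta_k / beta_0), roughly [0, -0.11, -0.26].
    betas = np.array([1.0, 0.8, 0.6])
    counts = np.array([2000, 2000, 2000])
    samples = [rng.normal(0.0, 1.0 / np.sqrt(b), size=n) ** 2 / 2 for b, n in zip(betas, counts)]
    energies = np.concatenate(samples)
    print("estimated relative free energies:", wham_free_energies(energies, betas, counts))
    ```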

  11. Spinal cord injuries functional rehabilitation - Traditional approaches and new strategies in physiotherapy.

    PubMed

    de Almeida, Patrícia Maria Duarte

    2006-02-01

    Considering the loss of function of body structures and systems after a spinal cord injury, with its respective activity limitations and restrictions on social participation, the goals of the rehabilitation process are to achieve the maximal functional independence and quality of life allowed by the clinical lesion. This requires a rehabilitation period with a rehabilitation team, including the physiotherapist, whose interventions will depend on factors such as the degree of completeness or incompleteness and the patient's clinical stage. The physiotherapy approach includes several procedures and techniques related either to a traditional model or to the recent perspective of neuronal regeneration. Following a traditional model, the intervention in complete A and incomplete B lesions is based on a compensatory method of functional rehabilitation using the non-affected muscles. In incomplete C and D lesions, motor re-education below the lesion, using key points to facilitate normal and selective patterns of movement, is preferable. If, on the other hand, neuronal regeneration with a corresponding improvement in function is possible, the goals of the physiotherapy approach are to maintain muscular trophism and improve the recruitment of motor units using intensive techniques. In both cases there is no scientific evidence to support the procedures; investigation is lacking and most of the research is methodologically poor. © 2006 Sociedade Portuguesa de Pneumologia/SPP.

  12. Completing the physical representation of quantum algorithms provides a retrocausal explanation of the speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2017-05-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete as it lacks the initial measurement. We extend it to the process of setting the problem. An initial measurement selects a problem setting at random, and a unitary transformation sends it into the desired setting. The extended representation must be with respect to Bob, the problem setter, and any external observer. It cannot be with respect to Alice, the problem solver. It would tell her the problem setting and thus the solution of the problem implicit in it. In the representation to Alice, the projection of the quantum state due to the initial measurement should be postponed until the end of the quantum algorithm. In either representation, there is a unitary transformation between the initial and final measurement outcomes. As a consequence, the final measurement of any ℛ-th part of the solution could select back in time a corresponding part of the random outcome of the initial measurement; the associated projection of the quantum state should be advanced by the inverse of that unitary transformation. This, in the representation to Alice, would tell her, before she begins her problem solving action, that part of the solution. The quantum algorithm should be seen as a sum over classical histories in each of which Alice knows in advance one of the possible ℛ-th parts of the solution and performs the oracle queries still needed to find it - this for the value of ℛ that explains the algorithm's speedup. We have a relation between retrocausality ℛ and the number of oracle queries needed to solve an oracle problem quantumly. All the oracle problems examined can be solved with any value of ℛ up to an upper bound attained by the optimal quantum algorithm. This bound is always in the vicinity of 1/2 . Moreover, ℛ =1/2 always provides the order of magnitude of the number of queries needed to solve the problem in an optimal quantum way. If this were true for any oracle problem, as plausible, it would solve the quantum query complexity problem.

  13. Adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Tianzhen; Wang, Xiumei; Gao, Xinbo

    2018-04-01

    Nowadays, many datasets are represented by multiple views, which usually include shared and complementary information. Multi-view clustering methods integrate the information of multiple views to obtain better clustering results. Nonnegative matrix factorization has become an essential and popular tool in clustering methods because of its interpretability. However, existing nonnegative matrix factorization based multi-view clustering algorithms do not consider the disagreement between views and neglect the fact that different views make different contributions to the data distribution. In this paper, we propose a new multi-view clustering method, named adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization. The proposed algorithm obtains the parts-based representation of multi-view data by nonnegative matrix factorization. Then, pairwise co-regularization is used to measure the disagreement between views. Only one parameter is needed to automatically learn the weight values according to the contribution of each view to the data distribution. Experimental results show that the proposed algorithm outperforms several state-of-the-art algorithms for multi-view clustering.

  14. Computing row and column counts for sparse QR and LU factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.

    2001-01-01

    We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.

  15. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  16. Alzheimer's disease master regulators analysis: search for potential molecular targets and drug repositioning candidates.

    PubMed

    Vargas, D M; De Bastiani, M A; Zimmer, E R; Klamt, F

    2018-06-23

    Alzheimer's disease (AD) is a multifactorial and complex neuropathology that involves impairment of many intricate molecular mechanisms. Despite recent advances, AD pathophysiological characterization remains incomplete, which hampers the development of effective treatments. In fact, currently, there are no effective pharmacological treatments for AD. Integrative strategies such as transcription regulatory network and master regulator analyses exemplify promising new approaches to study complex diseases and may help in the identification of potential pharmacological targets. In this study, we used transcription regulatory network and master regulator analyses on transcriptomic data of human hippocampus to identify transcription factors (TFs) that can potentially act as master regulators in AD. All expression profiles were obtained from the Gene Expression Omnibus database using the GEOquery package. A normal hippocampus transcription factor-centered regulatory network was reconstructed using the ARACNe algorithm. Master regulator analysis and two-tail gene set enrichment analysis were employed to evaluate the inferred regulatory units in AD case-control studies. Finally, we used a connectivity map adaptation to prospect new potential therapeutic interventions by drug repurposing. We identified TFs with already reported involvement in AD, such as ATF2 and PARK2, as well as possible new targets for future investigations, such as CNOT7, CSRNP2, SLC30A9, and TSC22D1. Furthermore, Connectivity Map Analysis adaptation suggested the repositioning of six FDA-approved drugs that can potentially modulate master regulator candidate regulatory units (Cefuroxime, Cyproterone, Dydrogesterone, Metrizamide, Trimethadione, and Vorinostat). Using a transcription factor-centered regulatory network reconstruction we were able to identify several potential molecular targets and six drug candidates for repositioning in AD. Our study provides further support for the use of bioinformatics tools as exploratory strategies in neurodegenerative diseases research, and also provides new perspectives on molecular targets and drug therapies for future investigation and validation in AD.

  17. Recursive dynamics for flexible multibody systems using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1990-01-01

    Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics and possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as for the inverse dynamics, the computation of the mass matrix, and the composite body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems. However, extensions of the algorithms to general topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, in general, it appears that in contrast to the rigid multibody case, the articulated body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies. The variety of algorithms described here permits a user to choose the algorithm which is optimal for the multibody system at hand. The availability of a number of algorithms is even more important for real-time applications, where implementation on parallel processors or custom computing hardware is often necessary to maximize speed.

  18. Cockpit resources management and the theory of the situation

    NASA Technical Reports Server (NTRS)

    Bolman, L.

    1984-01-01

    Cockpit resource management (CRM) and hypothetical cockpit situations are discussed. Four different conditions which influence pilot action are outlined: (1) wrong assumptions about a situation; (2) stress and workload; (3) frustration and delays that lead to risk taking; and (4) ambiguous, incomplete, or contradictory information. Human factors and behavior, and pilot communication and management in the simulator, are outlined.

  19. Predictors of Criminal Charges for Youth in Public Mental Health during the Transition to Adulthood

    ERIC Educational Resources Information Center

    Pullmann, M. D.

    2010-01-01

    Dual involvement with the mental health system and justice system is relatively frequent for young adults with mental health problems, yet the research on factors predictive of dual involvement is incomplete. This study extends past research on predictors of criminal charges for people in the public mental health system in four ways. First, this…

  20. RECEPTOR MODELING OF AMBIENT PARTICULATE MATTER DATA USING POSITIVE MATRIX FACTORIZATION: REVIEW OF EXISTING METHODS

    EPA Science Inventory

    Methods for apportioning sources of ambient particulate matter (PM) using the positive matrix factorization (PMF) algorithm are reviewed. Numerous procedural decisions must be made and algorithmic parameters selected when analyzing PM data with PMF. However, few publications docu...

  1. Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    Item factor analysis (IFA), already well established in educational measurement, is increasingly applied to psychological measurement in research settings. However, high-dimensional confirmatory IFA remains a numerical challenge. The current research extends the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, initially proposed for…

  2. Efficient algorithms for computing a strong rank-revealing QR factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, M.; Eisenstat, S.C.

    1996-07-01

    Given an m x n matrix M with m ≥ n, it is shown that there exists a permutation Π and an integer k such that the QR factorization given by equation (1) reveals the numerical rank of M: the k x k upper-triangular matrix A_k is well conditioned, ||C_k||_2 is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn^2) floating-point operations in the worst case.
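
    For orientation, a hedged sketch of the classical building block these algorithms improve on, QR with column pivoting, used here to estimate numerical rank; it is not the strong RRQR algorithm of the paper, and the tolerance is an arbitrary choice.

        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(0)
        M = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 20))   # rank 8 by construction
        Q, R, piv = qr(M, mode="economic", pivoting=True)
        tol = 1e-10 * abs(R[0, 0])                                # arbitrary relative tolerance
        k = int(np.sum(np.abs(np.diag(R)) > tol))                 # estimated numerical rank
        print("estimated numerical rank:", k)                     # expect 8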

  3. Tree-Structured Infinite Sparse Factor Model

    PubMed Central

    Zhang, XianXing; Dunson, David B.; Carin, Lawrence

    2013-01-01

    A tree-structured multiplicative gamma process (TMGP) is developed, for inferring the depth of a tree-based factor-analysis model. This new model is coupled with the nested Chinese restaurant process, to nonparametrically infer the depth and width (structure) of the tree. In addition to developing the model, theoretical properties of the TMGP are addressed, and a novel MCMC sampler is developed. The structure of the inferred tree is used to learn relationships between high-dimensional data, and the model is also applied to compressive sensing and interpolation of incomplete images. PMID:25279389

  4. Pulsar statistics and their interpretations

    NASA Technical Reports Server (NTRS)

    Arnett, W. D.; Lerche, I.

    1981-01-01

    It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.

  5. Shin splints. Diagnosis, management, prevention.

    PubMed

    Moore, M P

    1988-01-01

    Our knowledge of the etiology of shin splints is incomplete. Biomechanical abnormalities are likely to be major factors in predisposing certain persons to such injury. Also, training errors are major etiologic factors. Because shin splints result from mechanical overload of various elements of the musculoskeletal system of the leg that exceed their adaptive remodeling capacity, rest and recovery should be emphasized as an important aspect of sports training. Accurate and prompt diagnosis reduces the severity and duration of the injury. Management should consist of measures to reduce inflammation and pain and to identify possible biomechanical factors that may be correctable by strengthening and flexibility exercises or by the use of an orthotic device.

  6. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
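
    A sketch of the classical (non-quantum) matrix pencil method that the paper accelerates, assuming the number of exponential components p is known (in practice it would be estimated, e.g. from a singular-value drop-off); function and variable names are illustrative.

        import numpy as np

        def matrix_pencil(y, p, dt):
            # y: uniform samples of a sum of p damped complex exponentials, dt: sample spacing
            N = len(y)
            Y0 = np.array([y[i:i + p] for i in range(N - p)])           # Hankel data matrix
            Y1 = np.array([y[i + 1:i + 1 + p] for i in range(N - p)])   # shifted Hankel matrix
            z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)              # poles z_k = exp((alpha_k + i*2*pi*f_k)*dt)
            return np.angle(z) / (2 * np.pi * dt), np.log(np.abs(z)) / dt

        # two real damped sinusoids -> p = 4 complex exponentials
        dt = 0.01
        t = np.arange(200) * dt
        y = np.exp(-0.5 * t) * np.cos(2 * np.pi * 5 * t) + 0.7 * np.exp(-1.2 * t) * np.cos(2 * np.pi * 12 * t)
        freqs, dampings = matrix_pencil(y, p=4, dt=dt)
        print(np.round(freqs, 2), np.round(dampings, 2))                # about ±5, ±12 Hz and -0.5, -1.2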

  7. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
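
    A compact, generic artificial bee colony sketch on a toy objective, to illustrate the optimizer only; the slice geometry, the Spencer safety-factor evaluation and the multi-slice adjustment step of the paper are omitted, and the hyperparameters are arbitrary.

        import numpy as np

        def abc_minimize(f, lo, hi, n_sources=20, limit=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            dim = len(lo)
            X = rng.uniform(lo, hi, size=(n_sources, dim))    # food sources = candidate solutions
            fx = np.array([f(x) for x in X])
            trials = np.zeros(n_sources, dtype=int)

            def try_neighbor(i):                              # perturb one coordinate toward a random partner
                k = rng.choice([j for j in range(n_sources) if j != i])
                d = rng.integers(dim)
                v = X[i].copy()
                v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
                v = np.clip(v, lo, hi)
                fv = f(v)
                if fv < fx[i]:                                # greedy selection
                    X[i], fx[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1

            for _ in range(iters):
                for i in range(n_sources):                    # employed bee phase
                    try_neighbor(i)
                fit = 1.0 / (1.0 + fx - fx.min())             # fitness for the roulette wheel
                prob = fit / fit.sum()
                for _ in range(n_sources):                    # onlooker bee phase
                    try_neighbor(rng.choice(n_sources, p=prob))
                worst = np.argmax(trials)                     # scout bee phase
                if trials[worst] > limit:
                    X[worst] = rng.uniform(lo, hi, size=dim)
                    fx[worst] = f(X[worst])
                    trials[worst] = 0
            best = np.argmin(fx)
            return X[best], fx[best]

        x_best, f_best = abc_minimize(lambda x: np.sum((x - 1.5) ** 2), lo=np.zeros(4), hi=np.full(4, 5.0))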

  8. The Nature of Self-Regulatory Fatigue and "Ego Depletion": Lessons From Physical Fatigue.

    PubMed

    Evans, Daniel R; Boggero, Ian A; Segerstrom, Suzanne C

    2015-07-30

    Self-regulation requires overriding a dominant response and leads to temporary self-regulatory fatigue. Existing theories of the nature and causes of self-regulatory fatigue highlight physiological substrates such as glucose, or psychological processes such as motivation, but these explanations are incomplete on their own. Historically, theories of physical fatigue demonstrate a similar pattern of useful but incomplete explanations, as recent views of physical fatigue emphasize the roles of both physiological and psychological factors. In addition to accounting for multiple inputs, these newer views also explain how fatigue can occur even in the presence of sufficient resources. Examining these newer theories of physical fatigue can serve as a foundation on which to build a more comprehensive understanding of self-regulatory fatigue that integrates possible neurobiological underpinnings of physical and self-regulatory fatigue, and suggests the possible function of self-regulatory fatigue. © 2015 by the Society for Personality and Social Psychology, Inc.

  9. The nature of self-regulatory fatigue and “ego depletion”: Lessons from physical fatigue

    PubMed Central

    Evans, Daniel R.; Boggero, Ian A.; Segerstrom, Suzanne C.

    2016-01-01

    Self-regulation requires overriding a dominant response, and leads to temporary self-regulatory fatigue. Existing theories of the nature and causes of self-regulatory fatigue highlight physiological substrates such as glucose or psychological processes such as motivation, but these explanations are incomplete on their own. Historically, theories of physical fatigue demonstrate a similar pattern of useful but incomplete explanations, as recent views of physical fatigue emphasize the roles of both physiological and psychological factors. In addition to accounting for multiple inputs, these newer views also explain how fatigue can occur even in the presence of sufficient resources. Examining these newer theories of physical fatigue can serve as a foundation on which to build a more comprehensive understanding of self-regulatory fatigue that integrates possible neurobiological underpinnings of physical and self-regulatory fatigue, and suggests the possible function of self-regulatory fatigue. PMID:26228914

  10. A Study of Incomplete Abortion Following Medical Method of Abortion (MMA).

    PubMed

    Pawde, Anuya A; Ambadkar, Arun; Chauhan, Anahita R

    2016-08-01

    Medical method of abortion (MMA) is a safe, efficient, and affordable method of abortion. However, incomplete abortion is a known side effect. The aims were to study incomplete abortion following MMA, compare it with spontaneous incomplete abortion, and examine referral practices and prescriptions in cases of incomplete abortion following MMA. In this prospective observational study, 100 women with first-trimester incomplete abortion, divided into two groups (spontaneous or following MMA), were administered a questionnaire which included information regarding onset of bleeding, treatment received, use of medications for abortion, its prescription, and administration. The two groups were compared using the Fisher exact test (SPSS 21.0 software). Thirty percent of incomplete abortions were seen following MMA; possible reasons being self-administration or prescription by unregistered practitioners, lack of examination, incorrect dosage and drugs, and lack of follow-up. Complications such as collapse, blood requirement, and fever were significantly higher in these patients compared to the spontaneous abortion group. The complications of incomplete abortion following MMA can be avoided by adherence to standard guidelines. Self-medication, over-the-counter use, and prescription by unregistered doctors should be discouraged and reported, and the need for follow-up should be emphasized.

  11. Estimate of true incomplete exchanges using fluorescence in situ hybridization with telomere probes

    NASA Technical Reports Server (NTRS)

    Wu, H.; George, K.; Yang, T. C.

    1998-01-01

    PURPOSE: To study the frequency of true incomplete exchanges in radiation-induced chromosome aberrations. MATERIALS AND METHODS: Human lymphocytes were exposed to 2 Gy and 5 Gy of gamma-rays. Chromosome aberrations were studied using the fluorescence in situ hybridization (FISH) technique with whole chromosome-specific probes, together with human telomere probes. Chromosomes 2 and 4 were chosen in the present study. RESULTS: The percentage of incomplete exchanges was 27% when telomere signals were not considered. After excluding false incomplete exchanges identified by the telomere signals, the percentage of incomplete exchanges decreased to 11%. Since telomere signals appear on about 82% of the telomeres, the percentage of true incomplete exchanges should be even lower and was estimated to be 3%. This percentage was similar for chromosomes 2 and 4 and for doses of both 2 Gy and 5 Gy. CONCLUSIONS: The percentage of true incomplete exchanges is significantly lower in gamma-irradiated human lymphocytes than the frequencies reported in the literature.

  12. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
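
    The underlying idea, reduced to a toy setting: for a relaxed update x_new = x + alpha*dx, the global residual norm is quadratic in alpha, so the minimizing alpha has a closed form. The sketch below applies this to a Jacobi iteration on a linear system; it is not the SIMPLE/SIMPLEC pressure-correction implementation of the paper.

        import numpy as np

        def residual_minimizing_alpha(A, b, x, dx):
            # argmin over alpha of ||b - A (x + alpha*dx)|| = ||r - alpha*A*dx||
            r = b - A @ x
            Adx = A @ dx
            return float(r @ Adx) / float(Adx @ Adx)

        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 50)) + 50.0 * np.eye(50)     # diagonally dominant test matrix
        b = rng.normal(size=50)
        x = np.zeros(50)
        D = np.diag(A)
        for _ in range(100):
            dx = (b - A @ x) / D                              # Jacobi correction
            x = x + residual_minimizing_alpha(A, b, x, dx) * dx
        print(np.linalg.norm(b - A @ x))                      # residual norm after relaxed iteration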

  13. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    PubMed

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modelled by P-splines whose smoothing penalties are selected with the generalized cross-validation (GCV) criterion. The model is fitted with the Expectation-Maximization (EM) algorithm, and the multiple spline smoothing parameters are selected by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
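
    A rough illustration of the motivating check only, not the authors' P-spline EM with GCV-selected smoothing: fit a one-factor model separately within strata and compare the estimated loadings; large differences suggest the constant-loading assumption is questionable. The constituent names and the synthetic data are hypothetical.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        n = 500
        season = rng.integers(0, 4, size=n)
        latent = rng.normal(size=n)
        true_load = np.array([[0.9, 0.8, 0.5, 0.3],           # one loading vector per season;
                              [0.9, 0.4, 0.5, 0.3],           # only the second constituent varies
                              [0.9, 0.2, 0.5, 0.3],
                              [0.9, 0.6, 0.5, 0.3]])
        X = latent[:, None] * true_load[season] + 0.3 * rng.normal(size=(n, 4))
        df = pd.DataFrame(X, columns=["sulfate", "nitrate", "ammonium", "OC"])
        df["season"] = season

        for s, grp in df.groupby("season"):
            fa = FactorAnalysis(n_components=1).fit(grp.drop(columns="season"))
            load = fa.components_.ravel()
            load = load * np.sign(load[0])                    # fix the sign indeterminacy
            print("season", s, np.round(load, 2))             # loadings drift with season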

  14. A Prospective Phase 2 Multicenter Study for the Efficacy of Radiation Therapy Following Incomplete Transarterial Chemoembolization in Unresectable Hepatocellular Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Chihwan; Koom, Woong Sub; Kim, Tae Hyun

    2014-12-01

    Purpose: The purpose of this study was to investigate the efficacy and toxicity of radiation therapy (RT) following incomplete transarterial chemoembolization (TACE) in unresectable hepatocellular carcinoma (HCC). Methods and Materials: The study was designed as a prospective phase 2 multicenter trial. Patients with unresectable HCC, who had viable tumor after TACE of no more than 3 courses, were eligible. Three-dimensional conformal RT (3D-CRT) was added for HCC treatment with incomplete uptake of iodized oil, and the interval from TACE to RT was 4 to 6 weeks. The primary endpoint of this study was the tumor response after RT following incomplete TACE in unresectable HCC. Secondary endpoints were patterns of failure, progression-free survival (PFS), time to tumor progression (TTP), overall survival (OS) rates at 2 years, and treatment-associated toxicity. Survival was calculated from the start of RT. Results: Between August 2008 and December 2010, 31 patients were enrolled. RT was delivered at a median dose of 54 Gy (range, 46-59.4 Gy) at 1.8 to 2 Gy per fraction. A best objective in-field response rate was achieved in 83.9% of patients, with complete response (CR) in 22.6% of patients and partial response in 61.3% of patients within 12 weeks post-RT. A best objective overall response rate was achieved in 64.5% of patients with CR in 19.4% of patients and PR in 45.1% of patients. The 2-year in-field PFS, PFS, TTP, and OS rates were 45.2%, 29.0%, 36.6%, and 61.3%, respectively. The Barcelona Clinic liver cancer stage was a significant independent prognostic factor for PFS (P=.023). Classic radiation-induced liver disease was not observed. There were no treatment-related deaths or hepatic failure. Conclusions: Early 3D-CRT following incomplete TACE is a safe and practical treatment option for patients with unresectable HCC.

  15. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  16. Limitations and potentials of current motif discovery algorithms

    PubMed Central

    Hu, Jianjun; Li, Bin; Kihara, Daisuke

    2005-01-01

    Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194
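
    A small sketch of the consensus-ensemble idea described in the abstract, not the authors' algorithm: every position covered by a tool's predicted binding-site interval receives a vote, and runs of positions supported by at least min_votes tools are reported as consensus sites.

        def consensus_sites(predictions, seq_len, min_votes=2):
            # predictions: one list of (start, end) intervals per tool, 0-based, end exclusive
            votes = [0] * seq_len
            for tool_sites in predictions:
                for start, end in tool_sites:
                    for pos in range(start, end):
                        votes[pos] += 1
            sites, start = [], None
            for pos in range(seq_len + 1):                    # sweep and merge supported runs
                supported = pos < seq_len and votes[pos] >= min_votes
                if supported and start is None:
                    start = pos
                elif not supported and start is not None:
                    sites.append((start, pos))
                    start = None
            return sites

        # toy usage: three tools, two regions supported by at least two of them
        preds = [[(10, 20), (50, 60)], [(12, 22)], [(55, 65), (80, 90)]]
        print(consensus_sites(preds, seq_len=100))            # [(12, 20), (55, 60)]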

  17. UDU^T covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU^T, where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
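
    A brief sketch of the U-D factorization itself (unit upper-triangular U and diagonal D with P = U D U^T), the quantity these filters propagate in place of the covariance; the measurement- and time-update recursions of the U-D filter are not shown.

        import numpy as np

        def udu_factor(P):
            # returns unit upper-triangular U and diagonal entries d with P = U diag(d) U^T
            n = P.shape[0]
            U = np.eye(n)
            d = np.zeros(n)
            for j in range(n - 1, -1, -1):                    # process columns right to left
                d[j] = P[j, j] - np.sum(U[j, j + 1:] ** 2 * d[j + 1:])
                for i in range(j):
                    U[i, j] = (P[i, j] - np.sum(U[i, j + 1:] * U[j, j + 1:] * d[j + 1:])) / d[j]
            return U, d

        # verify on a random symmetric positive-definite matrix
        rng = np.random.default_rng(0)
        A = rng.normal(size=(5, 5))
        P = A @ A.T + 5.0 * np.eye(5)
        U, d = udu_factor(P)
        print(np.allclose(U @ np.diag(d) @ U.T, P))           # True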

  18. A national physician survey of diagnostic error in paediatrics.

    PubMed

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  19. Accounting of fundamental components of the rotation parameters of the Earth in the formation of a high-accuracy orbit of navigation satellites

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Mikhailov, M. V.; Pochukaev, V. N.

    2012-07-01

    An analysis of perturbing factors influencing the motion of a navigation satellite (NS) is carried out, and the degree of influence of each factor on the GLONASS orbit is estimated. It is found that the fundamental components of the Earth's rotation parameters (ERP) are one substantial factor, commensurable with the maximum perturbations. Algorithms for the calculation of orbital perturbations caused by these parameters are given; these algorithms can be implemented in consumer equipment. The daily prediction of NS coordinates is performed on the basis of real GLONASS satellite ephemerides transmitted to a consumer, using the developed prediction algorithms that take the ERP into account. The resulting accuracy of the daily prediction of GLONASS ephemerides is tens of times better than the accuracy of the daily prediction performed using the algorithms recommended in interface control documents.

  20. Implicit flux-split schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Walters, R. W.; Van Leer, B.

    1985-01-01

    Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.
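
    A hedged sketch of the flux-vector splitting ingredient only, using a common statement of van Leer's splitting for the one-dimensional Euler equations; the implicit relaxation and approximately factored schemes compared in the paper are not reproduced here.

        import numpy as np

        def van_leer_split(rho, u, p, gamma=1.4):
            a = np.sqrt(gamma * p / rho)                      # speed of sound
            M = u / a
            E = p / (gamma - 1.0) + 0.5 * rho * u**2          # total energy per unit volume
            F = np.array([rho * u, rho * u**2 + p, u * (E + p)])   # physical flux
            if M >= 1.0:
                return F, np.zeros(3)
            if M <= -1.0:
                return np.zeros(3), F
            fp1 = 0.25 * rho * a * (M + 1.0) ** 2             # split mass fluxes
            fm1 = -0.25 * rho * a * (M - 1.0) ** 2
            Fp = np.array([fp1,
                           fp1 * ((gamma - 1.0) * u + 2.0 * a) / gamma,
                           fp1 * ((gamma - 1.0) * u + 2.0 * a) ** 2 / (2.0 * (gamma**2 - 1.0))])
            Fm = np.array([fm1,
                           fm1 * ((gamma - 1.0) * u - 2.0 * a) / gamma,
                           fm1 * ((gamma - 1.0) * u - 2.0 * a) ** 2 / (2.0 * (gamma**2 - 1.0))])
            return Fp, Fm

        Fp, Fm = van_leer_split(rho=1.0, u=100.0, p=1.0e5)
        print(Fp + Fm)                                        # recovers the physical flux for subsonic states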
